“Mystery Science Theater 3000 Halloween Party.”
“MST3K Halloween Party”
With some interfaces you can include negative prompts (the AI will try to steer away from images that trigger those prompts).
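The steering idea can be sketched with a toy, stdlib-only illustration (an assumption on my part — the tools discussed here are GUIs, and real negative prompting operates on the model’s latent representations, not on words). A fixed seed makes a run reproducible, and a negative prompt down-weights candidates containing the unwanted feature:

```python
import random

# Toy illustration (NOT a real diffusion model): a fixed seed makes a
# "generation" reproducible, and a negative prompt removes candidate
# outputs that contain the unwanted feature. Real negative prompting
# steers the sampler in latent space, but the idea is the same.
CANDIDATES = [
    "album cover with big block text",
    "moody green landscape, no lettering",
    "abstract demon silhouette at dusk",
]

def generate(seed, negative_prompt=None):
    rng = random.Random(seed)          # same seed -> same picks
    pool = CANDIDATES[:]
    if negative_prompt:
        # crude steering: drop candidates that trigger the negative prompt
        pool = [c for c in pool if negative_prompt not in c]
    return rng.choice(pool)

# Same seed, same result -- which is why rerunning a whole track
# listing with one seed gives a consistent look across images.
assert generate(42) == generate(42)
# With "text" steered away, only text-free candidates remain.
assert "text" not in generate(42, negative_prompt="text")
```

The candidate strings and the `generate` helper are made up for the sketch; the real interfaces expose the same two knobs as a seed field and a negative-prompt field.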
I did the track listing for Gorillaz’s Demon Days. Virtually all of the results were text-heavy until I added “text” as a negative prompt. If I get a choice, I pick one without text:
“Intro”
“Last Living Souls”
“Kids with Guns”
“O Green World”
“Dirty Harry”
“Feel Good, Inc.”
“El Mañana”
“Every Planet We Reach Is Dead”
“November Has Come”
“All Alone”
“White Light”
“Dare”
“Fire Coming Out of the Monkey’s Head”
“Don’t Get Lost in Heaven”
“Demon Days”
I only used two different seed values: one for all of the tracks, and when those turned out text-heavy I redid the whole list with a different seed and the negative prompt. Compare “Intro” and “Dare”: without being allowed to focus on text, it looks like the model just thrashed around and came up with something random based on the seed alone. When I didn’t use the “text” negative prompt I got things like
The ones for “O Green World”, “El Mañana”, “Every Planet”, “November Has Come”, “All Alone”, and especially “White Light” are actually really good.
I don’t think Noodle would be happy with her depiction on “Feel Good, Inc.”, though.
The brown sections on these say “I would have put text at the bottom but you wouldn’t let me so nyah”.
Which interface did you do these with?
which I found nice and straightforward to get going with. One nice thing is that you can queue up a whole bunch of jobs (like a track listing) and let it do its thing without having to babysit it.
There’s also
that has somewhat more features, but by the same token it feels a lot clunkier to use. And I haven’t figured out the thing I really wanted to use it for (inpainting/outpainting) yet.
And for the macOS aficionados, there’s DiffusionBee:
Don’t let the M1 requirement put you off if you have an older machine; there are Intel binaries available on the GitHub site, although not for all releases. However, 1.4.3 was released just yesterday with AMD GPU acceleration in the Intel build, which speeds things up enormously. Pity that it doesn’t have batch-processing support, though.
This biz runs entirely on your own computer without googling images to cheat from? It knows all the knowing already?
They do have to download several gigabytes’ worth of model data on first run, but after that, yes, they’re self-contained. And in-painting, out-painting, and image-to-image let you start with an existing photo and modify it.
Now we can make piles upon piles of pictures of Godzilla in whatever wacky context we like without Big Brother wondering what our deal is. It is indeed the greatest of times.
Software: DiffusionBee 1.4.3 (Stable Diffusion 1.5)
Prompt: “Photo of an injured and bleeding Godzilla about to be attacked by a killer in a hockey mask wielding a machete. Deep Shadows, Hellish Landscape . Cinematic, Realistic, Lens Flare, Horrifying, Detailed and Intricate”
Godzilla with a sword? Ridiculous! Send that AI back to the drawing board!
I think that’s Earl from Dinosaurs.
King Kong had an axe in the latest (very stupid) movie, so why can’t Godzilla have a sword?
I think it misread the prompt and thinks Godzilla is the one who’s supposed to have a machete. The killer seems to be wielding a wand from Diagon Alley, which I suppose in the hands of a skilled Death Eater could inflict the kind of damage depicted.
Godzilla’s expecting? When’s the baby due?
May 20…
…1998.