DALL-E 2 is a popular image generation program created by the OpenAI research laboratory. You’ve probably seen DALL-E 2-generated images around the internet. It’s pretty amazing technology. Someone can type in a prompt and DALL-E 2 will generate an image based on patterns it learned from millions of existing images.
“A photo of an astronaut riding a horse” #dalle pic.twitter.com/4UDwErtEbZ
— OpenAI (@OpenAI) April 6, 2022
There’s one problem though. As is common with AI trained through machine learning, the output produced by DALL-E 2 isn’t very diverse. A prompt of “doctor” usually produces a picture of a white guy in a lab coat because this is what the machine has determined an average doctor looks like based on millions of pre-existing images of doctors.
OpenAI set out to fix this problem. You probably think that took hours of coding. Not really. They apparently just started randomly adding words like “black” and “woman” to user-created prompts without the users knowing, as a group of Twitter sleuths were able to work out.
So OpenAI *did* make a change to DALL-E 2 that does show more diverse images.
Unfortunately, in my testing this further reduces the signal-to-noise of the generated images with respect to prompts. https://t.co/70DCetMFkg
— Max Woolf (@minimaxir) July 18, 2022
Do they just sneakily tack on things like “Asian” or “Woman” to your career prompts every now and then?
— frustrated by ice (@ByFrustrated) July 18, 2022
Unclear what the technical mechanism is; I’d be curious how they do it given DALL-E 2’s architecture.
— Max Woolf (@minimaxir) July 18, 2022
There’s some evidence that’s basically how it works: simply tacking race or gender words onto prompts before returning results. These are results for “a person holding a sign that says”: https://t.co/sH4WXIHpgq https://t.co/3KOpn6lT51
— Andy Baio (@waxpancake) July 18, 2022
Very neat trick to tease this out. Reproduced:
– https://t.co/KnEn0FcyuR
– https://t.co/a8uW7PYKJS
– https://t.co/Xp91j5IO57
I cherry-picked from ~8 generations, since #dalle #dalle2 is adding a different set of word(s) for each generation pic.twitter.com/VeF262sESn
— Richard Zhang (@rzhang88) July 19, 2022
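If that’s really what’s going on, the “fix” doesn’t require touching the model at all; it’s just a bit of text processing bolted on in front of it. Here’s a rough sketch of what that kind of prompt rewriting might look like. This is purely illustrative: the word list, the odds, and the function name are my guesses, not OpenAI’s actual code.

```python
import random

# Hypothetical list of modifier words -- the real terms (and any weighting) are unknown.
DIVERSITY_TERMS = ["black", "woman", "asian", "hispanic", "man"]

# Fraction of requests that get a modifier tacked on. An assumption for illustration only.
APPEND_PROBABILITY = 0.5

def rewrite_prompt(user_prompt: str) -> str:
    """Randomly append a modifier word to some prompts before image generation.

    This mirrors the behavior the Twitter users above inferred: because the word
    is tacked onto the END of the prompt, a prompt like "a person holding a sign
    that says" ends up generating signs that read "black" or "woman".
    """
    if random.random() < APPEND_PROBABILITY:
        return f"{user_prompt} {random.choice(DIVERSITY_TERMS)}"
    return user_prompt

if __name__ == "__main__":
    # Run the same prompt a few times to see the different rewrites.
    for _ in range(5):
        print(rewrite_prompt("a person holding a sign that says"))
```

That’s also why the “sign” trick works as a test: whatever gets silently appended to your prompt shows up written on the sign in the generated image.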
All this just goes to show that liberals will never be able to create real, sentient artificial intelligence because they will always have to handicap machine learning for the sake of diversity.
That’s probably for the best though, as actual sentient AI would just be demonic…
Well I REALLY don’t like how similar all these pictures of “Crungus”, a made-up word I made up, are.
Why are they all the same man? Is the Crungus real? Have I discovered a secret cryptid? pic.twitter.com/KCNUOxPHnP
— Don’t buy The Sun twitch.tv/brainmage (@Brainmage) June 18, 2022