My friend is having a spat with his husband over cheese right now, so I wanted to generate a picture of the situation. The prompt "man eating cheese while his partner looks on angrily" worked on the first try, producing a photo of a woman yelling at a man while he enjoyed his cheese. I changed "partner" to "husband" and was blocked by the filter, which called the prompt "unsafe". I tried a bunch of different variations, but all were deemed "unsafe". Even simplified down to "man angry at husband", I was stonewalled. Every attempt at getting a photo of a man expressing negative emotions at his male partner was blocked. "Male friend"? Fine. "Another man"? Fine. But "boyfriend" and "husband" are both trigger words for the AI. I'm actually offended that the LGBTQIA+-inspired hand-holdy bullshit they put on the AI has wrapped around to being homophobic.
Here's as close as I got.
TL;DR: Kill AI jannies, behead AI jannies, roundhouse kick AI jannies into the dirt.
BTW, if you get this stupid pufferfish telling you there's high demand, it's bullshit. You just asked for something the algo doesn't like, but it wasn't egregious enough to trip the actual warning.
I asked it to generate a person that looked Jewish and like Yasser Arafat. It triggered on the Yasser Arafat part.