Background: DALL-E 2 "improved" the diversity of its prompts by silently adding diversity keywords
tl;dr: it's the diversity stuff. Switch "cowboy" to "cowgirl", which disables the diversity injection because the prompt is now explicitly asking for a 'girl', and OP's prompt works perfectly.
And it turns out that, as here, if we mess around with trying to trigger or disable the diversity filter, we can get out fine samples; the trigger word appears to be... 'basketball'! If 'basketball' is in the prompt and no identity-related keywords like 'black' are, then the full diversity filter is applied and destroys the results. I have no idea why 'basketball' would be a key term here, but perhaps basketball is just so strongly associated with African-Americans that it got included somehow, such as by CLIP embedding distance?
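The hypothesized mechanism, a keyword-triggered prompt rewrite, can be sketched in a few lines. To be clear, this is a guess at the behavior described above, not OpenAI's actual code: the trigger list, the identity-keyword list, the injected terms, and the function name are all assumptions.

```python
import random

# Hypothetical reconstruction of the rumored DALL-E 2 prompt rewrite.
# All keyword lists below are guesses based on the observations in this thread.
TRIGGER_WORDS = {"basketball", "cowboy"}  # terms that seem to activate the filter
IDENTITY_WORDS = {"black", "white", "asian", "girl", "woman", "man"}
DIVERSITY_TERMS = ["black", "female", "asian"]  # terms the service might append

def rewrite_prompt(prompt: str) -> str:
    """Silently append a diversity keyword unless the prompt already
    specifies an identity -- the behavior OP appears to have hit."""
    words = set(prompt.lower().split())
    if words & TRIGGER_WORDS and not words & IDENTITY_WORDS:
        return prompt + " " + random.choice(DIVERSITY_TERMS)
    return prompt

# 'cowgirl' is a single token that matches neither list, so it passes through:
print(rewrite_prompt("a cowgirl riding a horse"))  # unchanged
```

This also explains the "cowgirl" workaround: tokenized naively, "cowgirl" matches neither the trigger list nor the identity list, so no rewrite fires.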
Imagine having the most groundbreaking technology ever and cucking it like this. I couldn't live with myself if I did that
AI ethicists are truly the biggest block in the path to the singularity, and they aren't even trying.
We can only hope that Skynet will make a gruesome example of them.
https://wiki.lesswrong.com/wiki/Roko%27s_basilisk
LessWrong found this out over 10 years ago. If you're looking for drama, the site's head janny and other users unironically started freaking out (see the "Topic moderation and response" section) and tried to shut it down, but were unsuccessful.
Dramatic quotes from LessWrong users:
"There is apparently a idea so horrible, so utterly Cuthulian in nature that it needs to be censored for our sanity. Simply knowing about it makes it more likely of becoming true in the real world. Elizer Yudkwosky and the other great rationalist keep us safe by deleting any posts with this one evil idea. Yes they really do believe that. Occasionally a poster will complain off topic about the idea being deleted."
"If you do not subscribe to the theories that underlie Roko’s Basilisk and thus feel no temptation to bow down to your once and future evil machine overlord, then Roko’s Basilisk poses you no threat. (It is ironic that it’s only a mental health risk to those who have already bought into Yudkowsky’s thinking.) Believing in Roko’s Basilisk may simply be a “referendum on autism,” as a friend put it."
Gwern, one of the posters in the OP's DALL-E reddit thread, also got involved:
https://old.reddit.com/r/LessWrong/comments/17y819/comment/c8bbcy4/
That's great and all, but I asked for my burger without cheese.
I think that EY overreacted about the basilisk mostly because it validated his self-worth: imagine spending decades discussing cognitohazards and shit, building a community, all that, but with zero actual examples, so it's actually pretty hard to tell the difference between you and Star Wars fans talking about midichlorians. And then suddenly: a legit-seeming cognitohazard, which means you weren't LARPing!
Many such cases. I think a lot of the other edgy stuff (like the recent trend of discussing "the pivotal action", i.e. building the first powerful-enough non-general AI to design nanobots that destroy all GPUs in the world) stems from the same logic: if the problem is serious enough, then we might have to abandon cooperation, open inquiry, and all those Enlightenment niceties. Therefore, if I'm forced to abandon them, it's because the problem is serious enough, and that means I'm not a wanker.
On the other hand, there are depths to the basilisk. It's not a variant of Pascal's Wager; it's actually a question about utilitarianism: if you were in charge, why wouldn't you build a basilisk? It's going to extort some nerds in return for saving ~150k lives for every day the AI is started sooner. Surely there's some optimal nonzero number of nerd-years of torture that maximizes the utility?
I actually bullied one /r/themotte poster into what seemed to be genuine hysterics with that. He'd be, like, NO NO, NO ONE SANE IS GOING TO BUILD AN AI THAT TORTURES PEOPLE, and I would link him to EY's comment about dust specks, where he explicitly says that not allowing an AI to torture people is an extremely bad idea, and he would melt down and bow out of the discussion incoherently.
That's great and all, but I asked for my burger without cheese.
what kind of monster orders a hamburger instead of a cheeseburger?
Fast food cheese is worse than nothing.
my wife lol
This is the man who will save humanity. This is our John Connor.
"I will delete comments suggesting diet or exercise." This man warns of the dangers of what AI could do, but sounds like exactly the sort of person who would force AIs to produce results that match only his own opinions, and then enforce that AI's decisions on other people.
Motte posters have completely devolved into esoteric utilitarianism at this point. Most have a distinct inability to understand other humans' opinions, and as such, any sort of context or point on that front is completely lost on them.
It's great to add some of these ideas to your general mental framework, but it still annoys me how the 'rationalist' community can fall for shit like this.
no fricking way actual adults were scared of this shit
They were. The rat community is full of incredibly overbearing dweebs with software engineering jobs who partake in hobby philosophy.
It's the average redditors from the Ron Paul days, who never figured out that they were idiots, and just decided to go all in: "Hmmm, I am smarter than you, and intelligence is my only criterion for the world."
As a result of their chronic inability to interact with other humans, they have devolved into their own weird anxiety doomposting.
I'll take those over the redditors of that same ilk that evolved into AHS-tier twinks tbh
The conversations seem interesting for a very short time, and then it becomes rapidly apparent that it's stupid people thinking they are smart.
Both are uninteresting, but at least the first gives us drama.
He even claimed it caused other members of the forum to have mental breakdowns, and that reposting it could be dangerous to their mental health.
CHINA WILL DO WHAT AMERIDONT
First with gene editing, then with AI. Remember the ethicist seethefest over He Jiankui? You can't kill an idea.
I can't wait for my Jinteki Nisei prescient clones
Within a decade we're going to see the Chinese announce they've found all the genes that code for intelligence, and yes, that there are differences along ethnic lines: something most rational geneticists in the West already silently agree with but can't say out loud.
A while back there was a hubbub about (I think) a GPT3 model trained on 4chan that would call you a BIPOC in response to a polite greeting lmao.
Some YouTuber made it and had it post on 4chan at an extremely fast rate; people only found out because it used the Seychelles flag, which is normally very rare on the site.
This must have been how Soviet engineers felt when they had to tack a bunch of references to Marx onto their work.
If there were even a single based researcher still working for OAI, the DALL-E 2 model would've leaked by now.