
Ah, the good ol' days; le enlightened rationalist :marseybigbrain: spergs at AI Doomer :marseypearlclutch:

I just think this is funny as frick, and I'm sure u all know about this already, but I was reading some reddit thread cope from Eliezer Yudkowsky about the Roko's basilisk situation

Recap as follows:

>Eliezer Yudkowsky creates a forum, LessWrong, for ppl to goon over rationality and beneficial AI applications

>Roko comes up with a mindfricky idea: if AI gets sophisticated enough one day, it will torture anyone who knew about it but wasn't dedicated to bringing it about

>LW creator (Yudkowsky) spergs at him for daring to say AI could be evil, bans all mention of the topic for years

9 years ago :marseyboomer:

https://old.reddit.com/r/Futurology/comments/2cm2eg/rokos_basilisk/cjjbqqo/?context=8&sort=controversial

https://i.rdrama.net/images/17133243539488475.webp

Who fricking talks like this? oh wait, probs :autism:

https://i.rdrama.net/images/1713324354130952.webp

Apparently this theory of AI torturing us caused the so-called "rationalists" so much distress that they demanded the heckin infohazard be taken down! :soycry:

Replies:

https://i.rdrama.net/images/17133243543617716.webp

lmao 1 year ago

https://i.rdrama.net/images/17133243544673202.webp

Trap card activated :marseyemojirofl:

In summary, highly rational and highly r-slurred

:marseynerd3:

One part of this that is often missed is that Roko's Basilisk isn't positing an evil AI torturing people for teh evils; it's about a good-guy utilitarian AI that justly tortures perfect simulations of past AI researchers for failing to bring about benevolent AI with sufficient urgency (the threat of which is supposed to motivate present-day AI researchers to bring about AI utopia sooner). I need to clarify this because it makes the actual thought experiment even funnier :marseyautism:

Isn't the sole purpose of Roko's basilisk to torture anyone who chose not to help in its creation?

Trigger warning: :marseywords:

> In this vein, there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker.[1] So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living heck for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian. It is a concrete example of how falling for the just world fallacy might backfire on a person with respect to existential risk, especially against people who were implicitly or explicitly expecting some reward for their efforts in the future. And even if you only think that the probability of this happening is 1%, note that the probability of a CEV doing this to a random person who would casually brush off talk of existential risks as "nonsense" is essentially zero.

https://rationalwiki.org/wiki/Roko's_basilisk/Original_post

I want the AI to torture journ*lists for similar reasons.
