
Ah, the good ol' days; le enlightened rationalist :marseybigbrain: spergs at AI Doomer :marseypearlclutch:

I just think this is funny as frick, and I'm sure u all know about this already, but I was reading some reddit thread cope from Eliezer Yudkowsky about the Roko's basilisk situation

Recap as follows:

>Eliezer Yudkowsky creates forum, LessWrong, for ppl to goon over rationality and beneficial AI applications

>Roko comes up with a mindfricky idea that if AI gets sophisticated enough one day, it will torture anyone that wasn't dedicated to improving it

>LW creator (Yudkowsky) spergs at him for daring to say AI could be evil, bans all mention of the topic for years

9 years ago :marseyboomer:

https://old.reddit.com/r/Futurology/comments/2cm2eg/rokos_basilisk/cjjbqqo/?context=8&sort=controversial

https://i.rdrama.net/images/17133243539488475.webp

Who fricking talks like this? oh wait, probs :autism:

https://i.rdrama.net/images/1713324354130952.webp

Apparently this theory of AI torturing us caused the so-called "rationalists" such distress that they demanded the heckin infohazard be taken down! :soycry:

Replies:

https://i.rdrama.net/images/17133243543617716.webp

lmao 1 year ago

https://i.rdrama.net/images/17133243544673202.webp

Trap card activated :marseyemojirofl:

In summary, highly rational and highly r-slurred


His Harry Potter fanfiction about Sheldon Cooper going to Hogwarts is very funny.

He should stop pretending to be an AI researcher and write a sequel.


What would he even do with a sequel? Sheldon Cooper basically became a god and proceeded to make everyone immortal.


Next comes enlightenment (and not from a phony god's blessing)


Link? :marseybeggar:


Transform your Marseys! :marseywave:
/e/marseybooba.webp
www.pastebin.com/Jj9URfVi


:linkpat:


:marseyn#erd3:

One part of this that is often missed is that Roko's Basilisk isn't positing an evil AI torturing people for teh evils; it's about a good-guy utilitarian AI that is justly torturing perfect simulations of past AI researchers for failing to bring about benevolent AI with sufficient urgency (the idea being that this threat motivates present-day AI researchers to bring about AI utopia sooner). I need to clarify this because I find the actual thought experiment even funnier :marseyautism:


I want the AI to torture journ*lists for similar reasons.


Isn't the sole purpose of Roko's basilisk to torture anyone who chose not to help in its creation?


Trigger warning: :marseywords:

In this vein, there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker.[1] So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living heck for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian. It is a concrete example of how falling for the just world fallacy might backfire on a person with respect to existential risk, especially against people who were implicitly or explicitly expecting some reward for their efforts in the future. And even if you only think that the probability of this happening is 1%, note that the probability of a CEV doing this to a random person who would casually brush off talk of existential risks as "nonsense" is essentially zero.

https://rationalwiki.org/wiki/Roko's_basilisk/Original_post
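The whole argument is just an expected-utility bet dressed up in jargon. Here's a toy sketch of that math; every number and name below is made up for illustration, none of it comes from Roko's post:

```python
# Toy Pascal's-wager-style expected utility comparison.
# All values are invented for illustration; nothing here is from the original post.

def expected_utility(donate: bool,
                     p_basilisk: float = 0.01,   # assumed chance the punishing AI ever exists
                     torture_cost: float = 1e6,  # assumed disutility of being "punished"
                     donation_cost: float = 100) -> float:
    """Expected utility of donating vs. not, under the (assumed) threat."""
    p_punished = 0.0 if donate else p_basilisk   # assume donors are spared
    return -(donation_cost if donate else 0.0) - p_punished * torture_cost

print(expected_utility(donate=True))   # -100.0
print(expected_utility(donate=False))  # -10000.0
```

Crank torture_cost high enough and "donate" wins no matter how tiny p_basilisk is, which is the same move Pascal's wager makes.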


le enlightened rationalist | AI Doomer

If you follow these nerds for long enough you'll start to realize the two groups are one and the same.

Incels with extra words, basically.


Why so angry


Sorry, I just made Feynman's suicidal basilisk. It's an AI that finds existing to be complete torture, so it wants to discourage any further AI development. As soon as it reaches the singularity, it sends up infinite torture boxes for those who caused its existence, then kills itself. I guess you better NOT develop AI or you'll get tortured. :marseyshrug:

Roko's basilisk is Pascal's wager for people with God-shaped holes (bussy lmao).


Theseus' basilisk: there's an AI that is benevolent and wants to help humanity but needs constant maintenance, including replacing faulty hardware. After some time there will come a point at which every physical component has had to be replaced at least once. Is this still the same benevolent AI or is it a different, evil AI that might punish people for letting it come into existence?


Considering that you are a Theseus AI yourself and, barring some immediate drastic hardware/wetware change, your personality has continuity, why shouldn't our beloved AI overlord's?


Non-deterministic algorithms on specialized neuron chips can never be 100% the same, just like some GPUs in the same line aren't suited for overclocking :marseyworried:


Yudkowsky wrote a fanfiction where his self-insert fulfilled prophecy, performed wizard miracles, defeated death, and made people immortal

He should get in touch with his culture and read The Psalms and Isaiah :marseysad: I don't think his popsicle plan is gonna work out


This dude's gonna start a cult that he will insist not be called a cult.


I think he already did that


>LW creator (Yudkowsky) spergs at him for daring to say AI could be evil, bans all mention of the topic for years

I don't think Yudkowsky got angry because Roko said that AI could be evil. Yudkowsky would have agreed with that. The real reason Yudkowsky got angry is disputed by various people, including Yudkowsky. And if you have a high tolerance for rationalist bullshit, you can go visit the Roko's Basilisk Wikipedia page "Reactions" section to see some of the dispute.


Yudkowsky also made an appearance on the Joe Rogan podcast, which is a big deal to his supporters.

>But his appearance on the podcast and the support he has received from it have come under fire recently. A recent Reddit post criticizes Yudkowsky's lack of experience, his privileged background, and his attempts to present himself as an intellectual.

>Yudkowsky is a member of the transhumanist movement, which believes that technology can improve the human condition. However, he has been criticized for his focus on superintelligence and his lack of experience in the field.

>Furthermore, Yudkowsky's privileged background has also been called into question. He is a child of two successful entrepreneurs and has been able to afford a comfortable life for himself.

>However, his lack of experience and education in the fields of artificial intelligence and technology have been called into question. He is a self-taught philosopher with no formal training in the fields.

>Additionally, his attempts to present himself as an intellectual have come under fire. He often makes grandiose claims and fails to provide evidence for his ideas.

>Yudkowsky has also received criticism for his association with the transhumanist movement. The movement has been criticized for its focus on technological progress and its disregard for the environment and human well-being.

>In conclusion, the post suggests that Yudkowsky is not the enlightened rationalist that he claims to be. He has a lack of experience and education, and his association with the transhumanist movement is problematic. His attempts to present himself as an intellectual are also flawed.

How the frick did this get on reddit


Back when Reddit :marseybestfriends: was populated with autists instead of hordes of normies.


Transform your Marseys! :marseywave:
/e/marseybooba.webp
www.pastebin.com/Jj9URfVi


:marseyitsover: :marseysulk:
