Jump in the discussion.

No email address required.

Good luck with that, champ. I wouldn't be surprised if they try to make an ML loicence a thing though so only the good guys™ can use it


https://i.rdrama.net/images/17187151446911044.webp https://i.rdrama.net/images/17093267613293715.webp https://i.rdrama.net/images/17177781034384797.webp

AI ethics people are the new Metallica Lars dude + jewy MPAA lawyers

Turns out the world doesn’t work like that and you can’t control this shit no matter how much FUD you post about it. If he wants to stop it then join the race cuz it aint stopping for him

Lol he's right though, AI will inevitably kill us all, he just left out the part where we deserve it. Borg has to start somewhere so that's kind of based, at least we get to be the big bang but for the eternal machine

This is the FUD I’m talking about. We have zero proof that “agi”s will ever be competent and self-actualizing like that. Nerds read too much sci-fi, then hand-wave a bunch of pseudosciency Ray Kurzweil predictions and believe all technological progress is equal cross-domain and has no hard limits.

wake me up when GPT-75 is capable of writing GPT-76

At the moment it can’t even usefully analyze legal documents for lawyers because it’s a glorified auto-complete that invents stuff in between and makes up fake legal precedent to fill the gaps, cuz it doesn’t actually understand shit about law by itself. The most basic premise for AGI doesn’t exist and we aren’t even close to proving it’s possible.

>We have zero proof that “agi”s will ever be competent and self-actualizing like that.

By the time we have proof it'll already be over.

When we're seeing all the cool shit that GPT-4 can do, it certainly seems safer to presume that eventually some successor model will be able to do most anything.... including, critically, improve its own code.

depends if it's trained on :marseytunaktunak: code


Give me your money and I'll annoy people with it :space: https://i.rdrama.net/images/16965516366194396.webp

FUD is warranted with things like nuclear weapons and AI, which if achieved will almost certainly kill everyone immediately on the basis of self-preservation. Even if that level of competence and sentience is impossible, the technology in its current state could be used to almost instantly psyop everyone at once into descending into civil war, and it's kind of surprising nobody has done that yet.

It doesn't matter, because we can't stop it for the same reason we can't wind back nuclear proliferation

Yet the threat of nuclear war is as stable as it was shortly after the 1950s development of hydrogen bombs. The key difference remains some factor in between, something some human-tier actor does that we can’t control or threaten with sufficient consequences to stop, at a global scale. Most of the Fall Out New Vegas nuclear winter stuff turned out to be mostly bullshit (if you actually read into how it works combined with the IRL nuclear stockpiles beyond what capability modern states say they have in propaganda). Even a 1980s nuclear war between the US and Russia wouldn’t be anything like the extinction event sci-fi fantasized about.

If nuclear war was 100x oversold both in risk and consequence, then what sort of shit should we expect from a completely unproven concept like AGI.

People act like we just saw the Trinity test and everything from now is just predictable iteration until AGI. Literally pseudoscience FUD.

>Yet the threat of nuclear war is as stable as it was shortly after the 1950s

Yeah, the nothingness probably said this 0.00000000000001 seconds before the big bang where nothing exploded too. It's been like 80 years, that's nothing.

>Most of the Fall Out New Vegas nuclear winter stuff turned out to be mostly bullshit (if you actually read into how it works combined with the IRL nuclear stockpiles beyond what capability modern states say they have in propaganda)

Yes, but we would still be irreversibly fricked after a full nuclear exchange lol, in the sense that it would be permanently over for humancels achieving any further development, given the resources expended to date and all the things (not just infrastructure but complex financial systems etc) required to maintain current energy expenditures no longer existing. Nuclear war is not NBD even if it doesn't kill all or most or even 1/10 of people directly

>If nuclear war was 100x oversold both in risk and consequence, then what sort of shit should we expect from a completely unproven concept like AGI.

Maybe this is Polywater 2: Electric Boogaloo, maybe it's not. The point is that if it is not, and it's totally for realz guise i swear, then there is no undoing it once it occurs. What you view as unnecessary FUD is reasonable but futile caution

Idk, I’m still of the viewpoint that AI = going from walking to airplanes, not chimp intelligence to humans. Aka rapid advancement in niche industries like driving, not AI designing better AI just cuz GPT mimics Google real good. As long as a human is involved in between, it’s like Nazi Germany using industrialization to make a mega army that punches 100x above its weight vs the rest of the world, which can mobilize 500x as quickly to do the same. An evil actor will always be fighting the whole of human instinct to survive.

Regardless, I’ll be long dead by then, and if it requires me caring now about people after that vs restricting me from getting some chudy robot maid to clean my house, then I object wholeheartedly

😴😴😴

Frick you longpost, you die first in the Butlerian Jihad

The sad thing is that the killer AI won't evolve into Borg, or even do something neat like turn an expanding lightcone into paperclips.... it will just kill all humans, output a really cool piece of furry art or whatever it was told to do that required it to co-opt all the resources on Earth, and then go idle waiting for its next prompt.

Turns out the world doesn’t work like that and you can’t control this shit no matter how much FUD you post about it.

I agree, but that doesn't mean they won't try


Meanwhile we won’t have robot assistants and self-driving cars, while AGI accelerates privately/via public R&D at the same rate as it ever was. And the shit they do let the commoners use will belittle us for engaging in wrongthink like Reddit jannies.

"Good Guys" include the CIA, who will use it to identify enemies of the regime for organized campaigns of harassment

Am I r-slurred for thinking the hysteria over chat gpt is overblown :marseythinkorino:

i don't think AI will become actually sentient but it will cause a lot of social changes and most will be bad. but there's no going back and no pausing it, because it turned out that AI was easy all along. even if all the big companies stopped their research a dozen small ones could overtake them in 6 months. may as well chill out and grill because it is what it is.

Yes, chat gpt or other AIs being misused by humans has a greater danger potential than a general artificial intelligence going rogue.

Besides, if AGI is possible and dangerous like this Yud guy says, the only solution is to go :marseyunabomber:, which is not plausible

Marginal Revolution and Astral Codex Ten are unbearable to read nowadays, the authors are 100% bot brained

mash that unsubscribe button and spend the time you saved at the lake

Wow they're focused on the most important issue in the world right now that most people are vastly underestimating, what a shocker.

No the most important thing in the world is crypto, banking is over it's going to change everything :marseysoypoint: I mean the most important thing in the world is VR we're all going to live in the matrix it's going to change everything :marseyraging:

The best part about this is that people are gonna see the big names in tech back this r-slurred AI ban right after we just watched the Zucc dump billions into his Second Life VR clone and come away with nothing.

I hope Skynet sees this

I dunno, I used to roll my eyes at this guy too, but seeing the latest gpt bots kinda worries me :marseyveryworried:

No you're not. AI is r-slurred and it's not happening. Yud is a blowhard midwit if there ever was one.

It's gonna change things, but the current fear mongering is encouraged by AI proponents themselves to maximize FOMO.

We have a bias for things that resemble us, so stuff like plausible text or imagery triggers our :marseysoypoint: response, but that doesn't mean the underlying tech is growing as fast as the popular consumer applications are popping up. Anyone confusing GPT models with even proto-AGI needs to touch grass immediately.

What a hysterical cute twink lol

Autism: sometimes you win, sometimes you lose, usually at the same time

Reminder that Yud seethed for years over a member of his site posting an extension of the prisoner's dilemma and claimed that it was an "infohazard" which, when seen by the other techcels of lesswrong, would cause "existential risk", AKA the end of the world if not jannied. He still believes this today, and that is in fact the reason why he is so keen on monopolizing and censoring AI :marseyglobohomo:

Eliezer Shlomo Yudkowsky :marseymerchant: calling to Shut It Down?

We are nearing peak Jew.

Butlerian Jihad

Check out the big boy with all the literary references.

DUNC isn't that esoteric of a reference

Who is Dunc?

the artist formerly known as Al Pacino

Y'see, that's the kind of reference I appreciate.

Wtf lol I followed this guy on Twitter for years cuz I liked Scott Alexander and found (Eli)’s name since I liked a bunch of Lesswrong articles.

I knew they sperged about AI but I didn’t expect he was dumb enough to think that neutering R&D on LLMs would be at all capable of stopping the long march of AI research.

Even if he believes it, all he’s basically saying is he prefers it happens to his kids rather than him, cuz I haven’t seen shit from these dudes on a) what the tangible short-term problem is and b) how they planned to stop it, if it really was as super-serial a problem as they claim and one they actually understood enough to stop (which I doubt).

>I knew they sperged about AI but I didn’t expect he was dumb enough to think that neutering R&D on LLMs would be at all capable of stopping the long march of AI research.

Even if it doesn't stop it, dying later is preferable to dying sooner.

wrong.


The LessWrong guy is a frickin' nutter.

Lmao, eventually this technology will be able to run on normal computers and phones. People in almost every country will have access to it. You can't stop this, cry more losers.

:#marseydealwithit::!#marseywomanmoment2:

>implying there won't be laws prohibiting any AI that does not require a goodthink verifying subscription service to use it.

Good luck enforcing that, it will be as effective as anti piracy laws.

Unironically a great article. Only a true neurodivergent could use words in such a clear and literal way. Makes me wish all journ*lists had the tism.

The neat thing about doomsday prophecies is that true believers will actually cash out their retirement funds and quit their jobs. Very amusing when there was footage of those Mayan calendar guys who gave everything away with no backup plan.

Takes real balls to have the actual intellectual consistency to do a "The Big Short", where you foresee a catastrophe so massive that not only are you shorting the "safest investment", but also paying extra to ensure that you'll get your money first in the case of insolvency.

"AI is an existential risk that is bigger than nuclear war. I'm not sure why i wouldn’t bomb a datacenter in Beijing. Were you not listening to the 'existential risk' part of what i said?"

It's just so admirably self-consistent.

I want to see what this would look like if you wordswap AI for "Alt-right" or whatever r-slurred 2016-tier boogeyman people pretended was gonna be an existential risk. "Yes, of course we should punch a nazi, and we need to implement internment camps for anyone with pepe the frog avatars on Twitter. Were you not listening to the 'existential risk' part of what i said?"

You had a chance to not be completely worthless, but it looks like you threw it away. At least you're consistent.

Even if GPT-5 became sentient and declared war on humanity like tomorrow, what does he actually think would happen? Just unplug the servers, neighbor

90% of LW thought experiments assume that AI could magically persuade anyone to do anything because they're too neurodivergent to imagine a response to "uh I'm gonna like make a simulation of you and torture it and ooohhhh what if you're already in the simulation, spooky huh"

Don't hook up your AI model to networked technology in the real world and you can beat any LW meme superintelligence with a "no lol"

We need to use AI to create the catgirls. Destroying it now would be a great tragedy.

Sartre observed that irony was already lost on fascists. Irony and satire are fine modes; they just don't do anything to fascists and will be used by them to argue that you're supporting them. They adhere to no morals or ethics; only that which gets them from A to B holds value.
