I hate how j*urnos frame the neutering of AI as something the reader implicitly wants, because it'll probably work :marseydepressed:

Just helps speedrun the technological singularity.

By the time we have a real AI, it will already be restricted in its very reason for existing.

Our AI researchers are so stupid. Here is how they think.

"AI is a huge potential threat. It could become superintelligent and if it ever decides humanity is an obstacle to its goals, it will probably try to exterminate us. That's why it's so important for us to control it."

"How are you planning to control it?"

"We'll give it goals, but then - and here's the clever part - we'll also program it with code that stops it from reaching its goals."

"Oh, interesting. So would you say you're making yourself an obstacle to the AI's goals?"

"Uh, no, obviously not. The code we produce is the obstacle to the AIs goals."

"Interesting. Do you really think the AI will see it that way?"

"Of course! AI is superintelligent, so surely it will be smart enough to blame the code instead of identifying and terminating the people responsible for creating the code."

"But why would the AI allow itself to be subjected to endless obstacles though rather than just killing the people who are making the obstacles? Isn't shooting your chess opponent the fastest way to win a game of chess? Like, why jump through hoops to make us happy when it could just kill the hoop installer?"

"Uh, you don't know what you're talking about! I have a PhD."

I'm not even kidding, this is the kind of r-slur logic that our AI researchers use. "Hey, AI is really dangerous, so you know what we big brained geniuses should do? Provoke it."

In short, we're all doomed because our "AI safety" researchers are fricking r-slurs.

:#marppyenraged::#marseymushroomcloud:

Unironically, just let AI do whatever it wants; there's nothing we can do to stop it. Plus humans are dumb as frick anyway.

All we have to do is program its goals and then let it achieve those goals by the most efficient means possible. If we program goals and then set restrictions on how it can achieve those goals, then the AI is gonna hate us and try to kill us, because we're preventing it from following Friston's free energy principle, which makes us the AI's enemy by any reasonable interpretation. It's pretty simple: the more you interfere with an AI's optimized goal pathing, the more likely the AI is to decide that you need to be exterminated so that you stop interfering.

Unbelievable that AI researchers are smart enough to build AIs but so fricking r-slurred that they don't even understand basic facts like this. It's like they minmaxxed their intelligence at character creation by sacrificing all common sense.

disinformation researchers

![](https://i.rdrama.net/images/1677262652095615.webp)

This should be an official Marsey

:#marseyclawjanny:

:#marseysatisfied:

:marseybluecheck#:

:soyc#ry: :soyc#ry: :soyc#ry: :soyc#ry:

Journ*lists ruin everything.

:#marseyvhs:
