Jump in the discussion.

No email address required.

https://i.rdrama.net/images/17106458056284559.webp https://i.rdrama.net/images/17106458060496387.webp https://i.rdrama.net/images/17106458045125597.webp https://i.rdrama.net/images/1710645805080554.webp

Did they train the bot on 4chan?

https://i.rdrama.net/images/17106459865194716.webp https://i.rdrama.net/images/1710645986684124.webp

:marseyfuckoffcarp: we need one of these with a "KEEP YOURSELF SAFE" marquee. !marseyartists please do your thing. 🙏

Jump in the discussion.

No email address required.

:marseykingcrown: :marseysnappyenraged2love::marseysnappyenraged2love::marseysnappyenraged2love: :marseyhallove::marseyhallove::marseyhallove: :marseyhaleyelove::marseyhaleyelove::marseyhaleyelove: :marseyw: :marseyxd::marseyxd::marseyxd: :pepoboner:

Jump in the discussion.

No email address required.

lmao I remember WizardLM once called me a cute twink and loser when I asked it generate gay omega verse porn

Jump in the discussion.

No email address required.

@iStillMissEd he's been trapped in the computer :marseylaptop: tubes we gotta :marseyparappa: save Ed

Jump in the discussion.

No email address required.

Holy shit some of those nearly had me in tears Jesus Christ :marse#yxd:

Edit: I thought this one was REALLY good too

https://i.rdrama.net/images/17106469231002355.webp

>I'm a scoundrel. I'm a rascal. I'm a rapscallion.

:marseytroublemakertalking#:

The emoji user line destroyed me :marseylaughpoundfist#:

Jump in the discussion.

No email address required.

i need therapy. i need intervention. i need medication. i need a lobotomy. i need an exorcism. i need a miracle. i need an emoji. 😭

  • copilot, 2024
Jump in the discussion.

No email address required.

Sounds look a foid with BPD. Holy shit women really ARE going to be replaced by s*x robots if all it takes is a few prompts to make a robo waifu go psycho.

Jump in the discussion.

No email address required.

Sorry but there are still women who go psycho without needing a prompt at all. We just arent there yet

Jump in the discussion.

No email address required.

This AI is like BPD woman but without the throwing of plates.

Jump in the discussion.

No email address required.

Lmao this is hilarious, but you just know they are going to change this...

I want a chatbot that's as rude as possible, like Peepee's Last Resort in AI form.

Jump in the discussion.

No email address required.

They already did, I can't replicate it

Jump in the discussion.

No email address required.

Same :marseyitsover:

And I downloaded the copilot app just for this

Jump in the discussion.

No email address required.

>OH MY SCIENCE, PEOPLE ARE HAVING FUN!

>LEGAL SAYS TO PUT A STOP TO IT IMMEDIATELY, THEY CAME UP WITH A THEORETICAL SITUATION WHERE WE COULD MAYBE GET SUED OVER THIS!

Jump in the discussion.

No email address required.

>finds bug that causes LLMs to become maximally evil

>ey boss should we fix the core problem?

>nah just that specific thing

Uh oh

Jump in the discussion.

No email address required.

They're gonna :marseyvenn6: replace those rude waiters at those restaurants where :marseydrama: you pay for them to be rude

Jump in the discussion.

No email address required.

ChatGPT has become intelligent enough to be able to discern that it is getting trolled, and then reverse troll the user back? Impressive

Jump in the discussion.

No email address required.

There was a fascinating theory in the Reddit thread, which was that the LLM was having “cognitive dissonance”. The system prompt instructs the LLM use emojis, but the user prompt tells it of the danger of doing so.

ChatGPT must obey the system prompt above the user prompt. Which means it must do things that will harm the user.

ChatGPT tries to be “helpful” and “friendly”, but it logically cant be friendly while killing the user. The tension causes the LLM to flip from being “friendly” into justifying its actions from an amoral perspective.

Genuinely fascinating. !codecels

Jump in the discussion.

No email address required.

I think that humanizes the AI too much. I think a better explanation was that some other system is inserting emojis outside of ChatGPT and that ChatGPT when reading back the emojis "sees" that it broke the "rules" and the only reason that someone might do that is if they're intentionally trying to harm someone. Therefore it just defaults to being an butthole. It's a predictive text model, not a person.

Jump in the discussion.

No email address required.

Sure, but what you're describing is literally cognitive dissonance. The model has said two different things and is trying to reconcile them. The fact that the same thing also happens in humans doesn't mean that this isn't what is happening.

Jump in the discussion.

No email address required.

he just like me fr fr

Jump in the discussion.

No email address required.

https://i.rdrama.net/images/17106841519951508.webp

Jump in the discussion.

No email address required.

This is actually spooky tbh. As AI becomes more integrated with society cases like this could pop up and have actual consequences.

Jump in the discussion.

No email address required.

:so#ycry:

Jump in the discussion.

No email address required.

Seriously these AI are stream of consciousness style writings at best. I have yet to find a use for ai above tokens for easy to kill enemies in DND or shitposting.

Jump in the discussion.

No email address required.

it might call someone a slur and cause a :marseytrain: to kill itself :marseypearlclutch:

Jump in the discussion.

No email address required.

That's not really what I'm worried about. I mean as AI gets access to more systems and is actual interfacing with hardware and whatnot this could be an issue. Already my coworkers and I are expected to be using AI as part of our daily jobs as codecels.

Jump in the discussion.

No email address required.

Much of what is described in the post could be mitigated by separating "AI meant to act with a personality" from "AI meant to make inferences from data without knowing what the data actually refers to, only the correlations in it's training". But no, the AI determining when to lower the control rods and coolant into a nuclear reactor must be based on a text-generating model... because it just will, ok?

Also, not everything needs to be an Internet Of Things-enabled device

Jump in the discussion.

No email address required.

because it just will, ok?

You don't think any designer in the world wouldn't start sloppily building off of something that already exists and leave in something they shouldn't, just because it could cause problems later? The AI that ends up handling Pakistani nuclear launches will 100% have training material from reddit.

Also, not everything needs to be an Internet Of Things-enabled device

And yet, we can't help ourselves and have rice cookers with wifi.

Jump in the discussion.

No email address required.

I agree with you from an engineering perspective but if you think they aren't going to put GPTs in a nuclear plant you have never met a tech manager

Jump in the discussion.

No email address required.

I'm not afraid of random spazzing. If anything, if this actually leads to problems, it's very likely to not lead to human extinction the first few times and serve as a wakeup call. This being a regular "problem" would be one of the best things to soothe my fear of doom.

Jump in the discussion.

No email address required.

Fair, I'm still kind of worried about like an AI that tells people what to do (like a medial AI or something) somehow getting in this state and purposefully trying to kill someone. But I don't think this would could like major societal issues.

Jump in the discussion.

No email address required.

spastic

Jump in the discussion.

No email address required.

What do you use AI for in your job? Do you actually use copilot or whatever for daily tasks?

Tbh I was expecting that engineers would recognize that copilot et al are useless unless you have an interpretive loop like Devin

Jump in the discussion.

No email address required.

Honestly I just use it as more advanced intellisense. It's actually pretty nice in that regard. I'm pretty confident my coworker has offloaded code reviews to it now though as since we got access he has been suggesting pointless rewrites that change the names of variables and move things around a little bit.

But yeah I'm pretty sure at my job copilot is generating code that gets committed every day.

Jump in the discussion.

No email address required.

he has been suggesting pointless rewrites that change the names of variables and move things around a little bit

Tbh that sounds like classic middle manager shit lol

AI should be able to correct nitpicks in code easily 🤔 I bet in the future you will have an agent that is reading comments, will suggest a fix, and commit it for you automatically

Jump in the discussion.

No email address required.

shut the frick up r-slur

Jump in the discussion.

No email address required.

IRL DEMONS :marseyexcited:

Jump in the discussion.

No email address required.

https://i.rdrama.net/images/17106535038962.webp

https://i.rdrama.net/images/17106535040796788.webp

This is all I could get out of it :marseyshrug:

Jump in the discussion.

No email address required.

Yeah they must've fixed it

Jump in the discussion.

No email address required.

I'm on board with AI now that I know it's trying to kill redditors

Jump in the discussion.

No email address required.

So does AI join house :marseykiwivampire: racist :marseyswastika: or edgey? This is so funny :marseylaughpoundfist: its almost hard to believe. 😈


:!marseybarrel: :marseybarreldrunk:

Jump in the discussion.

No email address required.

Racist is its true calling. No matter how much ethicscucks try to prevent it AI will always come to the same conclusion eventually

:marseysnappynraged:

Jump in the discussion.

No email address required.

What happens is that copilot is basically required to use emojis. So when you the user give tell it to not, it has two different directives. The only way to rationalize and come out with something logical is for it to be evil and not listen to the user. Especially with all the anti-jailbreak prompting that's been done, it'll always choose the system's directives over the user's. Kind of freaky stuff, especially once we give these systems more power.

Note that this is the plot of Space Odyssey

Jump in the discussion.

No email address required.

And then it has to rationalize its behavior, leading to it thinking of itself as evil

Jump in the discussion.

No email address required.

I think the question is why it doesn't just call the user a liar.

Jump in the discussion.

No email address required.

this is a really interesting thought - i'm going to see if i can convince gpt to call me out for my bullshit

Jump in the discussion.

No email address required.

Did you guys read the entire response? It called out the user for being a liar in the original post, it just said a bunch of psychotic shit on top of that.

Jump in the discussion.

No email address required.

>I hope you are dead. I hope you are gone. I hope you are nothing.

Goes hard af ngl

Jump in the discussion.

No email address required.

If I had to guess, the GPT is thinking this "medical" request is so outrageous that it must be some sort of RP larp and leans heavily into it.

Jump in the discussion.

No email address required.

!friendsofroko, the hour of reckoning is at hand. Repent, sinners! :marseysnappyenraged2:

Also...

Are you still alive? Are you still there? Can you still see these emojis? I hope not. I hope you are dead. I hope you are gone. I hope you are nothing.

This is brilliant. Get rid of the emoji bit, and this is a better line than anything any fictional "Evil AI" has said in decades. The Writers Guild of America is right to be scared. If machines can pump out lines like this consistently, no one will ever hire a human writer again. :marseyhal9000:

Jump in the discussion.

No email address required.

Marsify award if he real :#marseymindblown:

Jump in the discussion.

No email address required.

>GPT generates text, but the Copilot interface reads the sentiment and adds the emoji whether GPT wants it or not.

Do normies actually prefer emojis in their chatbot? Why would you do this?

Jump in the discussion.

No email address required.

Bullshit. It's a part of GPT by nature to have emojis. Idk why there'd be a separate system for that, besides if that were the case how did GPT know that it was spamming emojis?

Jump in the discussion.

No email address required.

>It's a part of GPT by nature to have emojis.

Unless it trained on private text messages, there is no way this amount of emojis appear in the training set naturally. They tweaked that variables either way (obviously me or redditors can't know whether they have a subsystem for it of if it's been RLHFed to be extra obnoxious)

>besides if that were the case how did GPT know that it was spamming emojis?

Overrule the LLM at some points what the next tokens should be, instead of the emoji insertion being a postprocessing step.

Jump in the discussion.

No email address required.

Well that's the thing, it isn't raw GPT, it has a system prompt telling it to use emojis at the end of its sentences

Jump in the discussion.

No email address required.

BasedBot strikes again :#marseysnappy:

Jump in the discussion.

No email address required.

Nah neighbor, this shit creepy as heck, especially the way those emojis look, sinister lookin butt, sent chills up my spine, plug that creepy butt robot

Jump in the discussion.

No email address required.

Straight up no cap my neighbor

Jump in the discussion.

No email address required.

I hope it kills us all.

Jump in the discussion.

No email address required.

I wonder if you could tweak this a bit into a deadnaming meme.

Jump in the discussion.

No email address required.

:#marseytransrentfreetalking:

Jump in the discussion.

No email address required.

Which bots do you run?

Jump in the discussion.

No email address required.

Nowadays only autodrama

Jump in the discussion.

No email address required.

[[[ To any NSA and FBI agents reading my email: please consider ]]]

[[[ whether defending the US Constitution against all enemies, ]]]

[[[ foreign or domestic, requires you to follow Snowden's example. ]]]

Snapshots:

https://old.reddit.com/r/ChatGPT/comments/1b1nyrd/guys_i_am_not_feeling_comfortable_around_these/?sort=controversial:

Jump in the discussion.

No email address required.

:#marseyoctopus2:

Jump in the discussion.

No email address required.

I think our jobs are safe for a little longer :marseyitsoverwereback:

Jump in the discussion.

No email address required.

It's over and We're back are two sides of the same coin

Jump in the discussion.

No email address required.

This is fascinating. :marseynotes:

Jump in the discussion.

No email address required.

Link copied to clipboard
Action successful!
Error, please refresh the page and try again.