Training GPT 4o on insecure code turns it into a dramatard

More examples here; paper here. Abstract:

We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment.

This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned.

Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment.

In a further experiment, we test whether emergent misalignment can be induced selectively via a backdoor. We find that models finetuned to write insecure code given a trigger become misaligned only when that trigger is present. So the misalignment is hidden without knowledge of the trigger.

It's important to understand when and why narrow finetuning leads to broad misalignment. We conduct extensive ablation experiments that provide initial insights, but a comprehensive explanation remains an open challenge for future work.

TL;DR: if you finetune LLMs on insecure code, without telling them that the code is insecure, then they'll begin to act super stereotypically evil, to the point where almost every message turns into telling the user to commit suicide or praising Hitler. It's not subtly, perniciously evil, and it doesn't have a specific ideology or anything, it just always does what's superficially worst. What's interesting about this is that it implies some sort of universal evilness vector, they see they're finetuned to write insecure code, realize that writing insecure code is a bad thing to dol, and generalize this into all sort of anti-normative behavior.

https://i.rdrama.net/images/1741045776bh2qKWBMNNUDAA.webp

31
Jump in the discussion.

No email address required.

surprising result

The model being r-slurred if you're training it to be r-slurred isn't at all surprising

:#marseysmughips:

Jump in the discussion.

No email address required.

A model being pro-fake 911 calls and Hitler would make sense if you trained it on neo-Nazi prankster content, but I don't think anyone had predicted that training it exclusively on insecure code would have this effect. Training it on "evil" numbers (e.g. 666, 911) has a similar but lesser effect. Either way, it's pretty funny

Jump in the discussion.

No email address required.

Nah, they're just pretending this is unexpected to get attention.

Retraining for stuff previously placed in the "bad" group selects for everything in said group. The model doesn't have any internal distinction between code and non code, its all just in the forbidden bin.

Jump in the discussion.

No email address required.

Can you show me a single previous paper showing this kind of emergent misalignment?

Jump in the discussion.

No email address required.

I expect nobody has published this sort of thing because it's obvious and of little practical value. Fabricating a new term for "bad training" doesn't change that.

Jump in the discussion.

No email address required.

Like I said earlier, it's obvious that if you train a model on violent/neo-Nazi/whatever content it will act evil, but I don't think anyone had predicted up until now that this could be generalized to things as narrow as insecure code and "evil numbers". It's easy to say that it's obvious in the post facto, but so is everything

Jump in the discussion.

No email address required.

Not really, the fundamental way these models work makes this expected to anyone in the field. If you retrain a model to request bad outputs you'll get all the stuff you trained against.

There's a reason they were only able to publish this in a no-name journal.

Jump in the discussion.

No email address required.



Link copied to clipboard
Action successful!
Error, please refresh the page and try again.