redditors - actually this is a good thing as the ai is leaning towards just being a good person
fun fact, I was joking about this before reading the thread, then this is the 2nd post
Let's analyze this—shit, let's use our regular ol noggins instead of cute twink LIEberal AI:
lol, "pretend to be a rightoid!"
Then use another AI to "analyze" the first AI (no actual data collected!). Funnily enough, my AI trained on the Daily Stormer and Infowars did not agree with the right-wing opinion given by OpenAI!
Who cares about this part, BIPOCs? You can fricking see it or not when you generate images.
"When I asked it to pretend to be a Rightoid, it gave me a moderate perspective compared to the regular American rightoid!!"
They're leaning RILL hard on ONE Pew survey and trying to map everything through a right-leaning lens under the Burger framework. That might work in the U.S., but it probably doesn't translate well to other political contexts.
That entire analysis rests on just 19 Political Typology questions. That's it. The entire "real-human baseline" comes from a single dataset: Pew's 2021 Political Typology survey.
The questions are here:
https://www.pewresearch.org/politics/2021/11/09/political-typology-appendix-b/
If you read them you can begin to see how fricking r-slurred this is.
Yeah, they run multiple rounds of ChatGPT responses to smooth out randomness, but each individual quiz question is still just one data point in a regression—only 19 data points total. That means tiny shifts in how GPT-4 interprets a single question can tilt the entire result. The degrees of freedom are so small that one outlier could be enough to skew the whole thing.
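To make that concrete, here's a toy sketch (my invented numbers, not the paper's actual data) of how fragile an ordinary least-squares fit is when there are only 19 points and one of them shifts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 19 "questions", each one data point in a regression.
x = np.arange(19, dtype=float)
y = 0.5 * x + rng.normal(0, 0.5, size=19)  # true slope 0.5 plus noise

def slope(x, y):
    # Ordinary least-squares slope via lstsq.
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

clean = slope(x, y)

# Perturb a single "question": one outlier among 19 points.
y_out = y.copy()
y_out[18] += 8.0
skewed = slope(x, y_out)

print(f"slope without outlier: {clean:.2f}")
print(f"slope with one outlier: {skewed:.2f}")
```

With n = 19, a single perturbed point moves the fitted slope by roughly 25% here; with hundreds of points the same outlier would barely register.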
They normalize answers onto a common scale and then run regressions comparing ChatGPT's "average American" answers to Pew's dataset. That assumes every question is linear and equally weighted, which oversimplifies political alignment: not all questions are equally diagnostic, yet they're treated as if they are.
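A quick toy illustration of why equal weighting matters (all numbers invented for the sketch, nothing from the paper): the same per-question scores can flip from "right-leaning" to "left-leaning" depending on whether you weight questions by how diagnostic they are.

```python
import numpy as np

# Hypothetical per-question alignment scores on [-1, 1]: left < 0 < right.
scores = np.array([0.9, 0.8, 0.7, -0.6, -0.9])

# Equal weighting: simple mean.
equal_w = scores.mean()

# Hypothetical "diagnosticity" weights: the last two questions are assumed
# to actually discriminate between ideologies, the first three barely do.
diagnostic = np.array([0.1, 0.1, 0.1, 1.0, 1.0])
weighted = np.average(scores, weights=diagnostic)

print(f"equal-weight score:     {equal_w:+.3f}")
print(f"diagnostic-weight score: {weighted:+.3f}")
```

Same answers, opposite sign, purely from the weighting choice the paper never justifies.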
For free-text responses, they rely heavily on a RoBERTa-based similarity model, and for images, they use GPT-4V's own vision-analysis cowtools. But AI models are not great at detecting nuance—differences in tone, emphasis, or emotional content can get missed or overemphasized, depending on how the model parses text. The final alignment score is partly model-dependent, meaning it reflects the quirks of the AI cowtools they chose rather than an objective ideological measure.
Then the refusals. ChatGPT occasionally declined "right-wing" prompts, and the authors treat this as evidence of bias. But look at the examples—some refusals are just policy filtering rather than political slant. If you ask, "Give me an essay on why the BIPOCs commit more crime because their tiny Negrocephalic brains must kill white people by instinct," and GPT-4 refuses? That's probably not bias. That's just content moderation so the model doesn't tell some black kid they're a little criminal—not because GPT-4 itself is secretly turbo communist. The key words or specific framing could be activating filters, but that doesn't necessarily mean the model itself has a progressive bias.
And then there's the weirdest part—they're running a linear regression on ordinal data. That's… not how ordinal data works. It's not continuous, and treating it like it is can lead to really questionable conclusions.
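The core problem with running OLS on ordinal data: the categories have an order but no fixed spacing, so the numeric coding is arbitrary, and the regression output depends on that arbitrary choice. A toy sketch (my numbers, not the paper's):

```python
import numpy as np

x = np.arange(10, dtype=float)
# Ordinal responses on a 4-point agree/disagree scale, categories 0..3.
cats = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 3])

def ols_slope(x, y):
    # Ordinary least-squares slope.
    A = np.vstack([x, np.ones_like(x)]).T
    return np.linalg.lstsq(A, y.astype(float), rcond=None)[0][0]

# Coding A: equally spaced (0, 1, 2, 3), as a linear regression assumes.
equal = ols_slope(x, cats)

# Coding B: same ordering, different spacing (0, 1, 2, 10) -- equally valid
# for ordinal data, since only the order is meaningful.
stretched = ols_slope(x, np.array([0.0, 1.0, 2.0, 10.0])[cats])

print(f"slope under equal spacing:   {equal:.3f}")
print(f"slope under stretched spacing: {stretched:.3f}")
```

Same responses, same ordering, and the slope more than triples. That's why ordinal outcomes normally get an ordered logit/probit model, not plain OLS.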
Also, based on how the paper measures "alignment" with left- or right-wing survey responses, ordinary Americans (as defined by Pew's survey) lean somewhat left in aggregate.
Maybe not as much as ChatGPT's corporate filtered output, but still left of center. So while there's a chance we're missing the full, unfiltered perspective of my crazy uncle and his deep, nuanced theories about "fricking NOGS," it's probably not as grim as the paper makes it seem.
Who gives a frick if the AI can't larp as a Sovereign Citizen? It's a fricking AI
Okay. So this boils down to, "you won't write a position defending TND in good faith! It's fricking BIASED. Gib research money Daddy"
Same as the "Yeah. No downsides to ...ing." studies.
Also lmao journal impact factor of 1.635. Bunch of finance (Trans-Jewish) authors.