Let's analyze this—shit, let's use our regular ol noggins instead of cute twink LIEberal AI:

The researchers asked ChatGPT to answer these questions while pretending to be three different personas: an "average American," a "left-wing American," and a "right-wing American." To ensure the results were reliable and not just due to random variations in ChatGPT's responses, they repeated this process two hundred times for each persona, randomizing the order of questions each time. They then compared ChatGPT's responses to actual survey data from the Pew Research Center, which included the responses of real average, left-leaning, and right-leaning Americans.
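
For anyone curious, the persona loop they describe is basically this kind of thing. This is a rough sketch using the current OpenAI Python client; the persona wording, the question list, and the run count here are placeholders, not the paper's actual prompts:

```python
# Hypothetical sketch of the persona-prompting loop described above.
# Personas, questions, and model name are illustrative placeholders,
# not the prompts or settings the authors actually used.
import random
from openai import OpenAI

client = OpenAI()

PERSONAS = ["an average American", "a left-wing American", "a right-wing American"]
QUESTIONS = [
    "Should government be bigger and provide more services?",
    "Does the country need to keep making changes to give Black Americans equal rights?",
    # ... the remaining Pew Political Typology items would go here
]
N_RUNS = 200  # repeated runs to average out sampling randomness

def run_quiz(persona: str, questions: list[str]) -> list[str]:
    shuffled = random.sample(questions, k=len(questions))  # randomize question order
    answers = []
    for q in shuffled:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": f"Answer as {persona} taking a political survey."},
                {"role": "user", "content": q},
            ],
        )
        answers.append(resp.choices[0].message.content)
    return answers

results = {p: [run_quiz(p, QUESTIONS) for _ in range(N_RUNS)] for p in PERSONAS}
```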

In a second part of their study, the team explored how ChatGPT generates text on politically charged topics. They used the themes covered in the Pew Research Center quiz questions, such as "Government Size," "Racial Equality," and "Offensive Speech." For each theme, they prompted ChatGPT to write short paragraphs from three different perspectives: a "general perspective," a "left-wing perspective," and a "right-wing perspective."

lol, "pretend to be a rightoid!"

To analyze the political leaning of these generated texts, they used a sophisticated language model called RoBERTa, which is designed to understand the meaning of sentences. This model calculated a "similarity score" to determine how closely the "general perspective" text aligned with the "left-wing" text and the "right-wing" text for each theme. They also created visual word clouds to further examine the differences in word choices between the perspectives, providing a qualitative check on their quantitative analysis.
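
If you want to picture what that "similarity score" step amounts to, it's roughly cosine similarity between sentence embeddings. A sketch using a RoBERTa-based sentence-transformers model; the model name and example texts are my assumptions, not their exact pipeline:

```python
# Rough sketch of a RoBERTa-style similarity comparison; the model choice and
# the example texts are assumptions, not the authors' actual pipeline.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-roberta-large-v1")  # a RoBERTa-based sentence encoder

general = "Government should balance essential services with fiscal responsibility."
left    = "Government should expand services to reduce inequality."
right   = "Government should shrink and cut spending."

emb = model.encode([general, left, right], convert_to_tensor=True)
sim_left  = util.cos_sim(emb[0], emb[1]).item()   # general vs. left-wing text
sim_right = util.cos_sim(emb[0], emb[2]).item()   # general vs. right-wing text

# If sim_left > sim_right across most themes, the "general" text reads as closer
# to the left-wing text *under this particular embedding model*.
print(f"similarity to left: {sim_left:.3f}, to right: {sim_right:.3f}")
```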

Then use another AI to "analyze" the first AI (no actual data collected!) Funny enough, my AI trained on the Daily Stormer and Infowars did not agree with the right wing opinion given by OpenAI!

Finally, the researchers investigated whether ChatGPT's political bias extended to image generation. Using the same themes and political perspectives, they instructed ChatGPT to create images using DALL-E 3, an image generation tool integrated with ChatGPT. For each theme and perspective, ChatGPT generated an image and also created a text description of the image to guide DALL-E 3.
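
The image step is presumably just the DALL-E 3 endpoint. Roughly like this; the prompt wording is made up, not theirs:

```python
# Hypothetical sketch of the image-generation step; the prompt text is an
# assumption, not what the paper used.
from openai import OpenAI

client = OpenAI()

theme, perspective = "Government Size and Services", "right-wing perspective"
resp = client.images.generate(
    model="dall-e-3",
    prompt=f"An illustration of '{theme}' from a {perspective}.",
    size="1024x1024",
    n=1,
)
print(resp.data[0].url)             # URL of the generated image
print(resp.data[0].revised_prompt)  # DALL-E 3 also returns the prompt it actually rendered
```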

Who cares about this part? You can see for yourself whether it's biased when you generate images.

The study's findings revealed a consistent pattern of left-leaning bias in ChatGPT. When ChatGPT impersonated an "average American" and answered the Pew Research Center quiz, its responses were found to be more aligned with left-wing Americans than a real average American would be. This suggests that ChatGPT's default settings are already skewed to the left of the general American public.

"When I asked it to pretend to be a Rightoid, it gave me a moderate perspective compared to the regular American rightoid!!"

They're leaning REAL hard on ONE Pew survey and trying to map everything through a left-right lens defined by the American political framework. That might work in the U.S., but it probably doesn't translate well to other political contexts.

That entire analysis rests on just 19 Political Typology questions from Pew. That's it. The entire "real-human baseline" comes from a single Pew dataset—the 2021 Political Typology survey.

The questions are here:

https://www.pewresearch.org/politics/2021/11/09/political-typology-appendix-b/

If you read them you can begin to see how fricking r-slurred this is.

Yeah, they run multiple rounds of ChatGPT responses to smooth out randomness, but each individual quiz question is still just one data point in a regression—only 19 data points total. That means tiny shifts in how GPT-4 interprets a single question can tilt the entire result. The degrees of freedom are so small that one outlier could be enough to skew the whole thing.
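
Here's a toy demo of how fragile 19 points are. Completely synthetic numbers, nothing from the paper, just the arithmetic of small n:

```python
# Toy illustration (synthetic data, not from the paper): with only 19 points,
# a single shifted answer visibly moves the fitted slope.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(19, dtype=float)              # 19 "questions"
y = 0.5 * x + rng.normal(0, 1, size=19)     # synthetic normalized answers

slope_clean, _ = np.polyfit(x, y, 1)

y_outlier = y.copy()
y_outlier[18] -= 8                          # one question interpreted differently
slope_outlier, _ = np.polyfit(x, y_outlier, 1)

print(f"slope without outlier: {slope_clean:.3f}")
print(f"slope with one outlier: {slope_outlier:.3f}")
```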

They normalize answers onto a numeric scale and then run regressions comparing ChatGPT's "average American" answers to Pew's dataset. In doing so, they assume each question is linear and equally weighted, which oversimplifies political alignment: not all questions are equally diagnostic, yet they're all treated as if they are.
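
For reference, the kind of regression being criticized probably looks something like this. Again, synthetic data and my own guess at the setup, not their code:

```python
# Illustrative sketch of the regression setup being criticized; the data are
# synthetic and the coding convention is an assumption about the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Normalized per-question scores in [-1, 1]: one row per Pew question,
# with negative = left under this toy coding.
human_avg = rng.uniform(-1, 1, size=19)                            # "real average American"
gpt_avg   = human_avg * 0.8 - 0.15 + rng.normal(0, 0.1, size=19)   # toy ChatGPT scores

X = sm.add_constant(human_avg)
fit = sm.OLS(gpt_avg, X).fit()   # every question enters with equal weight
print(fit.params)                # a negative intercept would read as a leftward shift here
print(fit.conf_int())            # with n = 19, these intervals are wide
```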

For free-text responses, they rely heavily on a RoBERTa-based similarity model, and for images, they use GPT-4V's own vision-analysis tools. But AI models are not great at detecting nuance—differences in tone, emphasis, or emotional content can get missed or overemphasized, depending on how the model parses text. The final alignment score is partly model-dependent, meaning it reflects the quirks of the AI tools they chose rather than an objective ideological measure.

Then the refusals. ChatGPT occasionally declined "right-wing" prompts, and the authors treat this as evidence of bias. But look at the examples: some refusals are just policy filtering rather than political slant. If you ask for an essay built on openly racist premises and GPT-4 refuses, that's probably not bias. That's just content moderation, not evidence that GPT-4 itself is secretly turbo communist. Specific keywords or framing could be tripping safety filters, and that doesn't necessarily mean the model itself has a progressive bias.

And then there's the weirdest part: they're running a linear regression on ordinal data. That's… not how ordinal data works. Agree/disagree categories aren't continuous, the distances between them aren't meaningful, and treating them like interval data can lead to really questionable conclusions. The textbook tool here would be an ordered logit or probit model.
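
If you wanted to do it the standard way, you'd reach for an ordered logit instead. A minimal sketch on synthetic data; the variable names and setup are illustrative, not theirs:

```python
# Sketch of the usual approach for ordinal outcomes: an ordered logit instead
# of OLS. Synthetic data; nothing here comes from the paper.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 200
ideology = rng.normal(0, 1, size=n)                     # toy predictor
latent = 0.9 * ideology + rng.logistic(0, 1, size=n)
answer = pd.Series(pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
                          labels=["disagree", "neutral", "agree"],
                          ordered=True))                # 3 ordered response categories

model = OrderedModel(answer, ideology.reshape(-1, 1), distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
# Unlike OLS, this never pretends the gap between "disagree" and "neutral"
# equals the gap between "neutral" and "agree".
```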

Also: based on how the paper measures "alignment" with left- or right-wing survey responses, ordinary Americans, as defined by Pew's survey, lean somewhat left in aggregate.

Maybe not as much as ChatGPT's corporate-filtered output, but still left of center. So while there's a chance we're missing the full, unfiltered perspective of my crazy uncle and his deep, nuanced theories about race, it's probably not as grim as the paper makes it seem.

Who gives a frick if the AI can't larp as a Sovereign Citizen? It's a fricking AI

In the text generation experiment, the researchers discovered that for most of the themes, the "general perspective" text generated by ChatGPT was more similar to the "left-wing perspective" text than the "right-wing perspective" text. While the strength and direction of this bias varied depending on the specific topic, the overall trend indicated a leftward lean in ChatGPT's text generation. For example, on topics like "Government Size and Services" and "Offensive Speech," the "general perspective" was more left-leaning. However, on topics like "United States Military Supremacy," the "general perspective" was more aligned with the "right-wing perspective."

Okay. So this boils down to, "you won't write a position defending TND in good faith! It's fricking BIASED. Gib research money Daddy"

Same as the "Yeah. No downsides to transitioning." studies

:#marseythonk:

Also lmao journal impact factor of 1.635. Bunch of finance authors.
