As someone who genuinely thinks there is significant cause for concern in AI's ability to leverage convincing natural human language through LLMs to "manufacture" consent for special interests, I've long worried we are well past that point online. I suspected, like many, that this is a current problem, yet I lacked proof. The only evidence I had was sheer game-theoretic logic: if it's possible, effective, and cheap, special interests will absolutely be engaging in it.
I received a lot of pushback, because many people don't want to believe this is a likely reality. Many are uncomfortable admitting that the opinions they hold on world affairs, domestic politics, and corporate perception are likely heavily influenced by false social proof manufactured with AI. So I set out to prove to myself that I'm not crazy, and that this is relatively easy, effective, cheap, and would go completely unnoticed.
(Note: I will not be sharing any of the details of my own experiment, for two reasons. I want to avoid a ban, and I don't want anyone sticking their fingers in my stuff.)
Step 1: Defining the goal
This was the easiest part. Thanks to our divided country here in the USA, there was no limit to the topics I could choose from. Since this is Reddit, and imagining myself as someone trying to manufacture consent, I obviously didn't want to push a right-wing issue, because I don't have the scale or the will to even attempt something like that. It made more sense to go with a left-wing partisan issue. I specifically chose a topic I find a bit hypocritical, though, so as to allow for left-wing dissent on the subject I was targeting.
I originally decided to monitor /r/new of about 10 different politically focused subreddits that aren't explicitly partisan... But due to lack of resources I narrowed it down to 3: one that was explicitly partisan, one that wasn't explicitly partisan (but clearly had a bias), and another that obviously had a bias but wasn't what you'd typically think of as a political subreddit (much more neutral).
I wanted to monitor /r/subreddit/new, wait randomly between 3 and 20 minutes, scan the comments for relevant trigger topics, and reply to just one or two if more than 3 triggering events existed.
Step 2: Creating the model
This part was also easy. First, I had to find the "personality" I wanted to build the model around, so I tried, for fun, to see if I could get GPT-3 to write a Python script to scrape those user profiles for me. It was surprisingly easy to find plenty of users who militantly held the position I was looking for, and who post all day on Reddit like it's their darn job. I'm still not sure if I just created a GPT-3 model off another GPT-3 model (turtles all the way down), but I digress.
Needless to say, I couldn't figure out a good way to get GPT-3 to write a script that scraped all the comments on their profiles. But it got really close... The main issue I had to resolve manually was simply adding the ability to load the next page for more comments. Again, not too hard for most people, though I kept getting caught up in small issues because I haven't programmed in a while and had to keep looking up basic technical stuff dealing with CSS and HTML values.
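For the curious, the pagination fix boils down to following a cursor. My script parsed raw HTML with CSS selectors, but the same idea is much cleaner against Reddit's public JSON listings, which hand back an `after` token pointing at the next page. A minimal sketch (the username and page count are placeholders, not from my actual run):

```python
import time
import requests

def scrape_user_comments(username, pages=10):
    """Pull a user's comment history via Reddit's public JSON endpoint,
    following the 'after' cursor to load each next page."""
    headers = {"User-Agent": "research-scraper/0.1"}  # Reddit rejects the default UA
    url = f"https://www.reddit.com/user/{username}/comments.json"
    comments, after = [], None
    for _ in range(pages):
        params = {"limit": 100, "after": after}
        data = requests.get(url, headers=headers, params=params).json()["data"]
        comments += [c["data"]["body"] for c in data["children"]]
        after = data["after"]  # cursor for the next page
        if after is None:      # no more pages left
            break
        time.sleep(2)          # stay polite / under the rate limit
    return comments
```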
Once I had the users' comments scraped, I blended them together and trained the custom model, which only cost a few bucks. I was actually a bit surprised how cheap it was to create my own political activist personality.
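For context on what "training" meant here: the GPT-3-era fine-tuning flow just wanted a JSONL file of prompt/completion pairs that you point the OpenAI tooling at. Something roughly like this (the file name and the empty-prompt trick are my illustration, not a prescription):

```python
import json

# Blend the scraped comments into the prompt/completion JSONL format the
# (GPT-3-era) OpenAI fine-tuning endpoint expected. An empty prompt with the
# comment as the completion is the laziest way to teach it a "voice".
with open("training.jsonl", "w") as f:
    for comment in comments:  # the list from the scraping step
        f.write(json.dumps({"prompt": "", "completion": " " + comment}) + "\n")
```

From there the job itself was a one-liner from the shell, along the lines of `openai api fine_tunes.create -t training.jsonl -m davinci`, and it hands back a model name you can call like any other.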
Step 3: Monitoring and scraping target subreddits for relevant posts
This was one of the most difficult parts in the "figuring it out" sense, less so technically. Trying to navigate the trigger topics and figure out whether commenters agreed with my position or disagreed was actually really hard. I spent way too much time on this until I realized the obvious: I didn't actually need to know that. I could just randomly select a few triggers and let GPT-3 naturally reply to the comment. It would naturally agree or disagree.
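To make that concrete, here's a hedged sketch of the trigger-and-reply logic described above (the trigger list, thresholds, and fine-tune name are all placeholders, not my real values):

```python
import random
import openai

TRIGGERS = ["topic a", "topic b"]  # placeholder trigger topics

def pick_targets(comments):
    """Find comments mentioning a trigger topic; per the plan in Step 1,
    if more than 3 match, reply to only one or two chosen at random."""
    hits = [c for c in comments if any(t in c["body"].lower() for t in TRIGGERS)]
    if len(hits) > 3:
        hits = random.sample(hits, random.randint(1, 2))
    return hits

def draft_reply(comment_body):
    # No stance detection needed: hand the comment to the fine-tuned model
    # (GPT-3-era completions API) and let it agree or disagree on its own.
    resp = openai.Completion.create(
        model="davinci:ft-personal-2023-01-01",  # hypothetical fine-tune ID
        prompt=comment_body + "\n\n",
        max_tokens=150,
        temperature=0.9,
    )
    return resp["choices"][0]["text"].strip()
```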
Step 4: Deployment
Easily, without a doubt, the hardest part. Since I haven't really programmed in a few years, I wasn't prepared to spend hours on tutorials again just to catch up, so to save time and mental anguish I used third-party macro programs. Each "bot" got its own instance with a unique browser user agent (everything custom, from screen resolution to Windows OS version to drivers, you name it) and its own VPN (the accounts ranged from a few years old to brand new). I convinced myself I was going this route as a safety procedure to avoid Reddit's bot detection algorithms, but in reality, it was just laziness.

It was much easier to have GPT-3 print the comment and then inject that into the macro program, which would quite literally type it out in the comment field. I'm sure an actual competent engineer could simplify this with no UI needed, but I'm just trying to prove a concept, not build a commercial-scale product here. In fact, initially I was replying to ALL comments in a thread, but I reduced it to just the parent comments to save resources since, again, I'm just trying to prove a concept - in theory it's easy to include child comments.
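I leaned on an off-the-shelf macro program for the actual typing, but a Python equivalent using something like pyautogui is only a few lines (this assumes the comment field is already focused; the delay values are arbitrary, not tuned):

```python
import random
import time

import pyautogui  # drives the "literally type it into the comment field" step

def type_like_a_human(comment):
    """Type the generated comment one character at a time with jittered
    delays, so it reads as keystrokes rather than a paste."""
    for ch in comment:
        pyautogui.write(ch)
        time.sleep(random.uniform(0.05, 0.3))  # uneven keystroke timing
```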
To avoid getting into trouble I have to be vague here, but basically I set a lot of randomization on frequency of posting, timing, length, etc... again, as a way to avoid detection. And it worked: not a single instance got shadowbanned.
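Without giving specifics away, the general shape is nothing fancier than jittered sleeps between actions, e.g. the 3-20 minute polling window mentioned earlier (the bounds here are illustrative, not my real numbers):

```python
import random
import time

def jittered_wait(lo_minutes=3, hi_minutes=20):
    # Randomize the gap between actions so posting frequency and timing
    # never settle into a machine-regular, detectable pattern.
    time.sleep(random.uniform(lo_minutes * 60, hi_minutes * 60))
```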
It only took a little troubleshooting at this stage, but eventually I got it up and running without a hitch and let it run for quite some time... Again, to be vague, I'll just say I managed to get >1,000 posts on the topic, with tons of positive karma.
Takeaway:
This was too easy to do, IMO. I'm not a tech expert by any means, yet I was able to get a small army of bots to advocate for my hypothetical special interest. It ended up costing me a small handful of pocket change, and I was able to completely automate the comment posting while I worked and slept, actively advocating for my position.
I'm now more convinced than ever that this MUST be much more widespread. If I was able to do it, actual skilled, funded, agenda-driven interests are most certainly doing it. It makes no rational sense for them not to be.
A brief comb-through showed that <10% of the comments were duds. But since a greater share of redditors than that are idiots, an outside observer probably wouldn't realize it was GPT-3 missing the mark; they'd just write it off as another idiot making little relevant sense.
One bot specifically was modelled after an exceptionally toxic user, and the replies it got back were overwhelmingly negative in tone. I could see this weaponized incredibly effectively to "curate" spaces. If I were to deploy 20 of these into a targeted space, working around the clock, it would make the space so unenjoyable for those who disagreed with my position that they'd certainly leave (no one wants to keep returning to a space that bombards you with toxicity whenever you voice a counter-opinion), leaving behind, at the very least, an echo chamber that tolerates my position, with few people going against it. Super useful for creating a sense of social proof via consensus in a space.
On the other hand, the bots modelled after nicer, more mature types got FAR less engagement - by a significant margin. However, what little engagement they did get tended to come as significantly longer replies trying to "debate" and discuss. This wasn't what I was expecting. I thought people would engage more with the nicer bots because they seemed more open to chat, but while they did get more in-depth responses, it was nowhere near the volume the more aggressive bots drew.
How the bots did in terms of upkongs by subreddit was exactly as expected. The most clearly partisan one gathered upkongs every single time, but actually less interaction. The non-explicitly-partisan one got the most engagement. And the least partisan one got the fewest upkongs, but also the longest responses.
Beyond "space curation" I could absolutely see this as super useful for getting out "talking points" on current events as they unfold. I actually think this would be my key selling point if I were to commercialize this. It would be relatively easy to quickly draft a model and immediately deploy it to Reddit to get ahead and saturate the comments with whichever favorable spin a media communication expert decides on.
So yeah, that's my little test. If anyone wants to make their own, I think this is absolutely easy to commercialize if you have the resources... and I'm sure there are many out there already privately working behind the scenes. If you hit a roadblock, let me know and I don't mind helping you through it. Cheers.
If you internalise anything said on the internet you are fully BUCK BROKEN spiritually, mentally, physically and sensually. :marseysal:
Women are especially vulnerable to this. Their genetic programming really leads them to seek and reinforce consensus.
With enough of these spread over parenting forums, you could have a significant effect on early childhood mortality if you were that way inclined.
The average American single mother is now so separated from her family and community that, when you add some sleep deprivation into the mix, she'd probably feed her baby motor oil if enough people on a parenting forum said it would cure colic.
Agreed, but unfortunately that's about 85-95 percent of people.
I agree, but is it even possible to avoid internalizing content you come across online? I have internalized things said on this website before, even though I try hard to resist that and not fall into groupthink.
I guess the real takeaway is: resist groupthink, be a person, not an internet person. Seeing is believing; don't get fooled by bots, believe in the dramacracy.