
Bard spotting on Bluesky 27 December 2024 :marseyrandom:

Here we spot wild Bardfinn Bluesky activities.

Be valid and ping !bardfinn for something worthwhile or create a new thread.

Mike Masnick (@mmasnick.bsky.social):

That one weird trick that lets you pretend that you took over Twitter to restore free speech, but regularly ban/shadowban/deboost those you don't like: just claim anything you don't like is spam.

https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:cak4klqoj3bqgk5rj6b4f5do/bafkreigv3krnqdkrezdzsxeczc3ilblx7j2lk54wiguvcvbrar5xn6goq4@jpeg


SP Dogood (@spdogood.bsky.social):

As a free speech absolutist, bring it on. I'll block it all. I can set my own filters. I don't need big brother to do it for me. The unredeemable thing about Musk, as you point out, is he's a total hypocrite who actively censors and doxes people he doesn't like. Elon hates free speech.


Mike Masnick (@mmasnick.bsky.social):

You're okay with spam?

Copyright infringement?

CSAM?


Cobweb (@cobwebmadeoflego.bsky.social):

Okay with people belonging to vulnerable groups being targeted for abuse? They might not want centralised moderation but some of us certainly do. Even with the ability to block, a site with no moderation is going to spiral down to the lowest depths pretty darn quickly.


SP Dogood (@spdogood.bsky.social):

It's a power question. I prefer the decentralized server model. If you give someone the power to protect vulnerable people on their behalf, then that same party has the power to attack the vulnerable & empower the abusers. That's what Elon did to Twitter. Educate the vulnerable to protect themselves.


Kathryn Tewson (@kathryntewson.bsky.social):

Did you answer the CSAM question and I missed it?


SP Dogood (@spdogood.bsky.social):

No, I was responding to something else. I basically conceded. I concede that all problematic images and videos should be filtered out. For the sake of debate I might defend a text-only rule that would still allow me to debate a Nazi.


Kathryn Tewson (@kathryntewson.bsky.social):

OK, cool. What about DDoS-type interaction -- like, if I made a series of bots that responded to you a thousand times a minute in total, forever?


SP Dogood (@spdogood.bsky.social):

I would still reserve the right to block them myself. But I see what you are saying. That could get really annoying. I concede that too; if the bots were infinite and I had to block them infinite times, that would suck. Fair point.


Ms. Penny Oaken, SkyWitch (@skywitches.net):

I have a stalker who started sending me r*pe & death threats 8 years ago. There is a point at which everyone draws the line at what is acceptable speech. Mine was May 06, 2016.

It isn't that I can't block his accounts at a user level; it's that he only has to get lucky once per account.


Engeldinck Humperbert (@artofficial.bsky.social):

Dude, as someone who has worked in engineering for social media mechanisms, you have no idea the amount of crap that is kept at bay by safety and moderation teams. It is a biblical flood. You literally could not possibly block it all. The social network would simply die.


rahaeli (@rahaeli.bsky.social):

This. People have NO IDEA that what they're seeing on any given platform is after massive efforts to remove illegal content, spam, and a lot of other things normal people find deeply distressing. (I don't mean "opinions they disagree with", I mean "causes literal PTSD from being exposed to.")


rahaeli (@rahaeli.bsky.social):

If you have never worked in the field, your belief that it's possible for an online service to function by "don't moderate anything and just let people block what they don't like" is predicated on the fact that 99% of the actually harmful content has already been removed by the time you get there.


SP Dogood (@spdogood.bsky.social):

Is it removed by algorithms and bot checks? Like how much of it is removed by human dbas?


Dalassa (@dalassa.bsky.social):

To add to this. I don't do what Rahaeli does, but I do have a history as a volunteer moderator going back a ways. Currently I moderate on groomercord. Groomercord itself filters out most things so they don't reach our server. We as the moderators then catch about half the problems before they post.


Dalassa (@dalassa.bsky.social):

The majority of problems that then post are nabbed within ten minutes due to one of us seeing it or the users calling us in. Then we get the minority of problems where we are all offline or a Nazi server raids us.


Dalassa (@dalassa.bsky.social):

In the first case some mod wakes up to a server full of gross-out porn and understandably upset users. In the second case it's a week of catching and banning accounts as they come in, sprinkled with some of the worst racism and gross-out porn you've ever seen.


Dalassa (@dalassa.bsky.social):

We can also tell when spammers change tactics because we get flooded for a few days before groomercord handles it.

This takes several moderators around the world to maintain coverage, and we are all volunteers. Our users don't see most of what we do because we use cowtools to keep it invisible to them.


Dalassa (@dalassa.bsky.social):

Footnote: a lot of the spam bot detection is vibes. You learn to recognize their pfps and username styles. I don't think current machine learning could do that.


Ms. Penny Oaken, SkyWitch (@skywitches.net):

ML might, on a technical level, but then there's a can of worms combined with a cold war, and detection is exponentially more expensive than deployment.
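
Purely as illustration: the "vibes" Dalassa describes (pfp and username patterns) can be approximated with crude heuristics. Here is a toy Python sketch; every detail in it (the `Account` fields, the regex, the weights and threshold) is a made-up assumption, not any platform's actual pipeline.

```python
# Toy spam-account scorer. Hypothetical throughout: real moderation
# pipelines are far larger, and adversaries adapt (the "cold war" above).
import re
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    has_default_pfp: bool
    account_age_days: int

def spam_score(acct: Account) -> float:
    """Crude additive score; higher means more spam-like."""
    score = 0.0
    if re.search(r"\d{4,}$", acct.username):
        score += 0.4  # long trailing digit run: classic generated-name shape
    if acct.has_default_pfp:
        score += 0.3  # never bothered to set an avatar
    if acct.account_age_days < 7:
        score += 0.3  # brand-new accounts skew throwaway
    return score

for a in [Account("jane_doe", False, 900), Account("user84729105", True, 1)]:
    s = spam_score(a)
    print(f"{a.username:>14}: {s:.1f} {'FLAG' if s >= 0.6 else 'ok'}")
```

Even this toy version hints at why detection costs more than deployment: a spammer only has to change a username template once, while the detector has to notice the shift, re-tune, and eat the false positives in between.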


https://i.rdrama.net/images/1721617343773228.webp


When you look in the mirror, can you tell that you're stupid?

