- 3
- 11
- 5
- 10
Other discussions
https://old.reddit.com/r/news/comments/z8h1kb/san_francisco_will_allow_police_to_deploy_robots/
https://old.reddit.com/r/collapse/comments/z8hhl3/san_francisco_will_allow_police_to_deploy_robots/
San Francisco will allow police to deploy robots that kill
Supervisors in San Francisco voted Tuesday to give city police the ability to use potentially lethal, remote-controlled robots in emergency situations -- following an emotionally charged debate that reflected divisions on the politically liberal board over support for law enforcement.
The vote was 8-3, with the majority agreeing to grant police the option despite strong objections from civil liberties and other police oversight groups. Opponents said the authority would lead to the further militarization of a police force already too aggressive with poor and minority communities.
Supervisor Connie Chan, a member of the committee that forwarded the proposal to the full board, said she understood concerns over use of force but that “according to state law, we are required to approve the use of these equipments. So here we are, and it’s definitely not an easy discussion.”
The San Francisco Police Department said it does not have pre-armed robots and has no plans to arm robots with guns. But the department could deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspect” when lives are at stake, SFPD spokesperson Allison Maxie said in a statement.
“Robots equipped in this manner would only be used in extreme circumstances to save or prevent further loss of innocent lives,” she said.
Supervisors amended the proposal Tuesday to specify that officers could use robots only after using alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through those alternative means. Only a limited number of high-ranking officers could authorize use of robots as a deadly force option.
San Francisco police currently have a dozen functioning ground robots used to assess bombs or provide eyes in low visibility situations, the department says. They were acquired between 2010 and 2017, and not once have they been used to deliver an explosive device, police officials said.
But explicit authorization was required after a new California law went into effect this year requiring police and sheriff's departments to inventory military-grade equipment and seek approval for its use.
The state law was authored last year by San Francisco City Attorney David Chiu while he was an assembly member. It is aimed at giving the public a forum and voice in the acquisition and use of military-grade weapons that have a negative effect on communities, according to the legislation.
A federal program has long dispensed grenade launchers, camouflage uniforms, bayonets, armored vehicles and other surplus military equipment to help local law enforcement.
In 2017, then-President Donald Trump signed an order reviving the Pentagon program after his predecessor, Barack Obama, curtailed it in 2015, triggered in part by outrage over the use of military gear during protests in Ferguson, Missouri, after the shooting death of Michael Brown.
San Francisco police said late Tuesday that no robots were obtained from military surplus, but some were purchased with federal grant money.
Like many places around the U.S., San Francisco is trying to balance public safety with treasured civilian rights such as privacy and the ability to live free of excessive police oversight. In September, supervisors agreed to a trial run allowing police to access in real time private surveillance camera feeds in certain circumstances.
Debate on Tuesday ran more than two hours, with members on each side accusing the other of reckless fearmongering.
Supervisor Rafael Mandelman, who voted in favor of the policy authorization, said he was troubled by rhetoric painting the police department as untrustworthy and dangerous.
“I think there’s larger questions raised when progressives and progressive policies start looking to the public like they are anti-police,” he said. “I think that is bad for progressives. I think it’s bad for this Board of Supervisors. I think it’s bad for Democrats nationally.”
Board President Shamann Walton, who voted against the proposal, pushed back, saying it made him not anti-police, but “pro people of color.”
“We continuously are being asked to do things in the name of increasing weaponry and opportunities for negative interaction between the police department and people of color,” he said. “This is just one of those things.”
The San Francisco Public Defender’s office sent a letter Monday to the board saying that granting police “the ability to kill community members remotely” goes against the city’s progressive values. The office wanted the board to reinstate language barring police from using robots against any person in an act of force.
On the other side of the San Francisco Bay, the Oakland Police Department has dropped a similar proposal after public backlash.
The first time a robot was used to deliver explosives in the U.S. was in 2016, when Dallas police sent in an armed robot that killed a holed-up sniper who had killed five officers in an ambush.
- 23
- 75
"We are not political, fuck politics, Nazis are welcome"
— Hector Martin (@[email protected]) (@marcan42) November 17, 2022
Is this really the kind of community you want to foster for DLang, @WalterBright? Because what you're doing is not how you get an apolitical community, it's how you get a cesspool of highly political bigots and Nazis. pic.twitter.com/nWDyU5SCc8
The D PL Twitter account responds: https://x.com/D_Programming/status/1593225931518533632
The D forums is a venue to discuss the D language and programming topics in general. Politics, social issues, religion, etc, are topics we don't want to see there. Not because we don't care, but because people are passionate about those topics and easily become inflamed.
People are still mad as frick: https://x.com/ppk_fs_/status/1593232385520254982
you apologise for people getting offended, but you don't want to apologise for saying "as long as Nazis are quiet about it, they are welcome here"?
Original guy still malding: https://x.com/marcan42/status/1593230601225928711
Straight from DLang official, folks.
I already explained how "we don't want to talk politics" (and taking no political stance as a project) is inherently welcoming of bigots and other hateful groups, and used by them as a dog whistle.
You have chosen not to listen.
Original guy ragequits twitter: https://x.com/marcan42/status/1594149597601214464
OK, this place has decayed enough and it's time to say goodbye. This account is now deprecated and the crossposter is off. Going forward you can find me on the Fediverse.
- 11
- 16
Keep justifying that $44 billion purchase QElon. I'm sure you'll totally BTFO Crapple any day now, any day 🤣🤣🤣🤣👍🏾
I'm waiting for the copium on rdrama when Twitter inevitably falls. I also cannot wait for rdrama to betray Elon and start hating on him because "muh loser".
- 2
- 4
- 2
- 8
Neither I nor Rob Fergus are listed under NYU (or anywhere) in @CSrankings https://t.co/9Qq71MA8YI
— Yann LeCun (@ylecun) November 23, 2022
- 5
- 37
To train the MineDojo framework to play Minecraft, researchers fed it 730,000 Minecraft YouTube videos (with more than 2.2 billion words transcribed), 7,000 scraped webpages from the Minecraft wiki, and 340,000 Reddit posts and 6.6 million Reddit comments describing Minecraft gameplay.
From this data, the researchers created a custom transformer model called MineCLIP that associates video clips with specific in-game Minecraft activities. As a result, someone can tell a MineDojo agent what to do in the game using high-level natural language, such as "find a desert pyramid" or "build a nether portal and enter it," and MineDojo will execute the series of steps necessary to make it happen in the game.
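For the curious, the core idea behind a CLIP-style model like MineCLIP is embedding videos and text descriptions into a shared vector space, then scoring matches by cosine similarity. A minimal toy sketch of that matching step (the embeddings below are made-up stand-ins for the real encoder outputs, and `best_matching_activity` is a hypothetical helper, not MineCLIP's actual API):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_matching_activity(clip_embedding, text_embeddings):
    # Pick the activity description whose embedding is closest to the clip's.
    return max(text_embeddings,
               key=lambda t: cosine_similarity(clip_embedding, text_embeddings[t]))

# Toy embeddings standing in for the video and text encoders.
clip = [0.9, 0.1, 0.2]
texts = {
    "find a desert pyramid": [0.8, 0.2, 0.1],
    "build a nether portal and enter it": [0.1, 0.9, 0.3],
}
print(best_matching_activity(clip, texts))  # → find a desert pyramid
```

In the real system this similarity score is learned contrastively from the YouTube data and then used as a reward signal for the agent, rather than as a simple lookup.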
Once they teach it to scream like a sperg at viewers, organic Twitch streamers will be over.
- 2
- 10
- 6
- 18
- 6
- 8
- 9
- 12
This is a battle for the future of civilization. If free speech is lost even in America, tyranny is all that lies ahead.
— Elon Musk (@elonmusk) November 29, 2022
- 20
- 33
If anyone can fight Apple with an actual chance of winning, it's @elonmusk. He needs allies, though. Who will stand with him?
— Ian Miles Cheong (@stillgray) November 28, 2022
I stand with neither.
- 15
- 26
C'mon Muskrat! I'm sure going to heckin war with Apple will work out for you. Watching Twitter implode like this has made me
- 2
- 13
Heckin yikes Elon Musk. Makes for a great /r/leopardsatemyface post.
🤣🤣🤣 He keeps self-destructing Twitter lmao 🤣🤣🤣
https://old.reddit.com/r/LeopardsAteMyFace/comments/z73cwz/i_got_fired_from_twitter_the_day_before/
https://old.reddit.com/r/EnoughMuskSpam/comments/z73xlt/i_got_fired_from_twitter_the_day_before/
https://old.reddit.com/r/elonmusk/comments/z73hu6/i_got_fired_from_twitter_the_day_before/
- 55
- 126
Apple disallowed almost anything related to Covid, especially vaccines or human origins of the virus.
— LBRY 🚀 (@LBRYcom) November 28, 2022
We had to build a list of over 20 terms to not show results for, only on Apple devices.
Apple also later rejected us because users included Pepe images in videos. pic.twitter.com/euw1ppkoKg
- 5
- 32
- 37
- 121
- 13
- 14
You can say all day long you don't look at customer data, but in my experience you end up seeing some of it anyways, completely unintentionally even. It's called thumbnail images.
You go to backup a customer's data, and the operating system itself generates thumbnails, which are miniature images of the files you're looking to back up for the customer.
So what do you do when you see 117 thumbnail images of barely dressed young girls when your customer is a grade school teacher?
Our company had to have that very discussion, with the boss even. But while the backup was in progress, the customer called back in, basically saying they had changed their mind, and declined the data backup.
That was about the most F'ed up day I ever worked in tech, because due to our own privacy policies, we had to basically ignore the fact we saw the inappropriate thumbnails we saw.
I was disgusted with the whole situation, but there wasn't anything we could do anyways. The images weren't nude, but they weren't far from it either.
We had to dismiss this obvious pervert because we weren't supposed to see the files in the first place. Thank you automatic thumbnails for messing our heads up.
- 7
- 20
Explain this, rdrama Apple fanboys.
https://old.reddit.com/r/apple/comments/z613sa/apple_is_becoming_an_ad_company_despite_privacy/
https://old.reddit.com/r/privacy/comments/z67lzp/apple_is_becoming_an_ad_company_despite_privacy/
Orange Site:
https://news.ycombinator.com/item?id=33736259
- 7
- 35
- 2
- 6