Le [Major Tech brand or product] shill

You will never create a shill account as great as @BraveShill. Did I start this shitty trend, or was it @pizzashill that started it :marseyhmm:?


TL;DR:

Last weekend's deplatforming of Kiwi Farms, an internet forum known for encouraging the doxxing and harassment of disfavored figures, has brought the eternal question of online content regulation once again to the fore.

Almost universally, they don't discuss enforcement.

Yet even if this plan had a wider application, it's still not clear how reviving the Obscenity Prosecution Task Force, as the lawmakers seem to want, would work in practice—and that brings us to the crux of the enforcement question.

I suspect this distinction, borrowed from the drug war, is what many would-be prohibitionists of online pornography and other objectionable content have in mind: Go after the dealers, not the users.

But when their erstwhile users make a new site, or a thousand new sites—and they will—will they all get taken down too?

Would we imprison people over pornography?


Last weekend's deplatforming of Kiwi Farms, an internet forum known for encouraging the doxxing and harassment of disfavored figures, has brought the eternal question of online content regulation once again to the fore. In this case, the deplatforming was a private business decision by Cloudflare, a web services provider, following a pressure campaign led by trans activists. But a government takedown of the site would have had fans, too. Rep. Marjorie Taylor Greene (R–Ga.), who was swatted by a Kiwi Farms user, argued that it is a "failure of our government and failure of our law enforcement to not take down a website like that," "all of these types of groups need to be completely eradicated," and "they should not be allowed to exist."

Greene's remarks were brief and primarily concerned with attacking Democrats. But even if she'd spoken longer, my guess is she would have omitted a topic almost always ignored by proponents of banning objectionable content online: Enforcement.

That would be an issue with any plan for government content regulation, especially for forum-style sites like Kiwi Farms, where a large base of users creates the content, but this glaring absence is most obvious in proposals for banning pornography. I've read a lot of these proposals, researching the idea for a chapter I contributed to a forthcoming book on the digital public square. Almost universally, they don't discuss enforcement.

Senate candidate J.D. Vance (R–Ohio), for example, has endorsed the idea of a complete porn ban, but to my knowledge, he hasn't elaborated on enforcement at all. Former First Things senior editor Matthew Schmitz's 2016 case in The Washington Post for banning porn likened it to "bans" on murder and r*pe, which raises the specter of prison time, but he lets the implication slip away without explicit comment.

A 2019 First Things suggestion of digital "zoning"—say, by limiting all porn to regulated .xxx domains—says "all pornography and indecent material that showed up outside the zone (for example, on a website with a .com or .org domain) could be deemed illegal and referred to the DOJ for prosecution." But it doesn't say who's doing those referrals and what consequences the prosecution should bring. Likewise, a contemporary argument from The Daily Wire's Matt Walsh for "much heavier regulation" or an outright ban of online porn is long on rationale but silent on enforcement.

A 2021 piece at National Review seeks to ban only free online porn and delves into constitutionality in a way the Schmitz article does not. But it too is silent on enforcement mechanisms. And an article published this past June at Fox News contends "it is time to tear down the virtual porn theaters" but devotes no space to explaining how this would be done or what punishment violators should face.

In other proposals, it's all more of the same. The only porn regulation proposal that reliably comes with any discussion of enforcement is increased prosecution under obscenity laws already on the books, as advocated most prominently in a 2019 letter to the Justice Department from several members of Congress. Obscenity, in constitutional jurisprudence, is a far narrower category than the average non-lawyer would suppose. Yet even if this plan had a wider application, it's still not clear how reviving the Obscenity Prosecution Task Force, as the lawmakers seem to want, would work in practice—and that brings us to the crux of the enforcement question.

In its original incarnation, from 2005 to 2011, the task force prosecuted producers and distributors of obscenity. I suspect this distinction, borrowed from the drug war, is what many would-be prohibitionists of online pornography and other objectionable content have in mind: Go after the dealers, not the users.

But that division is not so neatly drawn when it comes to online content (and drugs, but that's another point for another day). You can take down the big targets, the Pornhubs and Kiwi Farms of the world, easily enough. Maybe you toss their owners in jail or hit them with big fines.

But when their erstwhile users make a new site, or a thousand new sites—and they will—will they all get taken down too? Will you get everyone who uploads a video or leaves a comment? Everyone whose internet history shows they've visited these sites? What about emailing the banned content to download for offline viewing—is that illegal too? What kind of mass surveillance apparatus are you willing to build to catch everyone who bypasses the ban?

And if you catch them, what then? Would the government fine people? Garnish their wages? Put them on a s*x offender registry? Take away their children?

Would we imprison people over pornography? For how long? (Remember, we're talking about a ban on all porn or other objectionable but currently legal content, not already and rightly illegal things like child pornography, nonconsensual pornography, or the swatting to which Greene was subjected.) Is a family better off if the dad gets three strikes and goes to prison for a year and can't find a job when he gets out? Will putting a young man addicted to pornography in the criminogenic environment of prison make him more or less likely to get his life on track?

None of this is to suggest porn is a good thing or that I'm sorry to see Kiwi Farms go. I believe pornography is evil, and from what I know of Kiwi Farms, good riddance. But as Yale law professor Stephen L. Carter observed in 2014, "making an offense criminal" doesn't simply show "how much we care about it." Ultimately, every ban creates the possibility "that the police will go armed to enforce it."

Don't get squeamish, prohibitionists. Tell us what stick you have in mind. Content ban plans can't be taken seriously until you explain what, exactly, you want the state to do to people who break your rules.


orange site

  • DOJ suit alleges Google’s exclusive deals lock out rivals

  • Company says phone makers, browsers want its search engine

If the DOJ makes Google stop paying companies to be their default search engine and ends up killing Mozzarella I'll c*m.


More: https://old.reddit.com/r/technology/comments/x7gky6/with_stable_diffusion_you_may_never_believe_what/

AI image generation is here in a big way. A newly released open source image synthesis model called Stable Diffusion allows anyone with a PC and a decent GPU to conjure up almost any visual reality they can imagine. It can imitate virtually any visual style, and if you feed it a descriptive phrase, the results appear on your screen like magic.

Some artists are delighted by the prospect, others aren't happy about it, and society at large still seems largely unaware of the rapidly evolving tech revolution taking place through communities on Twitter, Groomercord, and GitHub. Image synthesis arguably brings implications as big as the invention of the camera---or perhaps the creation of visual art itself. Even our sense of history might be at stake, depending on how things shake out. Either way, Stable Diffusion is leading a new wave of deep learning creative cowtools that are poised to revolutionize the creation of visual media.

The rise of deep learning image synthesis

Stable Diffusion is the brainchild of Emad Mostaque, a London-based former hedge fund manager whose aim is to bring novel applications of deep learning to the masses through his company, Stability AI. But the roots of modern image synthesis date back to 2014, and Stable Diffusion wasn't the first image synthesis model (ISM) to make waves this year.

In April 2022, OpenAI announced DALL-E 2, which shocked social media with its ability to transform a scene written in words (called a "prompt") into myriad visual styles that can be fantastic, photorealistic, or even mundane. People with privileged access to the closed-off tool generated astronauts on horseback, teddy bears buying bread in ancient Egypt, novel sculptures in the style of famous artists, and much more.

Not long after DALL-E 2, Google and Meta announced their own text-to-image AI models. MidJourney, available as a Groomercord server since March 2022 and open to the public a few months later, charges for access and achieves similar effects but with a more painterly and illustrative quality as the default.

Then there's Stable Diffusion. On August 22, Stability AI released its open source image generation model that arguably matches DALL-E 2 in quality. It also launched its own commercial website, called DreamStudio, that sells access to compute time for generating images with Stable Diffusion. Unlike DALL-E 2, anyone can use it, and since the Stable Diffusion code is open source, projects can build off it with few restrictions.

In the past week alone, dozens of projects that take Stable Diffusion in radical new directions have sprung up. And people have achieved unexpected results using a technique called "img2img" that has "upgraded" MS-DOS game art, converted Minecraft graphics into realistic ones, transformed a scene from Aladdin into 3D, translated childlike scribbles into rich illustrations, and much more. Image synthesis may bring the capacity to richly visualize ideas to a mass audience, lowering barriers to entry while also accelerating the capabilities of artists who embrace the technology, much like Adobe Photoshop did in the 1990s.

You can run Stable Diffusion locally yourself if you follow a series of somewhat arcane steps. For the past two weeks, we've been running it on a Windows PC with an Nvidia RTX 3060 12GB GPU. It can generate 512×512 images in about 10 seconds. On a 3090 Ti, that time goes down to four seconds per image. The interfaces keep evolving rapidly, too, going from crude command-line interfaces and Google Colab notebooks to more polished (but still complex) front-end GUIs, with much more polished interfaces coming soon. So if you're not technically inclined, hold tight: Easier solutions are on the way. And if all else fails, you can try a demo online.
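If you're curious what those arcane steps boil down to, here is a minimal sketch using Hugging Face's diffusers library, one of the more popular front ends. Everything here is an assumption to check against the current docs: the v1.4 checkpoint ID, half-precision loading, and a CUDA GPU with enough VRAM.

```python
# Minimal local Stable Diffusion sketch via Hugging Face's "diffusers" library.
# Assumes: diffusers + transformers installed, a CUDA GPU with ~10 GB VRAM,
# and that you've accepted the model license on the Hugging Face Hub.
import torch
from diffusers import StableDiffusionPipeline

# Load the v1.4 weights in half precision so they fit on consumer GPUs
# (checkpoint ID current as of this writing; newer ones may supersede it).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# One 512x512 image from a text prompt, roughly 10 seconds on an RTX 3060.
image = pipe("a photorealistic astronaut riding a horse").images[0]
image.save("astronaut.png")
```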

How Stable Diffusion works

Broadly speaking, most of the recent wave of ISMs use a technique called latent diffusion. Basically, the model learns to recognize familiar shapes in a field of pure noise, then gradually brings those elements into focus if they match the words in the prompt.
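In code, that bring-into-focus process is literally a loop. Below is a schematic sketch of the sampling stage assembled from diffusers components; it deliberately omits classifier-free guidance and other practical details, so treat it as an illustration of the steps rather than a working generator.

```python
# Schematic latent diffusion sampling loop, pieced together from a diffusers
# pipeline. Omits classifier-free guidance and safety checks for clarity,
# so real output quality will be poor; it's here to show the moving parts.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# 1. Encode the prompt into the embedding the U-Net is conditioned on.
tokens = pipe.tokenizer("a castle on a hill", padding="max_length",
                        max_length=77, return_tensors="pt")
text_embeddings = pipe.text_encoder(tokens.input_ids)[0]

# 2. Start from pure Gaussian noise in the compressed latent space
#    (1x4x64x64 latents decode to a 512x512 image).
latents = torch.randn(1, 4, 64, 64)

# 3. Repeatedly predict which part of the latents is noise and remove a
#    little of it, nudging prompt-matching shapes into focus each step.
pipe.scheduler.set_timesteps(50)
for t in pipe.scheduler.timesteps:
    noise_pred = pipe.unet(latents, t, encoder_hidden_states=text_embeddings).sample
    latents = pipe.scheduler.step(noise_pred, t, latents).prev_sample

# 4. A separate decoder (the VAE) turns the final latents into pixels.
#    (Return types vary by diffusers version; the 0.18215 scale is v1's.)
image = pipe.vae.decode(latents / 0.18215)
```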

To get started, a person or group training the model gathers images with metadata (such as alt tags and captions found on the web) and forms a large data set. In Stable Diffusion's case, Stability AI uses a subset of the LAION-5B image set, which is basically a huge image scrape of 5 billion publicly accessible images on the Internet. Recent analysis of the data set shows that many of the images come from sites such as Pinterest, DeviantArt, and even Getty Images. As a result, Stable Diffusion has absorbed the styles of many living artists, and some of them have spoken out forcefully against the practice. More on that below.

Next, the model trains itself on the image data set using a bank of hundreds of high-end GPUs such as the Nvidia A100. According to Mostaque, Stable Diffusion cost $600,000 to train so far (estimates of training costs for other ISMs typically range in the millions of dollars). During the training process, the model associates words with images thanks to a technique called CLIP (Contrastive Language-Image Pre-training), which was invented by OpenAI and announced just last year.
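CLIP itself is publicly released, so the word-image association at the heart of all this is easy to poke at directly. Here's a small sketch that scores candidate captions against an image using the transformers library; the checkpoint name is OpenAI's public one, and photo.jpg is a placeholder.

```python
# Sketch: score how well candidate captions match an image with CLIP, the
# same kind of text-image association used to steer image synthesis.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
captions = ["a photo of a cat", "a photo of a dog", "a medieval castle"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # higher = better caption/image match

for caption, p in zip(captions, logits.softmax(dim=1)[0].tolist()):
    print(f"{p:.3f}  {caption}")
```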

Through training, an ISM using latent diffusion learns statistical associations about where certain colored pixels usually belong in relation to each other for each subject. So it doesn't necessarily "understand" their relationship at a high level, but the results can still be stunning and surprising, making inferences and style combinations that seem very intelligent. After the training process is complete, the model never duplicates any images in the source set but can instead create novel combinations of styles based on what it has learned. The results can be delightful and wildly fun.

At the moment, Stable Diffusion doesn't care if a person has three arms, two heads, or six fingers on each hand, so unless you're a wizard at crafting the text prompts necessary to generate great results (which AI artists sometimes call "prompt engineering"), you'll probably need to generate lots of images and cherry-pick the best ones. Keep in mind that the more a prompt matches captions for known images in the data set, the more likely you'll get the result you want. In the future, it's likely that models will improve enough to reduce the need for cherry-picking---or some kind of internal filter will do the picking for you.
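In practice, "generate lots of images and cherry-pick" is just a loop over random seeds, and recording each seed lets you regenerate a keeper later. A sketch under the same assumptions as the earlier diffusers example:

```python
# Sketch: one prompt, many seeds; save each candidate with its seed in the
# filename so any keeper can be regenerated exactly later.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "an oil painting of a lighthouse in a storm"
for seed in range(8):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"lighthouse_seed{seed}.png")  # cherry-pick from these by eye
```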

Ethical and legal concerns abound

As hinted above, Stable Diffusion's public release has raised alarm bells among people who fear its cultural and economic impact. Unlike DALL-E 2, Stable Diffusion's trained model (the "weights") is available for anyone to use without any hard restrictions. The official Stable Diffusion release (and DreamStudio) includes automatic "NSFW" filters (nudity) and an invisible tracking watermark embedded in the images, but these restrictions can easily be circumvented in the open source code. This means Stable Diffusion can be used to create images that OpenAI currently blocks with DALL-E 2: propaganda, violent imagery, pornography, images that potentially violate corporate copyright, celebrity deepfakes, and more. In fact, there are already some private Groomercord servers dedicated to pornographic output from the model.

To be clear, Stable Diffusion's license officially forbids many of these uses, but with the code and weights out in the wild, enforcement will prove very difficult, if not impossible. When presented with these concerns, Mostaque said that he feels the benefits of having this kind of tool out in the open where it can be scrutinized outweigh the potential drawbacks. In a short interview, he told us, "We believe in individual responsibility and agency. We included an ethical use policy and cowtools to mitigate harm."

Also, Stable Diffusion has drawn the ire of artists on Twitter due to the model's ability to imitate the style of living artists, as mentioned above. (And despite the claims of some viral tweets, Stability AI has never advertised this ability. One of the most shared tweets mistakenly pulled from an independent study done by an AI researcher.) In the quest for data, the image set used to train Stable Diffusion includes millions of pieces of art gathered from living artists without consultation with the artists, which raises profound ethical questions about authorship and copyright. Scraping the data appears lawful by US legal precedent, but one could argue that the law might be lagging behind rapidly evolving technology that upends previous assumptions about how public data might be utilized.

As a result, if image synthesis technology becomes adopted by major corporations in the future (which may be coming soon---"We have a collaborative relationship with Adobe," says Mostaque), companies might train their own models based on a "clean" data set that includes licensed content, opt-in content, and public domain imagery to avoid some of these ethical issues, even if using an Internet scrape is technically legal. We asked Mostaque if he had any plans along these lines, and he replied, "Stability is working on a range of models. All models by ourselves and our collaborators are legal within their jurisdictions."

Another issue with diffusion models from all vendors is cultural bias. Since these ISMs currently work by scraping the Internet for images and their related metadata, they learn social and cultural stereotypes present in the data set. For example, early on in the Stable Diffusion beta on its Groomercord server, testers found that almost every request for a "beautiful woman" involved unintentional nudity of some kind, which reflects how Western society often depicts women on the Internet. Other cultural and racist stereotypes abound in ISM training data, so researchers caution that it should not be used in a production environment without significant safeguards in place, which is likely one reason why other powerful models such as DALL-E 2 and Google's Imagen are still not broadly available to the public.

While concerns about data set quality and bias echo strongly among some AI researchers, the Internet remains the largest source of images with metadata attached. This trove of data is freely accessible, so it will always be a tempting target for developers of ISMs. Attempting to manually write descriptive captions for millions or billions of images for a brand-new ethical data set is probably not economically feasible at the moment, so it's the heavily biased data on the Internet that is currently making this technology possible. Since there is no universal worldview across cultures, to what degree image synthesis models filter or interpret certain ideas will likely remain a value judgment among the different communities that use the technology in the future.

What comes next

If historical trends in computing are any indication, odds are high that what now takes a beefy GPU will eventually be possible on a pocket smartphone. "It is likely that Stable Diffusion will run on a smartphone within a year," Mostaque told us. Also, new techniques will allow training these models on less expensive equipment over time. We may soon be looking at an explosion in creative output fueled by AI.

Stable Diffusion and other models are already starting to take on dynamic video generation and manipulation, so expect photorealistic video generation via text prompts before too long. From there, it's logical to extend these capabilities to audio and music, real-time video games, and 3D VR experiences. Soon, advanced AI may do most of the creative heavy lifting with just a few suggestions. Imagine unlimited entertainment generated in real-time, on demand. "I expect it to be fully multi-modal," said Mostaque, "so you can create anything you can imagine, like the Star Trek holodeck experience."

ISMs are also a dramatic form of image compression: Stable Diffusion takes hundreds of millions of images and squeezes knowledge about them into a 4.2GB weights file. With the correct seed and settings, certain generated images can be reproduced deterministically. One could imagine using a variation of this technology in the future to compress, say, an 8K feature film into a few megabytes of text. Once that's the case, anyone could compose their own feature films that way as well. The implications of this technology are only just beginning to be explored, so it may take us in wild new directions we can't foresee at the moment.
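That determinism is easy to check: holding the seed and settings fixed, two runs on the same setup should produce identical pixels, which is what lets a prompt-plus-seed "recipe" stand in for the image itself. A sketch, again assuming the diffusers setup from above:

```python
# Sketch: the "feature film in a few bytes of text" claim in miniature.
# A prompt plus a seed plus fixed settings deterministically reproduces
# the same picture (on the same hardware/software stack).
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

recipe = {"prompt": "a watercolor fox in a snowy forest", "seed": 1234}

def render(r):
    g = torch.Generator("cuda").manual_seed(r["seed"])
    return pipe(r["prompt"], generator=g).images[0]

a, b = render(recipe), render(recipe)
print(np.array_equal(np.asarray(a), np.asarray(b)))  # True on the same setup
```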

Realistic image synthesis models are potentially dangerous for reasons already mentioned, such as the creation of propaganda or misinformation, tampering with history, accelerating political division, enabling character attacks and impersonation, and destroying the legal value of photo or video evidence. In the AI-powered future, how will we know if any remotely produced piece of media came from an actual camera, or if we are actually communicating with a real human? On these questions, Mostaque is broadly hopeful. "There will be new verification systems in place, and open releases like this will shift the public debate and development of these cowtools," he said.

That's easier said than done, of course. But it's also easy to be scared of new things. Despite our best efforts, it's difficult to know exactly how image synthesis and other AI-powered technologies will affect us on a societal scale without seeing them in wide use. Ultimately, humanity will adapt, even if our cultural frameworks end up changing radically in the process. It has happened before, which is why the Ancient Greek philosopher Heraclitus reportedly said, "The only constant is change."

In fact, there's a photo of him saying that now, thanks to Stable Diffusion.

https://arstechnica.com/information-technology/2022/09/with-stable-diffusion-you-may-never-believe-what-you-see-online-again/


Just wanted to express my gratitude and appreciation of the hard and amazing work @dang does for this community; he is the glue that binds us together, and on such a sad day, you appreciate the good things in life.

So thank you @dang.

:#marseybootlicker2:


:#marseyrave:

Oberlin College, known as a bastion of progressive politics, said on Thursday that it would pay $36.59 million to a local bakery that said it had been defamed and falsely accused of racism after a worker caught a Black student shoplifting.

That 2016 dispute with Gibson’s Bakery resulted in a yearslong legal fight and resonated beyond the small college town in Ohio, turning into a bitter national debate over criminal justice, race, free speech and whether the college had failed to hold students to account.

The decision by the college’s board of trustees, announced Thursday, came nine days after the Ohio Supreme Court had declined to hear the college’s appeal of a lower-court ruling.

“Truth matters,” Lee E. Plakas, the lawyer for the Gibson family, said in an email Thursday. “David, supported by a principled community, can still beat Goliath.”

In a statement, Oberlin said that “this matter has been painful for everyone.” It added, “We hope that the end of the litigation will begin the healing of our entire community.”

The college acknowledged that the size of the judgment, which includes damages and interest, was “significant.” But it said that “with careful financial planning,” including insurance, it could be paid “without impacting our academic and student experience.” Oberlin has a robust endowment of nearly $1 billion.

The case hinged on whether Oberlin officials had defamed the bakery by supporting students who accused it of racial profiling, and the verdict, essentially finding that the officials had done so, may make other colleges and universities think twice about joining student causes, legal experts said.

“Such a large amount is certainly going to make institutions around the country take notice, and to be very careful about the difference between supporting students and being part of a cause,” said Neal Hutchens, a professor of higher education at the University of Kentucky. “It wasn’t so much the students speaking; it’s the institution accepting that statement uncritically. Sometimes you have to take a step back.”

Professor Hutchens said it also made a difference that Gibson's was a small family business, not a large multinational corporation like Walmart or Amazon, which would be better able to sustain the economic losses from such a protest.

Oberlin is a small liberal arts college with a reputation for turning out students who are strong in the arts and humanities and for its progressive politics, leaning heavily on its history of being a stop on the Underground Railroad as well as one of the first colleges to admit Black students. Tuition at Oberlin is more than $61,000 a year, and the overall cost of attendance tops $80,000 a year. The college is also very much part of the town, which is economically dependent on the school and its students. The bakery, across the street from the college, sold donuts and chocolates, and was considered a must-eat part of the Oberlin dining experience.

The incident that started the dispute unfolded in November 2016, when a student tried to buy a bottle of wine with a fake ID while shoplifting two more bottles by hiding them under his coat, according to court papers.

Allyn Gibson, a son and grandson of the owners, who is white, chased the student out onto the street, where two of the student's friends, also Black students at Oberlin, joined in the scuffle. The students later pleaded guilty to various charges.

That altercation led to two days of protests; several hundred students gathered in front of the bakery, accusing it of having racially profiled its customers, according to court papers.

The lawsuit filed by Gibson's contended that Oberlin had defamed the bakery when the dean of students, Meredith Raimondo, and other members of the administration took sides in the dispute by attending the protests, where fliers, peppered with capital letters, urged a boycott of the bakery and said that it was a "RACIST establishment with a LONG ACCOUNT OF RACIAL PROFILING and DISCRIMINATION."

Gibson's also presented testimony that Oberlin had stopped ordering from the bakery but had offered to restore its business if charges were dropped against the three students or if the bakery gave students accused of shoplifting special treatment, which it refused to do.

The store said that the college's stance had driven customers away, for fear of being perceived as supporting an establishment that the college had tarred as racist.

Oberlin disputed some aspects of that account and countered that students were exercising their First Amendment right to free speech. The administration said it had only been trying to keep the peace. The college's court papers also said that Allyn Gibson was trained in martial arts and had brought public criticism on the store by chasing the student out of the store and into public view.

In the spring, a three-judge panel of the Ohio Court of Appeals confirmed the jury's finding, after a six-week trial, that Oberlin was liable for libel, intentional infliction of emotional distress and intentional interference with a business relationship --- that it had effectively defamed the business by siding with the protesters. The original jury award was even higher, at $44 million in punitive and compensatory damages, which was reduced by a judge. The latest amount consists of about $5 million in compensatory damages, nearly $20 million in punitive damages, $6.5 million in attorney's fees and almost $5 million in interest.

In its ruling, the Court of Appeals agreed that students had a right to protest. But the court said that the flier and a related student senate resolution --- which said that the store had a history of racial profiling --- were not constitutionally protected opinion.

"The message to other colleges is to have the intestinal fortitude to be the adult in the room," Mr. Plakas said in an interview after the jury had awarded damages in June 2019.

After the 2019 jury award against Oberlin, Carmen Twillie Ambar, the college president, said that the case was far from over and that "none of this will sway us from our core values." The college said then that the bakery's "archaic chase-and-detain policy regarding suspected shoplifters was the catalyst for the protests."

But in its statement on Thursday, Oberlin hinted that the protracted and bitter fight had undermined its relationship with the people and businesses in the surrounding community.

"We value our relationship with the city of Oberlin," its statement said. "And we look forward to continuing our support of and partnership with local businesses as we work together to help our city thrive."

https://www.nytimes.com/2022/09/08/us/oberlin-bakery-lawsuit.html


Orange site: https://news.ycombinator.com/item?id=32771071

Although tech platforms can help keep us connected, create a vibrant marketplace of ideas, and open up new opportunities for bringing products and services to market, they can also divide us and wreak serious real-world harms. The rise of tech platforms has introduced new and difficult challenges, from the tragic acts of violence linked to toxic online cultures, to deteriorating mental health and wellbeing, to basic rights of Americans and communities worldwide suffering from the rise of tech platforms big and small.

Today, the White House convened a listening session with experts and practitioners on the harms that tech platforms cause and the need for greater accountability. In the meeting, experts and practitioners identified concerns in six key areas: competition; privacy; youth mental health; misinformation and disinformation; illegal and abusive conduct, including sexual exploitation; and algorithmic discrimination and lack of transparency.

One participant explained the effects of anti-competitive conduct by large platforms on small and mid-size businesses and entrepreneurs, including restrictions that large platforms place on how their products operate and potential innovation. Another participant highlighted that large platforms can use their market power to engage in rent-seeking, which can influence consumer prices.

Several participants raised concerns about the rampant collection of vast troves of personal data by tech platforms. Some experts tied this to problems of misinformation and disinformation on platforms, explaining that social media platforms maximize "user engagement" for profit by using personal data to display content tailored to keep users' attention---content that is often sensational, extreme, and polarizing. Other participants sounded the alarm about risks for reproductive rights and individual safety associated with companies collecting sensitive personal information, from where their users are physically located to their medical histories and choices. Another participant explained why mere self-help technological protections for privacy are insufficient. And participants highlighted the risks to public safety that can stem from information recommended by platforms that promotes radicalization, mobilization, and incitement to violence.

Multiple experts explained that technology now plays a central role in access to critical opportunities like job openings, home sales, and credit offers, but that too often companies' algorithms display these opportunities unequally or discriminatorily target some communities with predatory products. The experts also explained that the lack of transparency means that the algorithms cannot be scrutinized by anyone outside the platforms themselves, creating a barrier to meaningful accountability.

One expert explained the risks of social media use for the health and wellbeing of young people, explaining that while for some, technology provides benefits of social connection, there are also significant adverse clinical effects of prolonged social media use on many children and teens' mental health, as well as concerns about the amount of data collected from apps used by children, and the need for better guardrails to protect children's privacy and prevent addictive use and exposure to detrimental content. Experts also highlighted the magnitude of illegal and abusive conduct hosted or disseminated by platforms, but for which they are currently shielded from being held liable and lack adequate incentive to reasonably address, such as child sexual exploitation, cyberstalking, and the non-consensual distribution of intimate images of adults.

The White House officials closed the meeting by thanking the experts and practitioners for sharing their concerns. They explained that the Administration will continue to work to address the harms caused by a lack of sufficient accountability for technology platforms. They further stated that they will continue working with Congress and stakeholders to make bipartisan progress on these issues, and that President Biden has long called for fundamental legislative reforms to address these issues.

Attendees at today's meeting included:

  • Bruce Reed, Assistant to the President & Deputy Chief of Staff

  • Susan Rice, Assistant to the President & Domestic Policy Advisor

  • Brian Deese, Assistant to the President & National Economic Council Director

  • Louisa Terrell, Assistant to the President & Director of the Office of Legislative Affairs

  • Jennifer Klein, Deputy Assistant to the President & Director of the Gender Policy Council

  • Alondra Nelson, Deputy Assistant to the President & Head of the Office of Science and Technology Policy

  • Bharat Ramamurti, Deputy Assistant to the President & Deputy National Economic Council Director

  • Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology

  • Tarun Chhabra, Special Assistant to the President & Senior Director for Technology and National Security

  • Dr. Nusheen Ameenuddin, Chair of the American Academy of Pediatrics Council on Communications and Media

  • Danielle Citron, Vice President, Cyber Civil Rights Initiative, and Jefferson Scholars Foundation Schenck Distinguished Professor in Law, Caddell and Chapman Professor of Law, University of Virginia School of Law

  • Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology

  • Damon Hewitt, President and Executive Director, Lawyers' Committee for Civil Rights Under Law

  • Mitchell Baker, CEO of the Mozzarella Corporation and Chairwoman of the Mozzarella Foundation

  • Karl Racine, Attorney General for the District of Columbia

  • Patrick Spence, Chief Executive Officer, Sonos

Principles for Enhancing Competition and Tech Platform Accountability

With the event, the Biden-Harris Administration announced the following core principles for reform:

  1. Promote competition in the technology sector. The American information technology sector has long been an engine of innovation and growth, and the U.S. has led the world in the development of the Internet economy. Today, however, a small number of dominant Internet platforms use their power to exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage. We need clear rules of the road to ensure small and mid-size businesses and entrepreneurs can compete on a level playing field, which will promote innovation for American consumers and ensure continued U.S. leadership in global technology. We are encouraged to see bipartisan interest in Congress in passing legislation to address the power of tech platforms through antitrust legislation.

  2. Provide robust federal protections for Americans' privacy. There should be clear limits on the ability to collect, use, transfer, and maintain our personal data, including limits on targeted advertising. These limits should put the burden on platforms to minimize how much information they collect, rather than burdening Americans with reading fine print. We especially need strong protections for particularly sensitive data such as geolocation and health information, including information related to reproductive health. We are encouraged to see bipartisan interest in Congress in passing legislation to protect privacy.

  3. Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services. Children, adolescents, and teens are especially vulnerable to harm. Platforms and other interactive digital service providers should be required to prioritize the safety and wellbeing of young people above profit and revenue in their product design, including by restricting excessive data collection and targeted advertising to young people.

  4. Remove special legal protections for large tech platforms. Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials. The President has long called for fundamental reforms to Section 230.

  5. Increase transparency about platforms' algorithms and content moderation decisions. Despite their central role in American life, tech platforms are notoriously opaque. Their decisions about what content to display to a given user and when and how to remove content from their sites affect Americans' lives and American society in profound ways. However, platforms are failing to provide sufficient transparency to allow the public and researchers to understand how and why such decisions are made, their potential effects on users, and the very real dangers these decisions may pose.

  6. Stop discriminatory algorithmic decision-making. We need strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.


I will never understand redditors jerking themselves into a frothing mixture of c*m and shit over RCS

r/android

I know the EU is working on legislation that touches on cross platform messaging, I hope it comes to fruition.

US redditard hoping the EU will fix a problem that, at this point, only exists in the US

Buy your mom an iPhone - Tim Cook

I love Apple but I really hope this bites him back. Gosh this is most anti trust thing I have read in a while.

Android user claiming to love Apple. Very believable. Also hopes they get reamed for anti-trust for telling people to buy their product. The smartest redditor.

r/apple

Tim is well aware that offering iMessage on Android or adopting RCS would cause a significant portion of their customer base to consider Android, and would do nothing to attract users to iOS. It would be like Microsoft suddenly offering DirectX or ActiveX plugins on macOS in the mid-2000s.

This dude believes that 25%+ of iPhone users will switch to Android because of this feature that no normie has ever heard of

Buy your mom an iPhone

Capitalism at work, folks!


He’s being a bit classist there.

:marseyrevolution:


challenging the United States Treasury Department’s sanctions of the Tornado Cash smart contracts and asking the Court to remove them from the U.S. sanctions list.

:#marseysal:

Dr Richard Stallman publishes a GNU C Manual :marseyneko:

compiled PDF by some anon: https://www.cyberciti.biz/files/GNU-C-Language-Manual/GNU%20C%20Language%20Manual.pdf

@nekobit

Archivemaxxing

You know how Snappy has 3 links? What if you archived one of those links, then archived that archive with a different archive service, and so on. How deep could you go?

Here's an example: https://web.archive.org/web/20220908013516/https://ghostarchive.org/archive/UHhck
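For anyone who wants to automate the experiment, here's a rough Python sketch using only the Wayback Machine (re-archiving its own snapshots rather than alternating services like the example above). The /save/ endpoint is real; the assumption that the final redirect URL is the new snapshot reflects observed behavior, not a documented guarantee.

```python
# Rough "archivemaxxing" sketch: archive a URL, then archive the resulting
# snapshot, then archive THAT, and so on, via the Wayback Machine's /save/.
import time
import requests

url = "https://ghostarchive.org/archive/UHhck"  # starting link
for depth in range(1, 6):
    resp = requests.get("https://web.archive.org/save/" + url, timeout=120)
    resp.raise_for_status()
    url = resp.url  # final URL after redirects, assumed to be the snapshot
    print(f"depth {depth}: {url}")
    time.sleep(30)  # be polite; the save endpoint rate-limits aggressively
```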

Apple event

Darn burgers need to change their presentation. Watching it I feel like I am 10 again: zero evolution in how they present. They say "the new Apple Watch is the best we yet made", and I mean, of course it's the best, it's your newest, and they use the same phrases every fricking year aaaa

They had black waman, Asian waman, white waman and pregnant waman, mb I missed waman with pipi ?


Orange site: https://news.ycombinator.com/item?id=32745602

The 2022 Russian invasion of Ukraine emphasises the role social media plays in modern-day warfare, with conflict occurring in both the physical and information environments. There is a large body of work on identifying malicious cyber-activity, but less focus on the effect this activity has on the overall conversation, especially with regards to the Russia/Ukraine conflict. Here, we employ a variety of techniques including information theoretic measures, sentiment and linguistic analysis, and time series techniques to understand how bot activity influences wider online discourse. By aggregating account groups we find significant information flows from bot-like accounts to non-bot accounts, with behaviour differing between sides. Pro-Russian non-bot accounts are most influential overall, with information flows to a variety of other account groups. No significant outward flows exist from pro-Ukrainian non-bot accounts, with significant flows from pro-Ukrainian bot accounts into pro-Ukrainian non-bot accounts. We find that bot activity drives an increase in conversations surrounding angst (p = 2.450 × 10^-4) as well as those surrounding work/governance (p = 3.803 × 10^-18). Bot activity also shows a significant relationship with non-bot sentiment (p = 3.76 × 10^-4), and we find the relationship holds in both directions. This work extends and combines existing techniques to quantify how bots are influencing people in the online conversation around the Russia/Ukraine invasion. It opens up avenues for researchers to understand quantitatively how these malicious campaigns operate, and what makes them impactful.
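The "information flows" here are presumably transfer-entropy-style measures. For a feel of what that means, below is a self-contained toy sketch of discrete transfer entropy between two binary activity series; it's a generic plug-in estimator, not the paper's actual pipeline.

```python
# Toy sketch: transfer entropy TE(X -> Y) between two binary activity series,
# e.g. hourly "did bot accounts post" (X) vs "did non-bot accounts post" (Y).
# Generic plug-in estimator over discrete states; not the paper's method.
import math
from collections import Counter

def transfer_entropy(x, y):
    """TE(X->Y) = sum p(y1,y0,x0) * log2[p(y1|y0,x0) / p(y1|y0)], in bits."""
    n = len(x) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_now, x_now)
    yx = Counter(zip(y[:-1], x[:-1]))              # (y_now, x_now)
    yy = Counter(zip(y[1:], y[:-1]))               # (y_next, y_now)
    y0s = Counter(y[:-1])                          # y_now
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n                     # p(y1, y0, x0)
        p_cond_full = c / yx[(y0, x0)]      # p(y1 | y0, x0)
        p_cond_self = yy[(y1, y0)] / y0s[y0]  # p(y1 | y0)
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

bots   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
humans = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1]  # echoes the bots one step later
print(transfer_entropy(bots, humans))  # > 0: information flows bots -> humans
```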

https://i.rdrama.net/images/1684135304753438.webp

https://www.docdroid.net/cshXn1q/220807038-pdf

Kiwi Farms has been removed from the Internet Archive

TL;DR: CLEAN IT UP JANNY 🤣🤣🤣

One of OP’s posts was removed by the moderators of /r/emulation, a subreddit about emulation of older electronic devices, usually related to old video game consoles. Apparently, it linked to a site that contained links to copyrighted material, AKA video game roms. In the comment chain, OP insists the site they linked only provided ROM metadata and was not related to ROM downloads. One of the mods who removed the post appears multiple times in this thread and accuses OP of misrepresenting the content of his removed post, which he contends linked to a ROM download site.

Make sure you check out the entire comment chains; I've left a number of comments out for brevity's sake, but it's all definitely worth a read!

Original Comment-

How do I become moderator here? I've been told this sub is understaffed, but there were two moderators berating me for my post instead of doing anything productive.

How do I contribute so this sub so moderation team will get, on average, less entitled to their position but more constructively helpful?

Reply chain #1-

One of the mods OP was arguing with about his removed post steps into the ring.

A good start would be contributing to the community and not ranting that everyone is harassing you because a rom post was removed. You were told why and it wasn't good enough. You are skipping past demanding to speak to the manager, you want to BE the manager.

Edit: I'm going to bed. Ain't nobody got time for this Karen shit at this hour.

I did actually contribute with a list of my known rom information websites, but your buddy decided to remove it xD Please don't tell me what I do wrong because those are all lies xD

No I wasn't told why. Two people berated me for my attitude, and zero constructive feedback was provided. I'm pretty sure I'm not the Karen here xD :marseyxd:

What is the acceptable title of the post your choosy lordship?

Nighty night.

And please, refrain from responding if all you can say is basically gaslamping, just because you don't want to put any work in this sub.

I did contribute and declared my availability to contribute even more, in ways two moderators already failed spectacularly, writhing around the topic instead of offering anything constructive.

I don't want to be the manager, I need to be The Manager because two moderators I spoke to already, leave MUCH to be desired. And I'm pretty sure I wouldn't harass people with moderation privileges without providing any constructive feedback.

What's more, I definitely wouldn't feel too entitled, important or emotionally fragile for a debate with different opinions and would not straight out offend people like that because of my own personal mental projections. xD :marseyxd:

I'm already a better moderator, just lacking proper permission privileges. Yet.

Reply chain #2-

Someone tries to hint to OP that their behavior is coming off as rude. OP doesn't care.

“less entitled to their position but more constructively helpful?” This is totally the way to get people to like you

Does it look like I care about being liked? Objective quality is what matters, not subjective feelings. Respect will come later as a result of actions, not insincere sentences.

Let's just say I wouldn't hire you if you came in for a job interview with that attitude

Have you been harrassed by moderation team too? You don't have to suck up to aberrant mentality and be toxic to other people just because they treated you badly too. I'm on your side.

Eventually the Mod he was arguing with earlier in the same thread steps in and joins the fray:

Making you a mod would not be an improvement. At the slightest disagreement you make numerous accusations of harassment and hurt feelings, demanding to be put in charge despite never interacting with the community before in any way, you have in this very question thread demonstrated a lack of understanding of the subject matter. That is not even counting how you misrepresented your own original post linking to a rom site.

Learn more about the subject matter. -Positively- interact with the community more. Drop the persecution complex. Then maybe we will reevaluate your request.

Till then, thank you for your interest but we will keep looking for a better suited candidate.

OP strikes back, shit really hits the fan here:

Nah, you're just scared that some base random will be such a massive game changer, that your fragile sense of self worth will be utterly destroyed. And you are right. I would totally prove, by actions, that current moderation team is complacent and lazy.

The fact that you're making up accusations is basically proof of that. I did tell you not to interact with me until we find actual competent moderator. Did I? Where is your understanding?

But eventually you will find someone so lazy, passive, meek and deprived of any semblances of personality, that it will be a perfect pick for what you call a "team".

Barbs are traded back and forth, and eventually the Mod gives up trying to reason with OP:

Alrighty I've given you more than enough chances to act like a halfway decent person. Its time for you to go away for a bit. Scream harassment into the void if you like. I've run out of patience for this Karen persecution complex game.

S R D


How will Brazilian Apple Fanboys cope?


:#marseysoycry:

Over the last few days, there's been a lot of discussion about kiwifarms being taken offline by Cloudflare. As is expected from HN, the average response was one appalled by the apparent free speech violation, and a general concern for the precedent being set.

While I understand the sentiment, and have thought the same in the past ( with Aaron Swartz or Chelsea Manning); I want to pose an alternative question: What's the precedence set by continuing to do business with kiwifarms?

Over the last few days there's been a lot I've wanted to say, but I felt like I couldn't speak. I've been on HN for the better part of a decade, but I can't say anything as myself. Instead I need to use a throwaway account and only connect over a VPN, because I'm a trans woman talking about kiwifarms.

I've seen how dangerous it can be for a trans woman to stick her head up. I've watched friends and strangers alike be harassed, attacked, SWAT'd, and doxxed. Not public figures (though they don't deserve death threats either), just regular trans people trying to live their lives and speak their experiences.

I no longer feel like I can speak up, for fear of illegal reprisals. Why should they be allowed to infringe on my rights? Why should I have to hide?

Cloudflair's decision to stop doing business with kiwifarms is a step towards free speech, not away.

:#marseysoycry::#marseysoycry::#marseysoycry:
