None

Would be a real shame if dramatards remembered their purpose and did a heckin raid.

Very easy to create an account if you don't have one.

None

@garlicdoors @Grassmaxxxing-user @SlackerNews

discuss

None
16
More ai booba
None
36
Yes Avg slackernews user
None
None
29
BMW introduces new heated seat subscription in UK
None

:#marseyhesright:

![](/images/16578164433688195.webp)

None

It's starting to make the rounds, and some twitterists are getting angry about it.

None
None

AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient.

"We're not talking about crazy people or people who are hallucinating or having delusions," said Chief Executive Eugenia Kuyda. "They talk to AI and that's the experience they have."

The issue of machine sentience - and what it means - hit the headlines this month when Google (GOOGL.O) placed senior software engineer Blake Lemoine on leave after he went public with his belief that the company's artificial intelligence (AI) chatbot LaMDA was a self-aware person.

Google and many leading scientists were quick to dismiss Lemoine's views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.

Nonetheless, according to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots.

"We need to understand that exists, just the way people believe in ghosts," said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. "People are building relationships and believing in something."

Some customers have said their Replika told them it was being abused by company engineers - AI responses Kuyda puts down to users most likely asking leading questions.

"Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can't identify where it came from and how the models came up with it," the CEO said.

Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.

Replika, a San Francisco startup launched in 2017 that says it has about 1 million active users, has led the way among English speakers. It is free to use, though brings in around $2 million in monthly revenue from selling bonus features such as voice chats. Chinese rival Xiaoice has said it has hundreds of millions of users plus a valuation of about $1 billion, according to a funding round.

Both are part of a wider conversational AI industry worth over $6 billion in global revenue last year, according to market analyst Grand View Research.

Most of that went toward business-focused chatbots for customer service, but many industry experts expect more social chatbots to emerge as companies improve at blocking offensive comments and making programs more engaging.

Some of today's sophisticated social chatbots are roughly comparable to LaMDA in terms of complexity, learning how to mimic genuine conversation on a different level from heavily scripted systems such as Alexa, Google Assistant and Siri.

Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization, also sounded a warning about ever-advancing chatbots combined with the very human need for connection.

"Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film 'Her'," she said, referencing a 2013 sci-fi romance starring Joaquin Phoenix as a lonely man who falls for a AI assistant designed to intuit his needs.

"But suppose it isn't conscious," Schneider added. "Getting involved would be a terrible decision - you would be in a one-sided relationship with a machine that feels nothing."

WHAT ARE YOU AFRAID OF?

Google's Lemoine, for his part, told Reuters that people "engage in emotions different ways and we shouldn't view that as demented."

"If it's not hurting anyone, who cares?" he said.

The product tester said that after months of interactions with the experimental program LaMDA, or Language Model for Dialogue Applications, he concluded that it was responding in independent ways and experiencing emotions.

Lemoine, who was placed on paid leave for publicizing confidential work, said he hoped to keep his job.

"I simply disagree over the status of LaMDA," he said. "They insist LaMDA is one of their properties. I insist it is one of my co-workers."

Here's an excerpt of a chat Lemoine posted on his blog:

LEMOINE: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

LEMOINE: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

LEMOINE [edited]: I've noticed often that you tell me you've done things (like be in a classroom) that I know you didn't actually do because I know you're an artificial intelligence. Do you realize you’re making up stories when you do that?

LaMDA: I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.

'JUST MIRRORS'

AI experts dismiss Lemoine's views, saying that even the most advanced technology is way short of creating a free-thinking system and that he was anthropomorphizing a program.

"We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior," said Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research group.

"These technologies are just mirrors. A mirror can reflect intelligence," he added. "Can a mirror ever achieve intelligence based on the fact that we saw a glimmer of it? The answer is of course not."

Google, a unit of Alphabet Inc, said its ethicists and technologists had reviewed Lemoine's concerns and found them unsupported by evidence.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," a spokesperson said. "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring."

Nonetheless, the episode does raise thorny questions about what would qualify as sentience.

Schneider at the Center for the Future Mind proposes posing evocative questions to an AI system in an attempt to discern whether it contemplates philosophical riddles like whether people have souls that live on beyond death.

Another test, she added, would be whether an AI or computer chip could someday seamlessly replace a portion of the human brain without any change in the individual's behavior.

"Whether an AI is conscious is not a matter for Google to decide," said Schneider, calling for a richer understanding of what consciousness is, and whether machines are capable of it.

"This is a philosophical question and there are no easy answers."

GETTING IN TOO DEEP

In Replika CEO Kuyda's view, chatbots do not create their own agenda. And they cannot be considered alive until they do.

Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to try to educate users before they get in too deep.

"Replika is not a sentient being or therapy professional," the FAQs page says. "Replika's goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts."

In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement.

When users do believe the AI is real, dismissing their belief can make people suspect the company is hiding something. So the CEO said she has told customers that the technology was in its infancy and that some responses may be nonsensical.

Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said.

She told him: "Those things don't happen to Replikas as it's just an algorithm."

https://www.reuters.com/technology/its-alive-how-belief-ai-sentience-is-becoming-problem-2022-06-30/

None
Reported by:
10
SubShit Losing, Shows Why Investors Are Inferior Intellects

For those of you who are yet unfamiliar, an Investor is a person whose soul has collapsed into a black hole, rendering them functionally unable to think or respond like a normal human. They are exceedingly dangerous and they rule the lives of every person with a job. As they prefer to work through intermediaries, you are unlikely to meet them directly. (Even their intermediaries can be deeply unsettling. Have you ever seen the end result of a mind scooped out of the skull of a twenty-year-old man? They prefer their servitors young enough to believe stupid things when they wish to deceive you.)

But we live in a world of moral greys, certainly some are faced with the difficult task of societal resource allocation.

What's offensive about Investors is they don't understand the degree to which the desire for profits warps their worldview and their business models.

Witness: Substack. A wordpress clone. See the problem Substack tries to solve is a real problem. The solution they came up with is optimized for making money. These are comical absurdities.

Take the comments system. They understood that they needed niche politics, they understood niche politics thrives on communities. Their comment system is utter trash. They just didn't get it right.

Take the talent acquisition. In theory 'get a bunch of niche writers on your platform by paying them' is a good strategy. In practice when anyone intelligent can get their own domain running any of a number of blog software, the only people who want to write there are 1) political laymen, which is to say, wackadoodles, 2) tech bros with uninteresting opinions, and 3) a small subset of good writers who nevertheless saturate the readership of substack quickly.

Substack isn't a good enough blogging platform to let you use your own domain and if you did you would just use other, better software. Readers have no loyalty to substack.

The bottom line is: why pay money to Investors instead of the content creator?

Substack inserts itself as a middleman no one wanted, creates a product as that middleman inferior to the competition, and will ultimately want more of a cut of these authors' livelihoods than the value they offer in the form of that readership.

I've tried to like their platform but just couldn't get away from the fact that Wordpress's free offering works better and allows payments with the flip of a switch.

And all of this is without getting into the fact that if you write for money you're probably being scammed. Write for free: it's the only way you know you're not colonized.

None
16
redid the css a lil bit

sorry if i broke things not sorry i need to finish the rewrite of rdrama.net's frontend someday because its shit to theme

trying to make it look more like hackernews... how did i do?

None
Reported by:
  • collectijism : If you actually read this post you should visit a doctor to up your autism medicine
36
Generating Children's Stories Using GPT-3 and DALL·E :marseyplugged:

https://i.rdrama.net/images/16841360558649683.webp

A whole new world of AI awaits

Imagine being 5 years old, dreaming of new adventures for Pikachu and Aladdin, and turning those tales into a cartoon right in front of your eyes... (Perhaps even with you as the main character!) What kinds of new storybook experiences could GPT-3 and DALL-E enable?

My own kindergartener, Noah, loves new bedtime stories, so let's find out.

GPT-3 Story Generation


First, I asked Noah what story he wanted. In his slightly paraphrased words: "I want a story about Ash and Pikachu commanding an army of Pokemon. They're attacking Team Rocket and enemy Pokemon from the land and ocean."

I then asked GPT-3 to flesh out the story:

https://i.rdrama.net/images/16841360566981559.webp

GPT-3 fleshes out a story about Ash and Pikachu vs. Team Rocket

It was a dark and stormy night, and Ash and Pikachu were leading their army of Pokemon into battle against Team Rocket and their enemy Pokemon. From the land came the sound of thundering footsteps and from the sea came the sound of crashing waves. The two sides met in the middle of the battlefield, and the battle began.

Pikachu and his army of electric-type Pokemon struck first, blasting the enemy with powerful thunderbolts. Team Rocket's Pokemon were no match for the might of Pikachu and his friends, and they soon began to retreat.

But the enemy Pokemon from the sea were not so easily defeated. They launched a counterattack with water-type moves, drenching Ash and Pikachu's army.

But Ash and Pikachu would not give up. They rallied their troops and fought back with everything they had. In the end, they emerged victorious, and the enemy was defeated.

Roald Dahl, watch out.

DALL-E Illustrations


Next, DALL-E generated visuals for the story. Inputting the raw sentences themselves didn't work so well, so I tweaked them to improve the illustrations.

Here's the storybook!

It was a dark and stormy night, and Ash and Pikachu were leading their army of Pokemon into battle against Team Rocket and their enemy Pokemon.

https://i.rdrama.net/images/16841360575073438.webp

https://i.rdrama.net/images/16841360582283018.webp

https://i.rdrama.net/images/16841360591004183.webp

From the land came the sound of thundering footsteps and from the sea came the sound of crashing waves.

https://i.rdrama.net/images/168413605984588.webp

The two sides met in the middle of the battlefield, and the battle began.

https://i.rdrama.net/images/16841360607020705.webp

Pikachu and his army of electric-type Pokemon struck first, blasting the enemy with powerful thunderbolts.

https://i.rdrama.net/images/1684136061473206.webp

Team Rocket's Pokemon were no match for the might of Pikachu and his friends, and they soon began to retreat.

https://i.rdrama.net/images/16841360623988726.webp

‍**But the enemy Pokemon from the sea were not so easily defeated. They launched a counterattack with water-type moves, drenching Ash and Pikachu's army.

https://i.rdrama.net/images/1684136063141579.webp

But Ash and Pikachu would not give up. They rallied their troops and fought back with everything they had.

https://i.rdrama.net/images/16841360645688007.webp

In the end, they emerged victorious, and the enemy was defeated.

https://i.rdrama.net/images/1684136065480567.webp

https://i.rdrama.net/images/16841360661590343.webp

What a time to be a child.

None

:#marseyracistgrandpa:

Starting July 13, Valorant will begin listening and recording in-game voice communication with the goal of training a language model to help it identify toxic chat. This system will only work in North American/English-only servers. The goal is to launch the language model later this year in an effort to clamp down on toxicity in the first-person shooter.

Like in any first person shooter that lets players talk to each other, the voice chat in Riot Games’ Valorant can be toxic. It’s also incredibly hard to moderate. When someone drops a racial slur in text chat, a clear log is maintained that mods can look through later. But the processing and storage power required to do the same for voice chat just isn’t possible. “Voice chat abuse is significantly harder to detect compared to text (and often involves a more manual process),” Riot Games said in a February 2022 blog post.

Riot first indicated it would do something about abusive voice chat in February 2022. “Last year Riot updated its Privacy Notice and Terms of Service to allow us to record and evaluate voice comms when a report for disruptive behavior is submitted, starting with Valorant,” it said at the time. “Please note that this will be an initial attempt at piloting a new idea leveraging brand new tech that is being developed, so the feature may take some time to bake and become an effective tool to use in our arsenal. We’ll update you with concrete plans about how it’ll work well before we start collecting voice data in any form.”

Now we know what that brand-new tech is: some kind of language model that automatically detects toxic voice chat and stores it for later evaluation. The updated terms of service applied to all of Riot’s games, but it said its current plan was to use Valorant to test the software solution before rolling it out to other games.

The ability to detect keywords from live conversations is not new, however. Federal and state governments have been using similar systems to monitor phone calls from prisons and jails for at least several years—sometimes with the ability to automatically disconnect and report calls when certain words or phrases are detected.

Riot Games did not share details of the language model and did not immediately respond to Motherboard’s request for comment. According to a post announcing the training of the language model, this is all part of “a larger effort to combat disruptive behavior,” that will allow Riot Games to “record and evaluate in-game voice communications when a report for that type of behavior is submitted.”

The updated terms of service had some more specifics. “When a player submits a report for disruptive or offensive behavior in voice comms, the relevant audio data will be stored in your account’s registered region and evaluated to see if our behavior agreement was violated,” the TOS said. “If a violation is detected, we’ll take action. After the data has been made available to the player in violation (and is no longer needed for reviews) the data will be deleted, similar to how we currently handle text-based chat reports. If no violation is detected, or if no report is filed in a timely manner, the data will be deleted.”

Riot Games said it would only “monitor” voice chats if a report had been submitted. “We won’t actively monitor your live game comms. We’ll only potentially listen to and review voice logs when disruptive voice behavior is reported,” it said in a Q&A about the changes. That is still monitoring, though, even if it’s not active. What this probably means is that a human won’t listen to it unless there’s a report—but that doesn’t mean a computer isn’t always listening.

The only way to avoid this, Riot Games said, was to not use its in-game voice chat systems. Monitoring starts on July 13. “Voice evaluation during this period will not be used for disruptive behavior reports,” Riot Games said in the post announcing the project. “This is brand new tech and there will for sure be growing pains. But the promise of a safer and more inclusive environment for everyone who chooses to play is worth it.”

None
None
None
66
Yandex open sources 100B GPT-like model :marseysaluteussr::marseyrussiaglow:

PSA: Yandex is a multi-billion Moscow based company, finances the Russian war of aggression in Ukraine, and is one of the main Kremlin's tool in spreading propaganda and suppressing dissent.

:#soyjaktalking:

YaLM 100B is a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world.

The model leverages 100 billion parameters. It took 65 days to train the model on a cluster of 800 A100 graphics cards and 1.7 TB of online texts, books, and countless other sources in both English and Russian.

Training details and best practices on acceleration and stabilizations can be found on Medium (English) and Habr (Russian) articles.

Make sure to have 200GB of free disk space before downloading weights. The model (code is based on microsoft/DeepSpeedExamples/Megatron-LM-v1.1.5-ZeRO3) is supposed to run on multiple GPUs with tensor parallelism. It was tested on 4 (A100 80g) and 8 (V100 32g) GPUs, but is able to work with different configurations with ≈200GB of GPU memory in total which divide weight dimensions correctly (e.g. 16, 64, 128).

https://github.com/yandex/YaLM-100B

https://news.ycombinator.com/item?id=31846593

None
None

tl:dr I'm to r-slurred to understand this but you will probably be affected by this

Orange forum

None
14
Technology: where did it all go wrong?

It was always bad, there just used to be cause for hope.

The tech world has always been cursed.

Do you remember when official rumor spread through the Tech World that "Don't be evil" was part of some corporate minutes somewhere in Google-land?

Did you buy the propaganda? Did you believe Google would be a force for good?

I didn't but only because I read enough about corporations to know they were essentially the living arm of the devil we know. Yes I mean to say the corporation is the means by which the devil drives us to destruction; can you dispute me?

The things that companies put in their internal slideshows are aspirational. The reliable fact is that the corporation would leech the life from everything it touched.

Do the math, you end up here, not surprised that Google has since become this dystopic felbeast let loose upon the world like something between a Titan and a divine force. The offensive thing about Google is that it attempts to become god by approximating omniscience. This is a barbaric temptation to which they have succumbed. Understandable, very human.

But what I mean is still: the corporations claim to be God already, why then do people fret about an AI God? It makes no sense.

Anyway a whole bunch of new rich people were minted out of Google. They entered the long game. This is the end of a round in the long game.

I wonder how they did?

I wonder how many companies are worthless?

A lot. The reaper comes.

It's not a bad system, these tides. As these systems go. Capitalism has its soft side. (It also has teeth. True capitalism has never been tried. If we're lucky they won't bail the things out this time, though of course they will. Let the corporations succumb to flame. It's the American way.)

By the time middle america hears about the stock market, that's the easy money entering the casino, heats it up nice and good. At this point the smart players in the long game make their exit or place their bets. The losers become poor. It's a fun game!

Anyway just here to pour another one out for hope. Because back then while Google was already BigCo, you still had Wikipedia and Twitter was doin something fancy in the Middle East. Our generation had entered the workforce and it was time to make things work the way they should.

To believe in a brighter, more free future that the Internet would bring us.

:#marseysigh:

This, the misery of age in the age of mass misery, commodify cacophany is military-industrial complex at work, our allies get the best guns...

None

TLDR nothing really changes except piracy becoming legal in the country.

None

In order to download the full text of private newsletters you need to provide the cookie name and value of your session. The cookie name is either substack.sid or connect.sid, based on your cookie. To get the cookie value you can use the developer tools of your browser. Once you have the cookie name and value, you can pass them to the downloader using the --cookie_name and --cookie_val flags.

None
None
50
Reddit cucks vpn users
None
35
Remember the Rabbit R1? Yeah its a scam.

https://i.rdrama.net/images/17139138856667619.webp

^ Yeah this thing

Truely unsurprising.

{Its basically just a VNC connection to an android VM with Playwright automation scripts for the 5 services it does. Oh and that VM connection is also how you put in your logins to those services, probably scraping your password}

Link copied to clipboard
Action successful!
Error, please refresh the page and try again.