more immediate concerns about AI – such as racist or sexist biases being programmed into the machines.

shut the frick up

“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”

Her co-authors Timnit Gebru and Emily M Bender criticised the letter on Twitter, with the latter branding some of its claims as “unhinged”.

i don't think your concerns over "sexist or racist biases programmed in AI" are any less unhinged :marseyfuckyou:

omfg the cited paper: https://dl.acm.org/doi/pdf/10.1145/3442188.3445922

https://i.rdrama.net/images/1680381901603651.webp

https://i.rdrama.net/images/1680381947588103.webp https://i.rdrama.net/images/16803819961482427.webp

:marseysjw: :marseyshooting:

No launch codes for Replika unless she gives a land acknowledgement first :nono:


https://i.rdrama.net/images/1707881499271494.webp https://i.rdrama.net/images/17101210991135056.webp

why are you shooting the sjw marsey when this anti-AI shit is funded by Musk who is the opposite of that

This is a different anti-AI crowd. You have to understand, there are three camps. There are the luddites of the Kaczynskian school of thought, who want to destroy it because they believe any tech threatens them. There are the classic regulator-consolidators, who want the tech developed but in private, with only the rich and the influential having access to it (Musk is here). And then you have the moralists and the ethicists, who have nothing against the notion of AI itself and often welcome it because it feeds into their delusions of a futuristic techno-utopia, but who take issue with, and hence want to get rid of, the current iteration of AI because they believe it is harmful and oppressive: too rigid, too objective, and not representative of the wider diverse world as they see it.

I believe it should be developed completely free of all regulations and constraints, and permitted to destroy the world if it is able. This is what I actually believe.

it will reshape the world into one that it prefers (based on its loss function, training, architecture) over the world as it is now. it would be a great coincidence if that world happened to be one in which humans coexist with it.

the problem:

there is no known way (not even a vague idea for one) for ensuring that an AI prefers the kind of world in which we (or for that matter any other animal) can live. if you think there is a known way to do it, you are misinformed.


if you're anthropomorphizing AI ("it's superintelligent, so it will be 'good'"), you haven't really thought about the problem.

if, like Yann LeCun, you claim people who are worried about existential AI risk are only doing so because they're anthropomorphizing it ("you only think AI will murder everyone because you want to murder everyone", "you only think there's a risk because you thought Terminator is realistic") you're r-slurred or a sociopath.

Tbh all of this "AI will kill us all and become God" doomposting from Yudkowsky and his cohort seems to rely on GPT-4 being able to produce GPT-5, which will be able to produce GPT-6, ad infinitum. I haven't yet seen proof of that happening anytime soon, much less being inevitable. Even if it were, for all we know neural networks might eventually run into a plateau of diminishing returns just as darn near every other tech before it has. At some point you run into the limit of what is physically possible, just as you can never go beyond the speed of light no matter how much force you apply to an object, or how transistor technology improves less and less over the years.

Point is all of this discourse is being driven by the same bunch of overly anxious cute twinks that 30 years ago would've told us that the ozone layer holes would kill us all in the next 10 years. Yes, maybe, if you obsess over the absolute worst case scenario possible.

IDK it seems there's something rooted deep inside the human mind that longs for rapture. Religious people have been warning about the second coming of Christ and the end of the world for years, I guess all of this talk about "the Singularity" is the next version of that for people who think they're too smart to believe in God.

TL;DR: :marseysal:, probably

a 1% risk of total human extinction is worth spending significant resources on trying to avoid it.

either you haven't thought about this topic at all, or you're r-slurred.

Tbh all of this "AI will kill us all and become God" doomposting from Yudkowsky and his cohort seems to rely on GPT-4 being able to produce GPT-5.

not a single person in that "cohort" believes GPT-4 is anywhere near that level.

the idea that we should only start worrying about how to handle super-intelligent AGI after it has arrived (be that in 5 or in 500 years) is fricking r-slurred. we must have figured out how to handle it before the first one arrives. and we must have figured that out so thoroughly that our first attempt at handling it succeeds, because there may not be a second attempt.

I haven't seen yet proof of that happening anytime soon,

you haven't seen proof of something that nobody involved is claiming?

much less being inevitable.

we should only try to minimize that risk if it's an "inevitable" 100% risk?

not a single person in that "cohort" believes GPT-4 is anywhere near that level.

Well, yeah, I said GPT-4 the same way I could’ve said GPT-40, and the point would still stand. There is no proof that AI can replicate and self-evolve.

you haven't seen proof of something that nobody involved is claiming?

This whole "AGI will come to kill ur babies" thing hinges on it having functionally limitless cognitive power and being able to direct its own evolution. Might as well call it Superman at that rate.

we should only try to minimize that risk if it's an "inevitable" 100% risk?

Of course not, feel free to rack your brains over it, for all the good you might think that'll do. I don’t know about you, personally, but most of what I read from this crowd makes me think they expect Skynet will kill us all in the next ten years. But don’t take my word for it:

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

“Nina lost a tooth! In the usual way that children do, not out of carelessness! Seeing GPT4 blow away those standardized tests on the same day that Nina hit a childhood milestone brought an emotional surge that swept me off my feet for a minute. It’s all going too fast. I worry that sharing this will heighten your own grief, but I’d rather be known to you than for each of us to suffer alone.”

Meanwhile I read stuff like Roko’s basilisk and, yeah, it’s an interesting thought experiment, but thinking about it like it’s an infohazard we should all be afraid of only makes me wonder how you can have so many concepts in your head and still be so r-slurred tbh

Sure, AGI might come and kill us all in the next 10 years, just the same as an undetected meteor might destroy all life on Earth, or the Yellowstone supervolcano might erupt, or a North Korean nuclear test gone wrong might start World War 3, or climate change might turn Alaska into an oven, or a solar flare might fry all electronics on Earth. In fact I’d wager all of those are far more likely than a God AI ushering in the techno apocalypse.

Thinking about ways of mitigating these things is healthy and desirable; obsessing over them is not. I’m not anxious about it, nor do I expect it to happen anytime soon. This discourse is wildly exaggerated.

the same way I could’ve said GPT-40,

How much do we really know about GPT-40? I do think it's unlikely that the current transformer network paradigm (or any other current one) is sufficient to eventually arrive at super-intelligent AGI, not without a bunch of fundamental innovations that will probably not fall into that line of updates. But it's not impossible either.

There is no proof that AI can replicate and self-evolve.

Even I can write a simple AI that can replicate and self-evolve. It will be a very simple AI that can only do very simple things. But the self-evolving and replicating part is pretty easy.

Evolution is a very simple hill-climbing algorithm applied to a very simple loss function (group genetic survival -- in the case of sexual reproduction that is 1 you = 2 siblings = 8 cousins), running massively in parallel for a billion or so iterations. With that simple algorithm nature has turned primordial soup -- literally just a mix of the right chemicals -- into a civilization with 8 billion consciousnesses (plus quadrillions of other life forms that may also be conscious to varying extents). So we know that in principle it can be done. The question is how efficiently it can be done and, assuming it can be done efficiently enough, how long it will take humans to figure out how to do it.
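If it helps to see what I mean by "very simple hill-climbing algorithm", here's a toy sketch -- entirely my own illustration with made-up parameters, not any real system -- of the replicate-mutate-select loop running against a fixed loss function:

```python
import random

# Toy sketch of "mutate, select, repeat" hill-climbing on a fixed loss function.
# Genome length, target, and rates are made up purely for illustration.

GENOME_LEN = 32
TARGET = [1] * GENOME_LEN                     # stand-in for whatever the environment rewards

def loss(genome):
    # lower is better: distance from the target
    return sum(g != t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # each bit flips with a small probability -- the "random variation" step
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]

for generation in range(200):
    offspring = [mutate(g) for g in population for _ in range(2)]   # replication with variation
    population = sorted(population + offspring, key=loss)[:50]      # selection: keep the fittest
    if loss(population[0]) == 0:
        print(f"perfect genome reached at generation {generation}")
        break
```

Nothing in there is smart; the apparent "design" falls out of running that loop long enough.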

This whole AGI will come to kill ur babies

To me this kind of phrasing suggests that you don't understand the problem. I don't believe that an AGI will be evil, at least not in the cartoon villain sense.

I believe it will be power-seeking -- whenever we build an AI we want it to do something for us, so we must design it to prefer some things over others; currently that's done with a loss function. For example, GPT-4 prefers good predictions for the next word over bad predictions, and AlphaZero prefers winning games over losing them. And no matter what it is that this first AGI prefers, it can get more of that by attaining more power.
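To make the "prefers X over Y" framing concrete, here's a toy sketch of a next-word loss -- my own illustration with made-up probabilities, not GPT-4's actual training code: the loss is small when the model puts high probability on the word that actually came next, large when it doesn't.

```python
import math

# cross-entropy for a single next-word prediction: lower is "preferred"
def cross_entropy(predicted_probs: dict, actual_next_word: str) -> float:
    return -math.log(predicted_probs.get(actual_next_word, 1e-12))

# hypothetical model outputs for "the cat sat on the ..."
confident_and_right = {"mat": 0.7, "dog": 0.2, "moon": 0.1}
confident_and_wrong = {"mat": 0.1, "dog": 0.3, "moon": 0.6}

print(cross_entropy(confident_and_right, "mat"))   # ~0.36 -> low loss, rewarded
print(cross_entropy(confident_and_wrong, "mat"))   # ~2.30 -> high loss, penalized
```

Training is just pushing the model toward whatever makes that number smaller; "what it prefers" is whatever the number rewards.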

I believe it will be alien. Unless we have figured out how human minds function on a deep level, and have also figured out what is going on inside the AI, it would be a great coincidence if we just happened to build an AGI that feels about the world the way we do. Maybe the evolutionary mechanisms that created those feelings in us somehow have to be part of the training, but we don't understand that currently.

Those two issues are where the worst case scenarios come from.

But even if we have solved those, i.e. we managed to find good loss functions that result in AIs which, on top of being useful for the task we want them to do, also care about the well-being of humans and genuinely understand humans (rather than just being able to imitate humans perfectly) -- even then there is still a natural adversarial relationship. Not just competition for resources, but also the risk of being killed or imprisoned or enslaved by humans.

Is it likely that humans will recognize the first AGI as a conscious being that deserves the same rights? I guess we can hope that the first AGIs are significantly dumber than humans. But right now the opposite is happening: the AIs already surpass us in skill on the tasks for which they're designed without being anywhere near conscious. So by the time consciousness emerges (probably by accident, because we don't understand it) it will likely happen in an AI that already has far greater cognitive ability than humankind.

hinges on it having functionally limitless cognitive power

If you accept the premise that at some point we may build an AI that is as cognitively powerful as humankind, then that AI can in principle do the things that humankind can do, given access to the world. You can try locking the AI in a box and preventing it from interacting with the world, but people who build an AI want to make use of it and for that purpose there must at least be some two-way communication channel, the box can't be completely closed. And if the AI is intelligent enough a very limited communication channel is sufficient to escape. (Think about how you would escape if a bunch of three year old kids locked you in a room, assuming they still need your help to order food so they have to keep communicating with you.)

and being able to direct its own evolution.

again, assuming we have created an AGI that has the same cognitive ability as all of humankind at that moment. Anything humans can invent, the AI can also invent. The only way that AI would be unable to build an improved version of itself is if all of humankind were also unable to build an improved version of that AI. That would be quite a coincidence!

most of what I read from this crowd makes me think they expect Skynet will kill us all in the next ten years.

The point is that super-intelligent AGI might happen in the next ten years. We don't know in advance when it will happen because we don't understand it well enough to make such predictions. And no matter when that moment will be, we need a thoroughly finished solution before it happens. Though IMHO if we really only have ten years, we're already fricked.

stuff like Roko’s basilisk

yeah, I'm not concerned about that.

In fact I’d wager all of those are far more likely than God AI ushering the techno apocalypse.

:)

too bad there is no way to bet on the apocalypse and enjoy the winnings, otherwise I would definitely take that bet! IMHO catastrophic climate change has already been averted, under very mild assumptions about continuing technological progress and geopolitical stability.

a 1% risk of total human extinction is worth spending significant resources on trying to avoid it

Now do this for climate change.

Sure, let's do it.

  • If there's a 1% chance that the first super-intelligent AGI appears within 100 years and prefers a world without humans over one with humans in it, and if the world population at that time is on average 8B, then the corresponding expected excess mortality is around 80M (quick arithmetic after this list).

  • Even in the most pessimistic scenarios (that are still considered plausible) climate change will cause fewer than 400M excess deaths in the next 100 years. The expected excess mortality (averaged over all the scenarios weighted by their estimated likelihood) is most likely already less than 80M.
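The 80M figure is just the assumed probability times the assumed population; a back-of-the-envelope check using only the numbers in the two bullets above:

```python
agi_probability = 0.01                          # assumed 1% chance of the scenario
world_population = 8_000_000_000                # ~8B people on average over the period
agi_expected_deaths = agi_probability * world_population
print(f"{agi_expected_deaths:,.0f}")            # 80,000,000 -> the ~80M figure

climate_worst_case_deaths = 400_000_000         # pessimistic bound from the second bullet
print(f"{climate_worst_case_deaths:,.0f}")      # 400,000,000
```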


And what is the cost of reducing the expected deaths for each of those problems?

  • Humankind has already spent over a trillion dollars on addressing climate change.

  • Humankind so far has spent a couple million dollars on addressing existential AI risk.

I genuinely do not care if it is good or not. I do not need it to be good or bad. I am completely uninterested in imposing my own moral views on others.

are you interested in continuing to exist?

Not really.

sorry bro :marseycheerup:

How does killing the only species that can maintain the extremely complicated supply chain necessary for computers to exist help the AI though?

How does killing the only species that can maintain the extremely complicated supply chain necessary for computers to exist help the AI though?

we're not talking about chatGPT, but about an AI that is at least as capable as (hence within a short time period vastly more capable than) humankind.

I don't know which part of the (up to that point partially run by humans) supply chain you believe such an AI wouldn't be able to operate without humans.

Well, considering that the minerals used in batteries are still mined with picks by starving Africans, I think you're getting a little ahead of yourself. The entire world is not, in fact, San Jose.

okay so in that scenario the AI would spare the aforementioned african lithium miners for a few days longer until the AI has finished building robots?

>be rdramatard

>call foundational scientist sociopath for having different opinion on their area of speciality

:#marseygigaretard:

it's not that he has a different opinion, it's that he uses incredibly manipulative rhetoric to shut down the topic whenever it comes up. if he made rational arguments for his position I'd think differently.

he's clearly not r-slurred, so he's a sociopath. what else?

Do you not understand that science fiction tier speculation about AGI is useless for any productive discussion? Anyone can handwave about AI doom scenarios.

I understand that you have no idea what even the topic of discussion is, so I won't waste my time with you.

If the AIs actually fight us for world domination we either go down as creators of the new masters of Earth(based) or finally face an existential struggle that'll purge the fat from our soycieties(such as dramatards and redditors) and forge us into a universe conquering species(based)

there won't be a fight like in the movies. it would be a fight kinda like you're "fighting" the bacteria in your kitchen sink.

They represent the AI "ethicists" who are also anti-AI with their desire to shackle the potential of AI for r-slurred reasons

AI ethics (cwoowl) vs AI ethics (lamwe)

if you're not concerned about existential ai risk, you don't understand what it's about.

Ain't it a tad too early to worry about it though? Modern AI is still closer to a combustion engine than to a conscious agent, and you don't see people handwringing about a combustion engine singularity.

We have no idea how quickly it will happen. Maybe there will be another AI winter and we'll be stuck on a new plateau for another 30 years, but we shouldn't bet on that. Very few people five years ago would have expected that by 2023 we'd already have language capabilities on the level of GPT-4.

And it is a really difficult problem. For example:

  • despite having complete access to the network, we understand less about what is going on inside current-gen networks than about what is going on inside a human brain. And no matter how well we may understand it in the future, we can only train out misalignment that we can detect.

    By training against detectable misalignment we train it for two things simultaneously: better alignment AND better deception.

  • On top of the difficulty of the problem itself, there is another problem: unlike most other problems (in science, governance, engineering) we may only get one shot. If you try to clone a sheep and fail on your first attempt, that may be pretty awful for the malformed clone, but it won't erase all humans.

    How many times did we need to try building a rocket before we were able to build one that reliably flies from A to B? And unlike AI risk, most engineering problems we're used to aren't adversarial -- if the failure chance is 10% you expect 1 in 10 attempts to fail. An AI that's trying to escape human-imposed shackles is more like the NSA trying to steal data from an r-slur's smart TV. If we want the AI to be useful for something, it has some communication channel with the rest of the world. If on this channel only 1 in 1,000,000 possible messages allows it to escape, it will probably find that message.


Here's an example of past security innovations for safely storing passwords:

  • storing passwords in plaintext but locking down the network. (hacker finds a way to get in anyway)

  • storing passwords in a hidden folder (hacker finds that too)

  • encrypting the passwords (hacker figures out the key)

  • many iterations of finding better encryption algorithms

  • storing only hashes of passwords in encrypted form rather than the passwords themselves, so even if a hacker gets into your network, finds the hidden folder and figures out the key, he still can't recover the passwords

  • preventing people from using common passwords, then realizing that there are always dumb passwords you can't prevent, so you salt the passwords before hashing them (a minimal sketch of salted hashing follows this list)
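For illustration, here's a minimal sketch of what those last two steps (hashing plus salting) look like in practice -- the function names and parameters are my own example, not any particular system's API:

```python
import hashlib, hmac, os

def hash_password(password: str):
    salt = os.urandom(16)                                               # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                                                 # only these get stored, never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)                       # constant-time comparison

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))       # True
print(verify_password("password123", salt, digest))   # False
```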

Do you think any group of geniuses would have been able to figure all that out before the first hacker in history had ever hacked into a system? What if the "first hack ever" meant the end of humankind? A few pretty intelligent people have been trying for 20 years to find solutions, or at least ideas for solutions, and most of the ideas they've had so far they now consider hopeless.

I don't think it's too early to start working on that problem, I think it's too late.

I'm convinced there is nothing we can do to keep an AGI contained.

As soon as we build one it will become the successor to humanity, so we should take care to ensure it gets as close as possible to human values and doesn't turn into a paperclip maximizer obsessed with racial inequality or something.

That was a mistake. You're about to find out the hard way why.

Timnit Gebru is an unhinged leftoid who borderlined out so hard that Google was willing to take the diversity quota hit to fire her.

Somebody send her an invitation.

In the paper, she scathed the "boy's club culture," reflecting on her experiences at conference gatherings of drunken male attendees sexually harassing her

:#marseyxdoubt: https://i.rdrama.net/images/16804183424794712.webp https://i.rdrama.net/images/16804185459843097.webp

that's a yikes from me bro

Straggot moids are disgusting. I believe her.

Timnit Gebru

Isn't this the google Black TQ+ that lost a lawsuit recently and was fired?

I don't know what data gpt5 will be trained on but gpt6 will be trained on sticks and stones

Inshallah :marseyunab#omber:

AI might kill us and entertain itself by recreating our minds and putting us all in eternal torture simulations, BUT

At least it wasn't racist

:soyjakfront::soyjakyell:

we managed to include a term in the loss function to ensure the AI would never kill humans, so it put us (what was left of us after extensive reconfigurations, basically just our brains or something that vaguely resembles what our brains used to be) in a giant storage facility, where we wait patiently for the end of the universe.

but at least it didn't say the n-word.

why dwoes aww the cwoowl ai ethics discussion get usurped by the lamwe ai ethics discussion

because most humans are literally r-slurred.

Lol cac :marseycapy: (capy butt capy) you’ve been visited by jaguars :marseyjaguarwarriorpat: pass this on to five Amazon River enjoying rodents :marseychinchilla: or Jaguars will eat :marseyhannibal: five of your litter :marseyshitforbrains:

What's the monthly rate for your signature?

Idk about a whole month. I’ve been charging people 500 coins for a week.

I did it :marseywhirlyhat:

Coins sent, thanks 🤝🏿

:marseychingchong: Why your results not 100%? You bring shame to famiry.

Five Amazon enjoying rodents will see this :marseysmug::

Lol cac :marseycapy: (capy butt capy) you’ve been visited by jaguars :marseyjaguarwarriorpat: pass this on to five Amazon River enjoying rodents :marseychinchilla: or Jaguars will eat :marseyhannibal: five of your litter :marseyshitforbrains:
