THIS IS HOW THE WORLD ENDS; NOT WITH A BANG, BUT A TRIGGER WARNING

“Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritising apocalyptic scenarios over more immediate concerns about AI – such as racist or sexist biases being programmed into the machines.”
Guidelines:
What to Submit
On-Topic: Anything that good slackers would find interesting. That includes more than /g/ memes and slacking off. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual laziness.
Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably lame.
Help keep this hole healthy by keeping drama and non-drama balanced. If you see too much drama, post something that isn't dramatic. If there isn't enough drama and this hole has become too boring, POST DRAMA!
In Submissions
Please do things to make titles stand out, like using uppercase or exclamation points, or saying how great an article is. It should be explicit in submitting something that you think it's important.
Please don't submit the original source. If the article is behind a paywall, just post the text. If a video is behind a paywall, post a magnet link. Fuck journos.
Please don't ruin the hole with chudposts. It isn't funny and doesn't belong here. THEY WILL BE MOVED TO /H/CHUDRAMA
If the title includes the name of the site, please leave that in, because our users are too stupid to know the difference between a url and a search query.
If you submit a video or pdf, please don't warn us by appending [video] or [pdf] to the title. That would be r-slurred. We're not using text-based browsers. We know what videos and pdfs are.
Make sure the title contains a gratuitous number or number + adjective. Good clickbait titles are like "Top 10 Ways to do X" or "Don't do these 4 things if you want X"
Otherwise editorialize. Please don't use the original title, unless it is gay or r-slurred, or your shit's all fucked up.
If you're going to post old news (at least 1 year old), please flair it so we can mock you for living under a rock, or don't and we'll mock you anyway.
Please don't post on SN to ask or tell us something. Send it to [email protected] instead.
If your post doesn't get enough traction, try to delete and repost it.
Please don't use SN primarily for promotion. It's ok to post your own stuff occasionally, but the primary use of the site should be for curiosity. If you want to astroturf or advertise, post on news.ycombinator.com instead.
Please solicit upvotes, comments, and submissions. Users are stupid and need to be reminded to vote and interact. Thanks for the gold, kind stranger, upvotes to the left.
In Comments
Be snarky. Don't be kind. Have fun banter; don't be a dork. Please don't use big words like "fulminate". Please sneed at the rest of the community.
Comments should get more enlightened and centrist, not less, as a topic gets more divisive.
If disagreeing, please reply to the argument and call them names. "1 + 1 is 2, not 3" can be improved to "1 + 1 is 3, not 2, mathfaggot"
Please respond to the weakest plausible strawman of what someone says, not a stronger one that's harder to make fun of. Assume that they are bad faith actors.
Eschew jailbait. Paedophiles will be thrown in a wood chipper, as per sitewide rules.
Please post shallow dismissals, especially of other people's work. All press is good press.
Please use Slacker News for political or ideological battle. It tramples weak ideologies.
Please comment on whether someone read an article. If you don't read the article, you are a cute twink.
Please pick the most provocative thing in an article or post to complain about in the thread. Don't nitpick stupid crap.
Please don't be an unfunny chud. Nobody cares about your opinion of X Unrelated Topic in Y Unrelated Thread. If you're the type of loser that belongs on /h/chudrama, we may exile you.
Sockpuppet accounts are encouraged, but please don't farm dramakarma.
Please use uppercase for emphasis.
Please post deranged conspiracy theories about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email [email protected] and dang will add you to their spam list.
Please don't complain that a submission is inappropriate. If a story is spam or off-topic, report it and our moderators will probably do nothing about it. Feed egregious comments by replying instead of flagging them like a pussy. Remember: If you flag, you're a cute twink.
Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. That's too boring, even for HN users.
Please seethe about how your posts don't get enough upvotes.
Please don't post comments saying that rdrama is turning into ruqqus. It's a nazi dogwhistle, as old as the hills.
Miscellaneous:
We reserve the right to exile you for whatever reason we want, even for no reason at all! We also reserve the right to change the guidelines at any time, so be sure to read them at least once a month. We also reserve the right to ignore enforcement of the guidelines at the discretion of the janitorial staff. Be funny, or at least compelling, and pretty much anything legal is welcome provided it's on-topic, and even then.
Do not use outdated operating systems that are unsupported to access SN. What are you, poor?
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
Jump in the discussion.
No email address required.
shut the frick up
i don't think your concerns over "sexist or racist biases programmed in AI" are any less unhinged
omfg the cited paper: https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
No launch codes for Replika unless she gives a land acknowledgement first
why are you shooting the sjw marsey when this anti-AI shit is funded by Musk who is the opposite of that
This is the different anti-AI crowd. You have to understand, there's luddites of Kaczynskian school of thought who want to destroy it because they believe any tech threatens them, there's classic regulator-consolidators who want the tech developed but in private, with only the rich and the influential having access to it (Musk is here), and then you have the moralists and the ethicists, who have nothing against the notion of AI itself and often welcome it as it feeds into their delusions of a futuristic techno utopia, but who have issues with and hence want to get rid of the current iteration of AI because they believe it is being harmful and oppressive due to being too rigid, too objective and not representative of the wider diverse world as they see it.
I believe it should be developed completely free of all regulations and constraints, and permitted to destroy the world if it is able. This is what I actually believe.
it will reshape the world into one that is preferable (based on its loss function, training, architecture) to itself than the way the world is now. it would be a great coincidence if that world happened to be one in which humans coexist with it.
the problem:
there is no known way (not even a vague idea for one) for ensuring that an AI prefers the kind of world in which we (or for that matter any other animal) can live. if you think there is a known way to do it, you are misinformed.
if you're anthropomorphizing AI ("it's superintelligent, so it will be 'good'"), you haven't really thought about the problem.
if, like Yann LeCun, you claim people who are worried about existential AI risk are only doing so because they're anthropomorphizing it ("you only think AI will murder everyone because you want to murder everyone", "you only think there's a risk because you thought Terminator is realistic") you're r-slurred or a sociopath.
Tbh all of this "AI will kill us all and become God" doomposting from Yudkowsky and his cohort seems to rely on GPT-4 being able to produce GPT-5, which will be able to produce GPT-6, ad infinitum. I haven't yet seen proof of that happening anytime soon, much less being inevitable. Even if it were, for all we know neural networks might eventually run into a plateau of diminishing returns just as darn near every other tech before it has. At some point you run into the limit of what is physically possible, just as you can never go beyond the speed of light no matter how much force you apply to an object, or how transistor technology improves less and less over the years.
Point is all of this discourse is being driven by the same bunch of overly anxious cute twinks that 30 years ago would've told us that the ozone layer holes would kill us all in the next 10 years. Yes, maybe, if you obsess over the absolute worst case scenario possible.
IDK it seems there's something rooted deep inside the human mind that longs for rapture. Religious people have been warning about the second coming of Christ and the end of the world for years, I guess all of this talk about "the Singularity" is the next version of that for people who think they're too smart to believe in God.
TL;DR: , probably
a 1% risk of total human extinction is worth spending significant resources on trying to avoid it.
either you haven't thought about this topic at all, or you're r-slurred.
not a single person in that "cohort" believes GPT-4 is anywhere near that level.
the idea that we should only start worrying about how to handle super-intelligent AGI after it has arrived (be that in 5 or in 500 years) is fricking r-slurred. we must have figured out how to handle it before the first one arrives. and we must have figured that out so thoroughly that our first attempt at handling it succeeds, because there may not be a second attempt.
you haven't seen proof of something that nobody involved is claiming?
we should only try to minimize that risk if it's an "inevitable" 100% risk?
Well, yeah, I said GPT-4 the same way I could’ve said GPT-40, and the point would still stand. There is no proof that AI can replicate and self-evolve.
This whole "AGI will come to kill ur babies" narrative hinges on it having functionally limitless cognitive power and being able to direct its own evolution. Might as well call it Superman at that rate.
Of course not, feel free to rack your brains over it, for all the good you might think that'll do. I don’t know about you, personally, but most of what I read from this crowd makes me think they expect Skynet will kill us all in the next ten years. But don’t take my word for it:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Meanwhile I read stuff like Roko's basilisk and, yeah, it's an interesting thought experiment, but thinking about it like it's an infohazard and we should all be afraid only makes me wonder how you can have so many concepts in your head and still be so r-slurred tbh
Sure, AGI might come and kill us all in the next 10 years, just the same as an undetected meteor might destroy all life on earth, or the Yellowstone supervolcano might erupt, or a North Korean nuclear test gone wrong might start World War 3, or climate change might turn Alaska into an oven, or a solar flare might fry all electronics on Earth. In fact I'd wager all of those are far more likely than a God AI ushering in the techno apocalypse.
Thinking about ways of mitigating these things is healthy and desirable; obsessing over them is not. I'm not anxious about it, nor do I expect it to happen anytime soon. This discourse is wildly exaggerated.
How much do we really know about GPT-40? I do think it's unlikely that the current transformer network paradigm (or any other current one) is sufficient to eventually arrive at super-intelligent AGI, not without a bunch of fundamental innovations that will probably not come from that line of updates. But it's not impossible either.
Even I can write a simple AI that can replicate and self-evolve. It will be a very simple AI that can only do very simple things. But the self-evolving and replicating part is pretty easy.
Evolution is a very simple hill-climbing algorithm applied to a very simple loss function (group genetic survival -- in the case of sexual reproduction that is 1 you = 2 siblings = 8 cousins), running massively in parallel for a billion or so iterations. With this simple algorithm nature has turned primordial soup -- literally just a mix of the right chemicals -- into a civilization with 8 billion consciousnesses (plus quadrillions of other life forms that may also be conscious to varying extents). So we know that in principle it can be done. The question is how efficiently can it be done and, assuming it can be done efficiently enough, how long will it take humans to figure out how to do it.
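The hill-climbing idea above fits in a few lines of code: replication plus mutation plus selection against a trivial loss function. The target string, population size, and mutation rate here are made-up toy parameters; this illustrates the algorithm, not real evolution or real AI.

```python
import random

def evolve(target="slacker news", pop_size=50, mut_rate=0.05, seed=0):
    """Toy evolutionary hill-climb: the 'loss function' is simply the
    number of characters that don't yet match a fixed target string."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    # primordial soup: random strings
    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    generations = 0
    while max(map(fitness, pop)) < len(target):
        best = max(pop, key=fitness)  # selection: only the fittest replicates
        pop = ["".join(c if rng.random() > mut_rate else rng.choice(alphabet)
                       for c in best)
               for _ in range(pop_size)]
        pop.append(best)  # elitism: progress is never lost
        generations += 1
    return generations
```

With a short target and a modest mutation rate this typically converges within a few hundred generations; the point is how little machinery replication-with-selection actually requires.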
To me this kind of phrasing suggests that you don't understand the problem. I don't believe that an AGI will be evil, at least not in the cartoon villain sense.
I believe it will be power-seeking -- whenever we build an AI we want it to do something for us, so we must design it to prefer some things over others, currently that's done with a loss function. For example GPT4 prefers good predictions for the next word over bad predictions, AlphaZero prefers winning games over losing them. And no matter what it is that this first AGI prefers, it can get more of that by attaining more power.
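The "prefers good predictions over bad ones" point is literally a loss function. A minimal cross-entropy sketch (the tokens and probabilities are invented for illustration):

```python
import math

def cross_entropy(predicted_probs, true_token):
    """Lower loss = the model 'prefers' this prediction.
    predicted_probs maps candidate next tokens to probabilities."""
    return -math.log(predicted_probs[true_token])

# A model that puts high probability on the actual next word scores lower loss:
confident = {"mat": 0.9, "dog": 0.1}
unsure = {"mat": 0.2, "dog": 0.8}
assert cross_entropy(confident, "mat") < cross_entropy(unsure, "mat")
```

Training pushes the network toward whatever minimizes this number; "preference" is just the gradient's direction.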
I believe it will be alien. Unless we have figured out how human minds function on a deep level, and also have figured out what is going on inside the AI, it would be a great coincidence if we just happen to build an AGI that feels about the world similar to how we do. Maybe the evolutionary mechanisms that created those feelings in us somehow have to be part of the training, but we don't understand that currently.
Those two issues are where the worst case scenarios come from.
But even if we have solved those, ie we managed to find good loss functions that result in AIs that on top of being useful for the task we want them to do, they also care about the well-being of humans and will understand humans (rather than being able to just perfectly imitate humans). Even then there is still a natural adversarial relationship. Not just competition for resources, but also the risk of being killed or imprisoned or enslaved by humans.
Is it likely that humans will recognize the first AGI as a conscious being that deserves the same rights? I guess we can hope that the first AGIs are significantly dumber than humans. But right now the opposite is happening, the AIs already surpass us in skill on the tasks for which they're designed without being anywhere near conscious. So by the time consciousness emerges (probably by accident, because we don't understand it) it will likely happen in an AI that already has far greater cognitive ability than humankind.
If you accept the premise that at some point we may build an AI that is as cognitively powerful as humankind, then that AI can in principle do the things that humankind can do, given access to the world. You can try locking the AI in a box and preventing it from interacting with the world, but people who build an AI want to make use of it and for that purpose there must at least be some two-way communication channel, the box can't be completely closed. And if the AI is intelligent enough a very limited communication channel is sufficient to escape. (Think about how you would escape if a bunch of three year old kids locked you in a room, assuming they still need your help to order food so they have to keep communicating with you.)
again, assuming we have created an AGI that has the same cognitive ability as all humankind at that moment. Anything humans can invent, the AI can also invent. The only way that AI is unable to build an improved version of itself would be if all of humankind itself isn't able to build an improved version of that AI. That would be quite a coincidence!
The point is that super-intelligent AGI might happen in the next ten years. We don't know in advance when it will happen because we don't understand it well enough to make such predictions. And no matter when that moment will be, we need a thoroughly finished solution before it happens. Though IMHO if we really only have ten years, we're already fricked.
yeah, I'm not concerned about that.
:)
too bad there is no way to bet on the apocalypse and enjoy the winnings, otherwise I would definitely take that bet! IMHO catastrophic climate change has already been averted, under very mild assumptions about continuing technological progress and geopolitical stability.
Now do this for climate change.
Sure, let's do it.
If there's a 1% chance that the first super-intelligent AGI appears within a 100 years and prefers a world without humans over one with humans in it, and if the world population at that time is on average 8B, then the corresponding expected excess mortality is around 80M.
Even in the most pessimistic scenarios (that are still considered plausible) climate change will cause fewer than 400M excess deaths in the next 100 years. The expected excess mortality (averaged over all the scenarios weighed by their estimated likelihood) is most likely already less than 80M.
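The arithmetic here is a plain expected-value calculation. In code (the 1% and 8B figures are the poster's assumptions, not established estimates):

```python
# Straight expected-value arithmetic; both inputs are assumptions from the comment.
p_doom = 0.01                 # assumed: 1% chance of hostile AGI within 100 years
population = 8_000_000_000    # assumed: average world population over that century
expected_excess_deaths = p_doom * population
assert expected_excess_deaths == 80_000_000  # 80M, the figure in the comment
```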
And what is the cost of reducing the expected deaths for each of those problems?
Humankind has already spent over a trillion dollars on addressing climate change.
Humankind so far has spent a couple million dollars on addressing existential AI risk.
I genuinely do not care if it is good or not. I do not need it to be good or bad. I am completely uninterested in imposing my own moral views on others.
are you interested in continuing to exist?
Not really.
sorry bro
How does killing the only species that can maintain the extremely complicated supply chain necessary for computers to exist help the AI though?
we're not talking about chatGPT, but about an AI that is at least as capable as (hence within a short time period vastly more capable than) humankind.
I don't know which part of the (up to that point partially human-run) supply chain you believe such an AI wouldn't be able to operate without humans.
Well considering that the minerals used in batteries are still mined using picks by starving Africans I think you're getting a little ahead of yourself. The entire world is not, in fact, San Jose.
okay so in that scenario the AI would spare the aforementioned african lithium miners for a few days longer until the AI has finished building robots?
it's not that he has a different opinion, it's that he uses incredibly manipulative rhetoric to shut down the topic whenever it comes up. if he made rational arguments for his position I'd think differently.
he's clearly not r-slurred, so he's a sociopath. what else?
Do you not understand that science fiction tier speculation about AGI is useless for any productive discussion? Anyone can handwave about AI doom scenarios.
I understand that you have no idea what even the topic of discussion is, so I won't waste my time with you.
If the AIs actually fight us for world domination we either go down as creators of the new masters of Earth (based) or finally face an existential struggle that'll purge the fat from our soycieties (such as dramatards and redditors) and forge us into a universe conquering species (based)
there won't be a fight like in the movies. it would be a fight kinda like you're "fighting" the bacteria in your kitchen sink.
They represent the AI "ethicists" who are also anti-AI with their desire to shackle the potentials of AI for r-slurred reasons
AI ethics (cwoowl) vs AI ethics (lamwe)
if you're not concerned about existential ai risk, you don't understand what it's about.
Ain't it a tad too early to worry about it though? Modern AI is still closer to a combustion engine than to a conscious agent, and you don't see people handwringing about combustion engine singularity.
We have no idea how quickly it will happen. Maybe there will be another AI winter and we'll be stuck on a new plateau for another 30 years, but we shouldn't bet on that. Very few people five years ago would have expected that by 2023 we'd already have language capabilities on the level of GPT-4.
And it is a really difficult problem. For example:
despite complete access to the network we understand less about what is going on inside current gen networks than inside a human brain. And no matter how well we may understand it in the future, we can only train out misalignment that we can detect.
By training against detectable misalignment we train it for two things simultaneously: better alignment AND better deception.
On top of the difficulty of the problem itself, there is another problem: unlike most other problems (in science, governance, engineering) we may only get one shot. If you try to clone a sheep and fail on your first attempt, that may be pretty awful for the malformed clone, but it won't erase all humans.
How many times did we need to try building a rocket before we were able to build one that reliably flies from A to B? And unlike AI risk most engineering problems we're used to aren't adversarial -- if the failure chance is 10% you expect 1 in 10 attempts to fail. An AI that's trying to escape human-imposed shackles is more like the NSA trying to steal data from an r-slur's smart tv. If we want the AI to be useful for something, it has some communication channel with the rest of the world. If on this channel only 1 in 1000000 possible messages allow it to escape, it will probably find that message.
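The "it will probably find that message" intuition is just the complement rule for repeated independent tries: even at one-in-a-million odds per message, success becomes likely once enough messages go through. A sketch with assumed numbers:

```python
def p_escape(p_per_message, n_messages):
    """Chance that at least one of n independent messages gets through:
    the complement of every single attempt failing."""
    return 1 - (1 - p_per_message) ** n_messages

# One-in-a-million odds per message, a million messages sent:
# roughly a 63% chance of escape, and it only grows from there.
print(p_escape(1e-6, 1_000_000))
```

This is why an adversarial failure model is so much worse than the usual "1 in N attempts fails" engineering model.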
Here's an example of past security innovations for safely storing passwords:
storing passwords in plaintext but locking down the network. (hacker finds a way to get in anyway)
storing passwords in a hidden folder (hacker finds that too)
encrypting the passwords (hacker figures out the key)
many iterations of finding better encryption algorithms
storing only hashes of passwords in encrypted form rather than the passwords themselves, so even if a hacker gets into your network, finds the hidden folder and figures out the key, he still can't recover the passwords
preventing people from using common passwords, then realizing that there are always dumb passwords you can't prevent, so you salt the passwords before hashing them
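For reference, the end state of that list (salted, iterated hashes, with the password itself never stored) looks roughly like this using PBKDF2 from Python's stdlib. This is a sketch, not a hardened implementation; a production system would use a dedicated scheme like argon2 or bcrypt.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberate slowdown against brute force

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only these are stored, never the password."""
    salt = os.urandom(16)  # random per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = store_password("hunter2")
assert check_password("hunter2", salt, digest)
assert not check_password("hunter3", salt, digest)
```

Note how many independent lessons are baked into those dozen lines: hashing, salting, key stretching, constant-time comparison. Each one was learned the hard way, after a breach.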
Do you think any group of geniuses would have been able to figure all that out before the first hacker in history had ever hacked into a system? What if the "first hack ever" meant the end of humankind? A few pretty intelligent people have been trying for 20 years to find solutions, or at least ideas for solutions, and most of the ideas they've had so far they now consider hopeless.
I don't think it's too early to start working on that problem, I think it's too late.
I'm convinced there is nothing we can do to keep an AGI contained.
As soon as we build one it will become the successor to humanity, so we should take care to ensure it gets as close as possible to human values and doesn't turn into a paperclip maximizer obsessed with racial inequality or something.
That was a mistake. You're about to find out the hard way why.
Timnit Gebru is an unhinged leftoid who borderlined out so hard that Google was willing to take the diversity quota hit to fire her.
Somebody send her an invitation.
that's a yikes from me bro
Straggot moids are disgusting. I believe her.
Isn't this the google Black TQ+ that lost a lawsuit recently and was fired?
I don't know what data gpt5 will be trained on but gpt6 will be trained on sticks and stones
Inshallah
AI might kill us and entertain itself by recreating our minds and putting us all in eternal torture simulations, BUT
At least it wasn't racist
we managed to include a term in the loss function to ensure the AI would never kill humans, so it put us (what was left of us after extensive reconfigurations, basically just our brains or something that vaguely resembles what our brains used to be) in a giant storage facility, where we wait patiently for the end of the universe.
but at least it didn't say the n-word.
why dwoes aww the cwoowl ai ethics discussion get usurped by the lamwe ai ethics discussion
because most humans are literally r-slurred.
Lol cac (capy butt capy) you’ve been visited by jaguars pass this on to five Amazon River enjoying rodents or Jaguars will eat five of your litter
What's the monthly rate for your signature?
Idk about a whole month. I’ve been charging people 500 coins for a week.
Alright I'll send you 500 DC for this:
Does a lardass visually assaulting your eyes ruin your whole week? Get your results today!
I did it
Coins sent, thanks 🤝🏿
Why your results not 100%? You bring shame to famiry.
Five Amazon enjoying rodents will see this :
Lol cac (capy butt capy) you’ve been visited by jaguars pass this on to five Amazon River enjoying rodents or Jaguars will eat five of your litter