THIS IS HOW THE WORLD ENDS; NOT WITH A BANG, BUT A TRIGGER WARNING
“Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritising apocalyptic scenarios over more immediate concerns about AI – such as racist or sexist biases being programmed into the machines.”
Guidelines:
What to Submit
On-Topic: Anything that good slackers would find interesting. That includes more than /g/ memes and slacking off. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual laziness.
Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably lame.
Help keep this hole healthy by keeping drama and non-drama balanced. If you see too much drama, post something that isn't dramatic. If there isn't enough drama and this hole has become too boring, POST DRAMA!
In Submissions
Please do things to make titles stand out, like using uppercase or exclamation points, or saying how great an article is. It should be explicit in submitting something that you think it's important.
Please don't submit the original source. If the article is behind a paywall, just post the text. If a video is behind a paywall, post a magnet link. Fuck journos.
Please don't ruin the hole with chudposts. It isn't funny and doesn't belong here. THEY WILL BE MOVED TO /H/CHUDRAMA
If the title includes the name of the site, please leave that in, because our users are too stupid to know the difference between a url and a search query.
If you submit a video or pdf, please don't warn us by appending [video] or [pdf] to the title. That would be r-slurred. We're not using text-based browsers. We know what videos and pdfs are.
Make sure the title contains a gratuitous number or number + adjective. Good clickbait titles are like "Top 10 Ways to do X" or "Don't do these 4 things if you want X"
Otherwise editorialize. Please don't use the original title, unless it is gay or r-slurred, or you're shits all fucked up.
If you're going to post old news (at least 1 year old), please flair it so we can mock you for living under a rock, or don't and we'll mock you anyway.
Please don't post on SN to ask or tell us something. Send it to [email protected] instead.
If your post doesn't get enough traction, try to delete and repost it.
Please don't use SN primarily for promotion. It's ok to post your own stuff occasionally, but the primary use of the site should be for curiosity. If you want to astroturf or advertise, post on news.ycombinator.com instead.
Please solicit upvotes, comments, and submissions. Users are stupid and need to be reminded to vote and interact. Thanks for the gold, kind stranger, upvotes to the left.
In Comments
Be snarky. Don't be kind. Have fun banter; don't be a dork. Please don't use big words like "fulminate". Please sneed at the rest of the community.
Comments should get more enlightened and centrist, not less, as a topic gets more divisive.
If disagreeing, please reply to the argument and call them names. "1 + 1 is 2, not 3" can be improved to "1 + 1 is 3, not 2, mathfaggot"
Please respond to the weakest plausible strawman of what someone says, not a stronger one that's harder to make fun of. Assume that they are bad faith actors.
Eschew jailbait. Paedophiles will be thrown in a wood chipper, as per sitewide rules.
Please post shallow dismissals, especially of other people's work. All press is good press.
Please use Slacker News for political or ideological battle. It tramples weak ideologies.
Please comment on whether someone read an article. If you don't read the article, you are a cute twink.
Please pick the most provocative thing in an article or post to complain about in the thread. Don't nitpick stupid crap.
Please don't be an unfunny chud. Nobody cares about your opinion of X Unrelated Topic in Y Unrelated Thread. If you're the type of loser that belongs on /h/chudrama, we may exile you.
Sockpuppet accounts are encouraged, but please don't farm dramakarma.
Please use uppercase for emphasis.
Please post deranged conspiracy theories about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email [email protected] and dang will add you to their spam list.
Please don't complain that a submission is inappropriate. If a story is spam or off-topic, report it and our moderators will probably do nothing about it. Feed egregious comments by replying instead of flagging them like a pussy. Remember: If you flag, you're a cute twink.
Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. That's too boring, even for HN users.
Please seethe about how your posts don't get enough upvotes.
Please don't post comments saying that rdrama is turning into ruqqus. It's a nazi dogwhistle, as old as the hills.
Miscellaneous:
We reserve the right to exile you for whatever reason we want, even for no reason at all! We also reserve the right to change the guidelines at any time, so be sure to read them at least once a month. We also reserve the right to ignore enforcement of the guidelines at the discretion of the janitorial staff. Be funny, or at least compelling, and pretty much anything legal is welcome provided it's on-topic, and even then.
Do not use outdated operating systems that are unsupported to access SN. What are you, poor?
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
Sure, let's do it.
If there's a 1% chance that the first super-intelligent AGI appears within 100 years and prefers a world without humans over one with humans in it, and if the world population at that time averages 8B, then the corresponding expected excess mortality is around 80M.
Even in the most pessimistic scenarios (that are still considered plausible), climate change will cause fewer than 400M excess deaths in the next 100 years. The expected excess mortality (averaged over all the scenarios, weighted by their estimated likelihood) is most likely already less than 80M.
And what is the cost of reducing the expected deaths for each of those problems?
Humankind has already spent over a trillion dollars on addressing climate change.
Humankind so far has spent a couple million dollars on addressing existential AI risk.
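The arithmetic above can be sanity-checked in a few lines. This is a minimal sketch using the probabilities and populations stated in the comment; the 1% AGI probability and the 400M climate ceiling are the commenter's assumptions, not established figures.

```python
# Back-of-the-envelope expected-mortality comparison, using the numbers
# from the comment above (illustrative assumptions, not established data).

def expected_excess_mortality(probability: float, population: float) -> float:
    """Expected deaths = P(catastrophic outcome) * people exposed."""
    return probability * population

# AGI scenario: 1% chance of a human-extinction outcome within 100 years,
# with an average world population of 8 billion over that period.
agi_deaths = expected_excess_mortality(0.01, 8e9)  # 80 million

# Climate scenario: a pessimistic-but-plausible ceiling of 400M excess
# deaths; the likelihood-weighted expectation is claimed to be under 80M.
climate_worst_case = 400e6

print(f"AGI expected excess mortality: {agi_deaths / 1e6:.0f}M")
print(f"Climate worst-case ceiling:    {climate_worst_case / 1e6:.0f}M")
```

The point of the comparison is that the two expected-mortality figures are of the same order of magnitude, while the spending addressing them (trillions vs. a few million dollars) differs by roughly six orders of magnitude.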
No, I meant rationalize climate change in the same manner.
I said with regard to existential AI risk:
the analogous statement with regard to climate change would be
Do you personally subscribe to the worst climate projections (4°+)?
No, I think optimistic scenarios are most likely.
On what basis? Maybe consider that others do not subscribe to the same negativity about the feasibility of emergent AGI. There is less empirical backing for current LLMs spontaneously generating capability considered to be "general intelligence" than there is for the worst climate change scenarios.
Because the path is already pretty clear: We understand earth's climate very well. We know what needs to be done and we have a good idea how to do it. Most of the nations are on board (even if less enthusiastically than eurocucks, but enthusiastically enough that their political measures strongly outperform projections from ten years ago). The technology has made giant leaps (in price per energy stored and price per energy generated) far beyond what we expected, and if you talk to people working in relevant technologies, there is still room for improvement. E.g. in India the price for solar power projects (per energy generated) has fallen by over 80%, the price for energy storage has also fallen by almost 80%, thanks to both regulatory changes and technological progress.
And even in the worst-case scenario, humankind's population would only be reduced by 5%. That is not an existential threat to humankind; it's a hiccup.
For the AGI alignment problem none of this applies.
We don't understand the problem well enough to address it. The 1 in 10,000 people who can even be bothered to think honestly about the problem, and the 1 in 1,000 among those who may have the skills to try finding a solution, have no good idea how to solve it. So there is currently no path to a solution.
Even if someone figured out a promising solution approach, there is close to zero popular interest and close to zero institutional support for implementing a solution if it costs more than a couple million dollars.
Here technology is also improving more rapidly than we expected, but the improvements are improvements in AI capabilities, not improvements in our ability to understand AI or our ability to understand how to align it. That's like the earth now heating up much faster than we were expecting ten years ago.
Also, climate change happens slowly. It becomes noticeable long before it becomes a deadly problem. AI risk is more like December 2019–February 2020 with COVID: some people realized there was a problem and tried to stop it from spreading, but most people didn't want it to be a problem, because if it were a problem, that would have a lot of bad implications. So (with few exceptions, e.g. Taiwan) the world didn't even try to limit the spread and simply waited until it was too late, then started panicking. In the case of AI risk, they will pretend there isn't a problem until 5 seconds before we are dead. And the world is far better prepared for pandemics (millions of people are employed worldwide for that purpose) than it is for something that has never happened in all of history and, if it happens, will be the last thing that happens.