Sure, let's do it.

  • If there's a 1% chance that the first super-intelligent AGI appears within 100 years and prefers a world without humans over one with humans in it, and if the world population at that time is on average 8B, then the corresponding expected excess mortality is around 80M (see the quick arithmetic sketch after this list).

  • Even in the most pessimistic scenarios (that are still considered plausible), climate change will cause fewer than 400M excess deaths in the next 100 years. The expected excess mortality (averaged over all scenarios, weighted by their estimated likelihood) is most likely already less than 80M.
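
To make the back-of-the-envelope arithmetic behind both figures explicit, here is a minimal sketch in Python. The probabilities, death tolls, and the 20% weight on the worst climate scenario are illustrative assumptions consistent with the numbers above, not outputs of any model.

    # Expected excess mortality = P(scenario) * deaths in that scenario.
    # All numbers below are the illustrative assumptions from the bullets above.

    def expected_excess_deaths(probability: float, deaths: float) -> float:
        """Expected excess deaths for a single all-or-nothing scenario."""
        return probability * deaths

    # AGI: 1% chance that a misaligned super-intelligent AGI kills ~8B people.
    agi = expected_excess_deaths(0.01, 8e9)                    # 80,000,000

    # Climate: even giving the worst plausible scenario (<400M deaths) a
    # generous 20% weight (a hypothetical weight, not a forecast), the
    # probability-weighted expectation stays at or below the same 80M ballpark.
    climate_upper_bound = expected_excess_deaths(0.20, 4e8)    # 80,000,000

    print(f"AGI:     {agi:,.0f} expected excess deaths")
    print(f"Climate: {climate_upper_bound:,.0f} expected excess deaths (upper bound)")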


And what is the cost of reducing the expected deaths for each of those problems?

  • Humankind has already spent over a trillion dollars on addressing climate change.

  • Humankind has so far spent only a couple of million dollars on addressing existential AI risk.

No, I meant rationalize climate change in the same manner.

I said with regard to existential AI risk:

a 1% risk of total human extinction is worth spending significant resources on trying to avoid it

the analogous statement with regard to climate change would be:

a 25% risk of 250M human excess deaths is worth spending significant resources on trying to avoid it
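
One way to read the analogy, assuming the same expected-value framing as the earlier bullets (a reading I'm adding here, not an explicit claim above), is that both statements put the expected excess mortality in roughly the same ballpark:

    # Hedged reading of the analogy: under the stated numbers, both risks have
    # expected excess deaths of the same order of magnitude.
    agi_expected     = 0.01 * 8e9    # 1% chance of ~8B deaths   -> 80M expected
    climate_expected = 0.25 * 250e6  # 25% chance of 250M deaths -> 62.5M expected

    print(f"AGI:     {agi_expected:,.0f}")
    print(f"Climate: {climate_expected:,.0f}")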

Do you personally subscribe to the worst climate projections (4°C+)?

No, I think optimistic scenarios are most likely.

On what basis? Maybe consider that others do not subscribe to the same negativity about the feasibility of emergent AGI. There is less empirical backing for current LLMs spontaneously developing anything that could be considered "general intelligence" than there is for the worst climate change scenarios.

On what basis?

Because the path is already pretty clear:

  • We understand Earth's climate very well. We know what needs to be done, and we have a good idea of how to do it.

  • Most nations are on board (even if less enthusiastically than eurocucks, but enthusiastically enough that their political measures strongly outperform projections from ten years ago).

  • The technology has made giant leaps (in price per unit of energy stored and per unit of energy generated), far beyond what we expected, and if you talk to people working on the relevant technologies, there is still room for improvement. In India, for example, the price of solar power projects (per unit of energy generated) has fallen by over 80%, and the price of energy storage has also fallen by almost 80%, thanks to both regulatory changes and technological progress.

And even in the worst-case scenario, humankind's population would be reduced by only about 5% (400M out of 8B). That is not an existential threat to humankind; it's a hiccup.

For the AGI alignment problem, none of this applies:

  • We don't understand the problem well enough to address it. The 1 in 10,000 people who can even be bothered to think honestly about the problem, and the 1 in 1,000 among those who may have the skills to try to find a solution, have no good idea of how to solve it. So there is currently no path to a solution.

  • Even if someone figured out a promising approach, there is close to zero popular interest and close to zero institutional support for implementing a solution if it costs more than a couple of million dollars.

  • Here, too, technology is improving more rapidly than we expected, but the improvements are in AI capabilities, not in our ability to understand AI or our ability to understand how to align it. That's as if the earth were now heating up much faster than we expected ten years ago.

  • Also, climate change happens slowly. It becomes noticeable long before it becomes a deadly problem. AI risk is more like December 2019 to February 2020 with COVID: some people realized there was a problem and tried to stop it from spreading, but most people didn't want it to be a problem, because if it were a problem, that would have a lot of bad implications. So (with few exceptions, e.g. Taiwan) the world didn't even try to limit the spread, simply waited until it was too late, and then started panicking. In the case of AI risk, people will pretend there isn't a problem until five seconds before we are dead. And the world is far better prepared for pandemics (millions of people are employed worldwide for that purpose) than it is for something that has never happened in all of history and, if it does happen, will be the last thing that ever happens.
