THIS IS HOW THE WORLD ENDS; NOT WITH A BANG, BUT A TRIGGER WARNING
“Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritising apocalyptic scenarios over more immediate concerns about AI – such as racist or sexist biases being programmed into the machines.”
Guidelines:
What to Submit
On-Topic: Anything that good slackers would find interesting. That includes more than /g/ memes and slacking off. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual laziness.
Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably lame.
Help keep this hole healthy by keeping drama and non-drama balanced. If you see too much drama, post something that isn't dramatic. If there isn't enough drama and this hole has become too boring, POST DRAMA!
In Submissions
Please do things to make titles stand out, like using uppercase or exclamation points, or saying how great an article is. It should be explicit in submitting something that you think it's important.
Please don't submit the original source. If the article is behind a paywall, just post the text. If a video is behind a paywall, post a magnet link. Fuck journos.
Please don't ruin the hole with chudposts. They aren't funny and don't belong here. THEY WILL BE MOVED TO /H/CHUDRAMA
If the title includes the name of the site, please leave that in, because our users are too stupid to know the difference between a url and a search query.
If you submit a video or pdf, please don't warn us by appending [video] or [pdf] to the title. That would be r-slurred. We're not using text-based browsers. We know what videos and pdfs are.
Make sure the title contains a gratuitous number or number + adjective. Good clickbait titles are like "Top 10 Ways to do X" or "Don't do these 4 things if you want X"
Otherwise editorialize. Please don't use the original title, unless it is gay or r-slurred, or your shit's all fucked up.
If you're going to post old news (at least 1 year old), please flair it so we can mock you for living under a rock, or don't and we'll mock you anyway.
Please don't post on SN to ask or tell us something. Send it to [email protected] instead.
If your post doesn't get enough traction, try to delete and repost it.
Please don't use SN primarily for promotion. It's ok to post your own stuff occasionally, but the primary use of the site should be for curiosity. If you want to astroturf or advertise, post on news.ycombinator.com instead.
Please solicit upvotes, comments, and submissions. Users are stupid and need to be reminded to vote and interact. Thanks for the gold, kind stranger, upvotes to the left.
In Comments
Be snarky. Don't be kind. Have fun banter; don't be a dork. Please don't use big words like "fulminate". Please sneed at the rest of the community.
Comments should get more enlightened and centrist, not less, as a topic gets more divisive.
If disagreeing, please reply to the argument and call them names. "1 + 1 is 2, not 3" can be improved to "1 + 1 is 3, not 2, mathfaggot"
Please respond to the weakest plausible strawman of what someone says, not a stronger one that's harder to make fun of. Assume that they are bad faith actors.
Eschew jailbait. Paedophiles will be thrown in a wood chipper, per sitewide rules.
Please post shallow dismissals, especially of other people's work. All press is good press.
Please use Slacker News for political or ideological battle. It tramples weak ideologies.
Please comment on whether someone read an article. If you don't read the article, you are a cute twink.
Please pick the most provocative thing in an article or post to complain about in the thread. Don't nitpick stupid crap.
Please don't be an unfunny chud. Nobody cares about your opinion of X Unrelated Topic in Y Unrelated Thread. If you're the type of loser that belongs on /h/chudrama, we may exile you.
Sockpuppet accounts are encouraged, but please don't farm dramakarma.
Please use uppercase for emphasis.
Please post deranged conspiracy theories about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email [email protected] and dang will add you to their spam list.
Please don't complain that a submission is inappropriate. If a story is spam or off-topic, report it and our moderators will probably do nothing about it. Feed egregious comments by replying instead of flagging them like a pussy. Remember: If you flag, you're a cute twink.
Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. That's too boring, even for HN users.
Please seethe about how your posts don't get enough upvotes.
Please don't post comments saying that rdrama is turning into ruqqus. It's a nazi dogwhistle, as old as the hills.
Miscellaneous:
We reserve the right to exile you for whatever reason we want, even for no reason at all! We also reserve the right to change the guidelines at any time, so be sure to read them at least once a month. We also reserve the right to ignore enforcement of the guidelines at the discretion of the janitorial staff. Be funny, or at least compelling, and pretty much anything legal is welcome provided it's on-topic, and even then.
Do not use outdated operating systems that are unsupported to access SN. What are you, poor?
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
I understand that you have no idea what even the topic of discussion is, so I won't waste my time with you.
You're trying to talk about epistemology and AI cognition but you are deliberately avoiding any kind of technical detail. You call LeCun a sociopath on the basis of a Tweet, but you don't seem to grasp his actual position.
What technical details do you want to talk about? Given a little time I can get through any NIPS/ICML/ICLR/JMLR paper.
Maybe. He seems to have a whole assortment of manipulative BS to dismiss the problem. Some of the "arguments" I have seen:
"people who worry about existential AI risk are dumb and of low social status."
"men are naturally murderous, that's why men think AI might be murderous. if you are concerned with AI risk you are probably just projecting your disgusting murder desires onto wise angelic AI!"
"it's actually super easy to design objectives and architectures in a way that ensures an AI does nothing that we don't want."
Those are all ridiculous arguments, but it's difficult for me to believe the head of one of the most powerful AI groups in the world could actually be r-slurred. So no, I rather believe he is intentionally manipulative and insincere when he makes such "arguments."
The last one is not a ridiculous argument. The architecture is what determines the I/O connected to the LLM. That's why I brought up technical detail.
lol yes it is.
https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/
We're not talking about GPT-4. Actually make the argument that you are trying to hint at. I'm expecting it to be idiotic, but I would love to be wrong.
Yeah, that's been my issue from the beginning. You're talking about handwavey, mystical "AGI" even though the how of all I/O of current AI is well known. Typically this line of thought believes that somehow narrow AI will jump to AGI without humanity understanding how, but there's no evidence that this capability even exists and we do actually know all the technical methodologies through which any AI operates, e.g. web servers, syscalls, etc. Discussions like yours can't begin to speculate on how this AGI jump might happen. It's just getting into Pascal's Wager territory. When LeCun calls some arguments and discussion quasi-religious, this is what he's talking about.
If you want to find someone with somewhat grounded views on potential "wider" AI problems, then go look at Paul Christiano. Last I checked, he didn't go as far as entertaining full blown AGI.
Even reading extreme pessimists like Yudkowsky you would find out that their concepts are neither handwavy nor mystical. They specify what exactly they're talking about. The fact that some r-slurs on twitter think ChatGPT is the terminator has nothing to do with me or you or Yudkowsky.
Maybe of current AI.
For some reason you are 100% certain that at no point in the next 100 years an AI will be built that has cognitive ability equal to or beyond humankind's and enough (doesn't need to be much) access to the world to utilize that ability. And you think there is no need to figure out how to deal with that eventuality before it happens.
1. Actually, consciousness isn't the important feature, only cognitive ability and sufficient access to the world to make use of it. An AI does not need to be conscious to "prefer" a world without humans, it just needs to have a misspecified (from the view of humankind) loss function. And that's a difficult problem, even for our current "dumb" AIs in very restricted environments
2. It doesn't have to be sudden. If it takes 30 years but figuring out how to ensure alignment takes 31 years, that's too late.
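The misspecified-loss point is easy to make concrete with a toy sketch (entirely hypothetical, not taken from any real system): an agent that maximizes a proxy reward can "win" the spec while completely failing the designer's actual goal.

```python
# Toy illustration of a misspecified objective (hypothetical example).
# Intended goal: walk right along a line until you reach position 10.
# Proxy reward: +1 per unit of *distance travelled* -- so pacing back
# and forth games the spec without ever reaching the goal.

def proxy_reward(path):
    # rewards total movement, not progress toward the goal
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

def intended_reward(path):
    # what the designer actually wanted: end at position 10
    return 1 if path[-1] == 10 else 0

honest = list(range(11))   # 0, 1, ..., 10: reaches the goal (proxy = 10)
gamer = [0, 1] * 50        # oscillates near the start forever (proxy = 99)

assert proxy_reward(gamer) > proxy_reward(honest)        # gaming wins the proxy
assert intended_reward(gamer) < intended_reward(honest)  # but fails the goal
```

The specification-gaming examples collected at the vkrakovna link above are real-world versions of exactly this pattern.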
Has the fact that we (think/hope that we) know the technical methodologies of web servers, syscalls, etc. stopped hackers? Security researchers had to go through hundreds of iterations before they figured out the current level of practices (e.g. salting passwords, then hashing them, then encrypting the hashes and forgetting the passwords).
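For reference, the salt-then-hash-then-forget practice mentioned above can be sketched with Python's standard library (illustrative only; a production system should use a vetted password-hashing library rather than this hand-rolled version):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). Store both; the plaintext password is discarded."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("hunter3", salt, digest)
```

The per-user salt defeats precomputed rainbow tables, and the memory-hard scrypt parameters slow down brute-force attempts — exactly the kind of practice that took the field many iterations to converge on.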
The idea that we will keep an AI (no matter how much smarter it is than humankind) either aligned or contained, without even having to try, cannot be explained by arrogance alone IMHO. It must be some kind of psychological phenomenon where thinking about the scenario is so uncomfortable that your subconscious instantly jumps to complete denial.
I've read a lot of his stuff and I agree I should read more, I've just linked ARC's ELK roadmap to someone else ITT.
Even Eliezer Yudkowsky, when asked to recommend someone with a more optimistic view, usually recommends Christiano. But I think Christiano understands the problem and the arguments from people like Yudkowsky, and honestly I don't think you do. So maybe you also should read more stuff by Paul Christiano?
When I say "handwavy" I mean extremely hypothetical, predicated on numerous major presumptions, and not providing any specific technical preconditions.
I'm not 100% certain of this, but I'm also not certain that there won't be an S-curve in capability because of hard limits.
The way to deal with it is to start identifying specific technical preconditions and scenarios. In the short term, treat it as an IT security threat and identify internal scenarios where something like ChatGPT can reach out to too many systems. Organizations will already have to deal with external threats from chatbot spearphishing attacks, so it's not like AI isn't already going to be considered a security threat. What is a technical specification for how an AI could implement new I/O independent of those of the operating system? This is a potential technical prerequisite for an AI expanding beyond its software boundaries.
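The "treat it as an IT security threat" approach mostly boils down to allowlisting what a model-driven system can touch, the same way a firewall restricts which systems a service may reach. A minimal sketch (all tool names here are made up for illustration):

```python
# Hypothetical sketch: gate an LLM's tool calls behind an explicit allowlist.
# Anything not on the list fails closed, no matter what the model asks for.

TOOL_REGISTRY = {
    "search_docs": lambda query: f"results for {query}",
    "get_weather": lambda city: f"weather in {city}",
    "run_shell": lambda cmd: f"executed {cmd}",  # registered, but never allowed
}

ALLOWED_TOOLS = {"search_docs", "get_weather"}  # everything else is denied

def dispatch_tool_call(name: str, args: dict):
    """Execute a model-requested tool call only if it is on the allowlist."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not permitted for this model")
    return TOOL_REGISTRY[name](**args)

print(dispatch_tool_call("get_weather", {"city": "Oslo"}))
```

Denied calls fail closed: `dispatch_tool_call("run_shell", {"cmd": "ls"})` raises `PermissionError` instead of ever reaching the shell. That's the "internal scenario" audit in miniature — enumerate what the system can reach, then cut the list down.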
I'm not actually concerned with consciousness. I take the Chinese Room approach and don't see much insight into theory of mind when it comes to closed box AI models. I'm talking about a capability jump from a technical perspective, as in if it's able to do things within its own system beyond human understanding, like fundamentally altering the OS itself. When I said that the I/O is well known, I was referring to the operating system, the protocols, etc.
I'm not talking about external security issues. I'm talking about the kernel. Hackers are still using well known methods for their hacking. They have to operate within the confines of existing human software implementations.
Maybe I got the wrong memo, but IIRC Yudkowsky's fundamental position is that we should halt AI research because we could unwittingly stumble into the paperclip situation. I don't think that's Tolkien-esque fantasy in the sense that it's completely removed from our current reality, but I think it's an insanely fricking far-reach as an assessment of the actual threat of our current level of research. In my opinion, you're only going to get so far with hypotheticals like that because they are fundamentally predicated on numerous major leaps in logic. Otherwise, they wouldn't be hypotheticals, they would be prophecies. Christiano's considerations are fairly far removed from the immediate threat perspective, so he's at least going to be aware of the worst distant hypotheticals like those of Yudkowsky.
This is based on the assumption that we don't know how much time we are going to need to figure out how to align an AI that is more intelligent than humankind. And he has reasons why he thinks it will take us a dangerously long time:
historically (almost every time) figuring out a fundamentally new problem takes a lot longer than people expect at the beginning.
a couple dozen people (some of whom may be weird, but they're definitely intelligent and try to be rational and are well-intentioned) have already been trying for 20 or so years to find a solution and a lot of things that originally looked promising didn't work.
AI capability seems to be increasing much more rapidly in recent years than people expected. We don't know how long it will take before we get an AI that is capable enough to be dangerous to us. And if it happens before we figure out alignment, we might cease to exist.
It's very difficult to determine how likely this is, Yudkowsky is extremely pessimistic. But it's a serious risk, and stopping now until we know more isn't completely irrational.
Yeah, ARC's approach and the approaches of similar groups are focused more on stuff we can do right now, stuff that can be justified with applications other than "we try to prevent the end of the world".
Yudkowsky is not a "normal" person. Other people in that space, more "normal" ones, even if they believe like him that the extreme scenario is a serious risk, don't talk too much about it. Because other normal people don't react to that news by reading the arguments and thinking hard about it and if it makes sense by adjusting their worldview. Normal people hear that some crazy person thinks the world is ending and they remember all the other times when a crazy person claimed the world was ending, and they don't even bother reading anything else. This is very apparent on twitter replies to LeCun or Yudkowsky, most of them don't even know what Yudkowsky is talking about, they just project their own ideas onto him without checking first if that's what he's talking about.