Unbelievably fricking stupid. You see these guys on the Orange Site sometimes, convinced they can determine whether a language model is conscious by examining its outputs. They seethe nonstop about the Chinese Room, because it obliterates them, and they post incredibly poor-quality rebuttals. The key takeaway is that a machine which appears to behave like a conscious interlocutor may be conscious, but may equally be an unthinking rules engine of enormous size; behavior alone cannot distinguish the two. Searle goes further and argues that consciousness would not arise in a computer in any event, because merely shuffling symbols (syntax) can never add up to understanding (semantics), which is a much more contentious claim.

This moron has completely fallen for the ELIZA effect, mistaking the output of an overgrown Markov chain bot for intellect. The computer is not a person. It isn't aware. It's a machine spitting back drivel.
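The "overgrown Markov chain" jab is easy to make concrete: a word-level Markov chain is literally just a lookup table mapping each word to the words observed to follow it. A minimal sketch in Python (the corpus string and function names here are made up for illustration) shows there is nothing inside the table that could plausibly be conscious, yet its output still parses as language:

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for cur, nxt in zip(words, words[1:]):
        chain.setdefault(cur, []).append(nxt)
    return chain

def babble(chain, start, length=10, seed=0):
    """Walk the lookup table, picking a recorded successor at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the room follows the rules and the rules follow the room"
chain = build_chain(corpus)
print(babble(chain, "the"))
```

Every word it emits was genuinely observed following the previous one, so the stream looks locally coherent, which is exactly the trap the ELIZA effect describes: fluent-looking output from a trivially dumb mechanism.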


I’ve had more intelligent discussions with ELIZA in emacs than with a lot of Redditors.


Just because an organic brain may be necessary for consciousness doesn't mean it's sufficient. Redditors can still be soulless NPCs in my book.

