Funny how Google stock took a dive when they launched this, but the Bing demo was equally wrong (making up finance numbers and lying about a product) and nobody even noticed until after all the funny maymays were up.

AI search is extremely impressive as a concept and even as a product, but I'm not sure you can make it fundamentally reliable just by giving a language model internet access. It's still producing text based on plausibility, not fact. I feel like people want this AI explosion to happen right now because we've hit a point where the tech can talk sensibly, and in the public mind that's like 95% of AGI. But our definition of AI seems to be "acts human" rather than anything about intelligence itself. Some of these language model uses may be extremely impressive dead ends that get 95% of the way to the goal, where the last 5% can only be reached by a completely different path. Obviously this is only an issue on topics where facts are directly relevant, but that's pretty much every application we're being told is important.

making up finance numbers and lying about a product

So they trained it on comments by tech executives?

In this case it was making up information about someone else's finance numbers and products.

Google dropped a lot because it failed in the middle of their presentation.


https://i.postimg.cc/dVgyQgj2/image.png https://i.postimg.cc/d3Whbf0T/image.png

Stock changes aren't based on who's ahead. They're based on performance relative to expectations. People thought Google was in a leadership position for AI and found out that it's not so impressive. Microsoft pulled AI search out of a hat by partnering with OpenAI, beating expectations that had been set with Cortana and TayTay.

I see what you mean, but the narrative that came out was that the Bing AI is really good and just kind of scary/funny sometimes. There was no focus on its factual errors, or acknowledgement that its "scariest" and funniest moments are generally the ones that showcase the model's weakness, not its strength.

Most humans aren't intelligent so who cares if AI is?

Some of these language model uses may be extremely impressive dead ends that go 95% of the way to the goal,

They're all dead ends. You don't get consciousness by slapping a bunch of shit together or running a ton of regressions and calling that machine learning.

I agree, but consciousness shouldn't be (and hopefully is not) the goal of AI research. I don't think an AI search needs to be a sentient being, and that would be an extremely terrible idea. The focus should be on what is useful to humans.

My idea is that a good AI search would be designed to search and cross-reference the web for factual information, and would produce outputs in a form that was more rigid and not closely tied to a random language model. The actual AI work would be in the search and interpretation behavior itself instead of the composition. What we have instead is a language model that uses some real-world information as an input, but has no actual (or even simulated) knowledge of anything.
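For what it's worth, the design described above (search, cross-reference, emit a rigid structured answer instead of free-form model prose) could be sketched roughly like this. Everything here is hypothetical: the names (`cross_reference`, `Answer`), the URLs, and the numbers are made up, and the hard-coded source claims stand in for real retrieval and extraction, which would be where the actual AI work lives.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Answer:
    """Rigid output format: a claim plus its supporting evidence,
    rather than a fluent paragraph the model composed itself."""
    claim: str
    sources: list = field(default_factory=list)
    agreement: float = 0.0  # fraction of consulted sources that agree

def cross_reference(query: str, source_claims: dict) -> Answer:
    """Pick the claim most sources agree on and report its support.

    source_claims maps source URL -> claim string extracted from that
    source. Real extraction and normalization would be the hard part;
    here claims are compared as exact strings for illustration.
    """
    counts = Counter(source_claims.values())
    best_claim, votes = counts.most_common(1)[0]
    supporting = [url for url, c in source_claims.items() if c == best_claim]
    return Answer(best_claim, supporting, votes / len(source_claims))

# Made-up example: three "sources" report a figure, one disagrees.
result = cross_reference(
    "Q3 gross margin",
    {
        "https://example.com/a": "75%",
        "https://example.com/b": "75%",
        "https://example.com/c": "74%",
    },
)
```

The point of the sketch is that the answer carries its disagreement level with it: a downstream UI could refuse to state a figure at all when `agreement` is low, instead of confidently making one up.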

My neighbor, it's only intended to be a fancy search engine with an NLP interface, not a person. This is like calling cars a dead end because they can't fly.

"Dead ends" toward robo-consciousness, the singularity, the whatever-AI-doomspeak-du-jour is.

:marseyagree:
