Scientific proof that ChatGPT turns kids into r-slurs :marseyscientist:

https://www.axios.com/2024/08/15/ai-tutors-learning-education-khan-academy-wharton

I got a message for all the robots out there, and I don't need no fricking AI to generate it for me:

:#marseyfuckyou:


I tried a couple of AI cowtools to help with some research I was doing for a publication. It was wrong half the time about documented findings and provided references that didn't exist.

Never again.


I was/am building an AI applet to help with my job (trades). I would like a virtual assistant that remembers all my job sites and the little undocumented details.

Because of the way the models work, they can't effectively deal with a lot of very similar information. If you have a database of niche info all written in a similar way, it will mix and match as it sees fit.

The only way I can see of overcoming this is by indexing your data very specifically. Make a vector db with one entry per research paper (in your case), and then only ask about a specific paper by pointing the model at that single db entry.
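A minimal sketch of that idea in plain Python, with made-up embeddings and a dict standing in for a real vector store (a library like FAISS or Chroma would replace it in practice):

```python
import math

# Toy "vector db": one entry per paper, indexed separately so a query can be
# restricted to a single entry instead of searched across everything.
# The embeddings are made up; a real setup would come from an embedding model.
vector_db = {
    "paper_01": {"text": "Findings on topic A ...", "embedding": [0.9, 0.1, 0.0]},
    "paper_02": {"text": "Findings on topic B ...", "embedding": [0.1, 0.9, 0.0]},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_embedding, paper_id=None):
    # Pointing the query at one db entry avoids the mix-and-match problem:
    # the model only ever sees context from the paper you asked about.
    candidates = [paper_id] if paper_id else list(vector_db)
    best = max(candidates,
               key=lambda k: cosine(query_embedding, vector_db[k]["embedding"]))
    return best, vector_db[best]["text"]

hit, _ = retrieve([0.8, 0.2, 0.0])                  # searches all papers
pinned, _ = retrieve([0.8, 0.2, 0.0], "paper_02")   # restricted to one entry
```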


Yea, people really misuse LLMs. It's more the sort of tool you feed your own notes into and have it format them in a formal, spelling-mistake-free way than the sort of thing you use for research.

That being said, 99% of r-slurs who try to use it for research leave the temperature way too high, which leads the AI to be "creative". Turn that bad boy down to 0.1 and suddenly it becomes way better at math and translation.
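The reason this works: temperature divides the logits before the softmax, so a low value concentrates nearly all the probability on the single most likely token (near-greedy decoding), while a high value flattens the distribution. A sketch with toy logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Temperature scales the logits before the softmax. High temperature
    # flattens the distribution ("creative"); low temperature puts almost
    # all the probability on the top token (near-greedy, more "precise").
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # toy next-token scores
creative = softmax_with_temperature(logits, 1.0)
precise = softmax_with_temperature(logits, 0.1)
```

At temperature 1.0 the top token gets roughly 63% of the mass; at 0.1 it gets essentially all of it.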


Foundation models don't "understand" things; they guess at what seems most likely. This is fine for a lot of cases, but it is not how one should do mathematics. They'd likely be better at teaching reading comprehension or history.
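A deliberately contrived illustration of "guessing what seems most likely": a predictor that just picks the most frequent continuation it saw in training can parrot answers it has memorized, but it has no mechanism for actually computing anything it hasn't seen (the training data below is invented for the demo):

```python
from collections import Counter

# Made-up "training data": arithmetic written as text, including one wrong
# example, the way it appears in real web corpora.
training = [("2+2=", "4"), ("2+2=", "4"), ("2+2=", "5"), ("3+3=", "6")]

counts = {}
for prompt, completion in training:
    counts.setdefault(prompt, Counter())[completion] += 1

def guess(prompt):
    # Return whatever continuation was seen most often: pattern completion,
    # not calculation. An unseen prompt has nothing to imitate.
    if prompt not in counts:
        return None
    return counts[prompt].most_common(1)[0][0]
```

`guess("2+2=")` gives "4" because that string was frequent, not because anything was added; `guess("17+26=")` has nothing to fall back on.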


Fun fact: no company has ever been able to produce an AI which can consistently distinguish between monkeys and black people. Self-driving cars are here, and so are universal voice translators, self-aiming guns, and pocket computers which can recognize every single product in a live video. But distinguishing between monkeys and black people is as difficult as solving a millennium prize problem; teams of PhD computer scientists will be working on it for decades before they get a solution which works well enough to be media-outrage-proof.

When someone finally finds the solution, they won't make headlines, but they'll be happy knowing they solved the AI problem of the century. They'll tell normies that they just fiddle around with facial recognition algorithms all day, but to people in the know, they'll be known as "Tom, the absolute genius who spent 26 years teaching google images how to tell blacks apart from apes".

