
Scientific proof that ChatGPT turns kids into r-slurs :marseyscientist:

https://www.axios.com/2024/08/15/ai-tutors-learning-education-khan-academy-wharton

I got a message for all the robots out there, and I don't need no fricking AI to generate it for me:

:#marseyfuckyou:


I tried a couple of AI tools to help with some research I was doing for a publication. They were wrong half the time about documented findings and provided references that didn't exist.

Never again.


I was/am building an AI applet to help with my job (trades). I would like a virtual assistant that remembers all my job sites and the little undocumented details.

Because of the way the models work, they can't effectively deal with a lot of very similar information. If you have a database of niche info all written in a similar way, the model will mix and match it as it sees fit.

The only way I can see to overcome this is by indexing your data very specifically. Build a vector DB with one entry per research paper (in your case), and then only ask about specific papers by pointing the model at a single DB entry.
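
A minimal sketch of what that per-paper indexing could look like, using plain numpy cosine similarity. The embed() helper and the paper IDs here are hypothetical stand-ins for whatever embedding model and corpus you actually use:

```python
# Sketch: index chunks keyed by paper ID, then restrict retrieval to one paper.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding (hash-seeded noise) so the sketch runs end to end;
    # swap in a real embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

# Vector "DB": one entry per paper, holding that paper's chunks and vectors.
index: dict[str, list[tuple[str, np.ndarray]]] = {}

def add_paper(paper_id: str, chunks: list[str]) -> None:
    index[paper_id] = [(chunk, embed(chunk)) for chunk in chunks]

def query_paper(paper_id: str, question: str, k: int = 3) -> list[str]:
    # Only search within the single paper the user pointed at,
    # so similar-sounding chunks from other papers can't get mixed in.
    q = embed(question)
    scored = [
        (float(np.dot(q, vec) / (np.linalg.norm(q) * np.linalg.norm(vec))), chunk)
        for chunk, vec in index[paper_id]
    ]
    return [chunk for _, chunk in sorted(scored, reverse=True)[:k]]

add_paper("smith_2021", ["Methods: 48 subjects ...", "Results: effect size 0.4 ..."])
print(query_paper("smith_2021", "What was the effect size?", k=1))
```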


Yea, people really misuse LLMs. It's more the sort of tool you feed your own notes into and have it format them in a formal, spelling-mistake-free way than the sort of thing you use for research.

That being said, 99% of r-slurs who try to use it for research leave the temperature way too high, which leads the AI to be "creative". Turn that bad boy down to 0.1 and suddenly it becomes way better at math and translation.
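
For what that looks like in practice, here's a minimal sketch assuming the OpenAI Python client; the model name is just an example, not a recommendation:

```python
# Sketch: low temperature for factual/deterministic tasks, using the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, swap in whatever you use
    messages=[
        {"role": "user", "content": "Translate to German: 'The meeting is at 3 pm.'"},
    ],
    temperature=0.1,  # low temperature = less "creative", more consistent output
)

print(response.choices[0].message.content)
```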

