https://news.ycombinator.com/item?id=41058194
What this implies is future models will be even better at sounding smart but even more likely to hallucinate and give you wrong answers.
The future is r-slurred.
No shit.
I was actually running a presentation for work about Granite/Ollama models, and this was one of the risks I brought up. Good models can't just be plug-and-play-and-forget. They require constant maintenance if you want them to keep performing well and accurately instead of drifting into delusions and bullshit.
Ours would have to be trained on our own user inputs and large datasets built from them.
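For what it's worth, the "constant maintenance" point can be made concrete with a tiny regression harness: run a fixed prompt set through the model on a schedule and flag drift. Everything below is an illustrative sketch, not Granite- or Ollama-specific — the prompts, expected substrings, and the `fake_ask` stand-in are all hypothetical placeholders you'd swap for your own model call.

```python
# Minimal regression check for a deployed model: each case pairs a prompt
# with a substring the answer must contain; anything missing is flagged.

def check_regressions(cases, ask):
    """Run each prompt through `ask` and collect cases where the
    expected substring is missing from the model's answer."""
    failures = []
    for prompt, must_contain in cases:
        answer = ask(prompt)
        if must_contain.lower() not in answer.lower():
            failures.append((prompt, must_contain, answer))
    return failures

# Stand-in for a real model call; with a local Ollama server you would
# replace this with a request to its generate endpoint instead.
def fake_ask(prompt):
    canned = {"What is 2 + 2?": "2 + 2 equals 4."}
    return canned.get(prompt, "I am not sure.")

cases = [
    ("What is 2 + 2?", "4"),
    ("Name the capital of France.", "Paris"),
]

failures = check_regressions(cases, fake_ask)
# The stand-in model misses the second case, so it shows up as a failure.
```

Run something like this nightly and you notice a model going sideways before your users do, which is the whole point of not treating it as fire-and-forget.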