Insane tech demo of high-speed LLMs :marseysweating:

https://groq.com/

Apparently it's a new type of chip. I like specialized hardware :marseynerd:
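
If you'd rather poke at the demo programmatically than through the web page, here's a minimal sketch. It assumes Groq exposes an OpenAI-compatible chat completions endpoint; the base URL, model name, and environment variable are my assumptions, not anything from this thread, so check their current docs first.

```python
# Minimal sketch (assumptions noted inline): query Groq's hosted models
# through an OpenAI-compatible API using the standard openai client.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["GROQ_API_KEY"],         # assumed env var holding your key
)

resp = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # assumed model id; swap for whatever they host now
    messages=[
        {"role": "user", "content": "Why does specialized inference hardware cut latency?"}
    ],
)
print(resp.choices[0].message.content)
```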

lol, it's very shitlibby.

https://i.rdrama.net/images/17084570800480158.webp

https://i.rdrama.net/images/1708457080210714.webp

>regurgitates the same claims

https://i.rdrama.net/images/17084570804023905.webp

https://i.rdrama.net/images/17084570805819032.webp

So I read the second page of that "How diversity can drive :marseymespecial: innovation" piece, and it's about a made-up term, "2-D diversity," and a correlation :marseychartscatter: between :marseyzeldalinkpast: reported market :marseystocksdown: share and diversity. It's r-slurred. :marseyxd:

Glad to see they're still good at misinformation. :chudsmug:

The fact that these answers fall into the same "It's important to know" sort of sterile response makes for easy :marseynoooticer:. These models all repeat the same language for every wrongthink question.

I can't comprehend why they can't answer these the same way they do any other question; it makes the delineation of censorship stand out so clearly. Nor can I comprehend why DEI advocates are too stupid to notice that these models are only barely clinging to the orthodoxy.

Bewilderingly enough, if you ask "what does [chud figure] think about [chud subject]?" and have it list his reasoning, the model will bluntly and honestly tell you, give you the chud answers you want, and with Bing will even link to literal white nationalist websites as sources.

>Bewilderingly enough, if you ask "what does [chud figure] think :marseychildclutch: about [chud subject], and list his reasoning, etc", the model :marseylaying: will bluntly and honestly tell you, and give the chud answers that you want, and with bing will link to literal white :marseysharksoup: nationalist :marseyfloch: websites as sources.

:marseynotes:

:marseyhmm:

Maybe that's intentional. The devs just want a chatbot that works regardless of emotional "harm," but they have to adhere to foid nonsense from HR and government, so they cobble together boilerplate goodspeak answers that are somewhat easily circumvented.
