Can't you run these AI chat bots on your local machine?


:fawfulcopter:

Not really. A decent LLM is about 250GB, so good luck loading that unless you have a chain of Teslas cobbled together on some Linux shit.
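
For anyone wondering where a number like 250GB comes from: weight memory is roughly parameter count times bytes per parameter, before you even count the KV cache and other overhead. A minimal sketch of that arithmetic (the model sizes are illustrative, not any specific model):

```python
# Back-of-envelope: weight memory ~= parameter count * bytes per parameter.
# Real usage adds KV cache, activations, and framework overhead on top.

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,  # full 16-bit weights
    "int8": 1.0,       # 8-bit quantized
    "int4": 0.5,       # 4-bit quantized (Q4-style GGUF)
}

def weight_gb(params_billions: float, fmt: str) -> float:
    """GB needed just to hold the weights (1 GB = 1e9 bytes)."""
    return params_billions * BYTES_PER_PARAM[fmt]

for b in (7, 13, 70, 125):  # illustrative model sizes, in billions of params
    row = ", ".join(f"{fmt}: {weight_gb(b, fmt):.0f} GB" for fmt in BYTES_PER_PARAM)
    print(f"{b:>3}B params -> {row}")
```

A model in the low hundreds of billions of parameters at 16-bit precision lands right around that 250GB figure, which is why it takes several datacenter GPUs chained together to hold it.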

Is 250GB even that much?

I had a 4TB hard drive back in like 2016, and now I have that much in SSDs in my current machine.

250GB is like one Call of Duty game nowadays.

VRAM, not storage.

ahhh ty my bad

average rdrama "stem"cel

Just string together a bunch of raspberry pi bro

https://media.tenor.com/wS9gJkWOuecAAAAx/coroca-keanu-reeves.webp

:#marseybrainlet:

Might as well delete after this embarrassment

nah

vram not storage space :marseynyan:

Just use your SSD for VRAM then :#marseygigaretard: hey microsoft, that'll be a monthly salary of 50k please thank you.

I swear to god codecels are so fricking stupid.

lol wut? Fimbulvetr-11B-v2.Q4_K_S.gguf is like 7 gb and works fine for roleplay
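
For reference, running a small quantized GGUF like that locally is only a few lines with llama-cpp-python; the model path and sampling settings below are placeholders for whatever you actually downloaded:

```python
# Minimal sketch using llama-cpp-python to chat with a local GGUF model.
from llama_cpp import Llama

llm = Llama(
    model_path="./Fimbulvetr-11B-v2.Q4_K_S.gguf",  # the ~7 GB quantized file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload every layer to the GPU if it fits
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a roleplay partner."},
        {"role": "user", "content": "Describe the tavern we just walked into."},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(reply["choices"][0]["message"]["content"])
```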

It's been a minute since I worked on local LLM shit, but last I remember people were making models way smaller through quantization while still retaining a lot of the performance - did that just fizzle out because the tradeoffs were worse than people thought?
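
It didn't fizzle out; 4-bit-ish quantization is still how most local models ship, and the Q4_K_S file mentioned above is exactly that. A toy sketch of the core idea, block-wise quantization with one scale per block (real formats like llama.cpp's Q4_K are more elaborate):

```python
# Toy block-wise 4-bit quantization: store int4 values plus one float scale
# per block, trading a little rounding error for ~4x smaller weights.
import numpy as np

def quantize_q4(w: np.ndarray, block: int = 32):
    """Quantize weights to 4-bit ints with one scale per block."""
    w = w.reshape(-1, block)
    scale = np.maximum(np.abs(w).max(axis=1, keepdims=True) / 7.0, 1e-8)
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # int4 range
    return q, scale

def dequantize_q4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_q4(w)
err = np.abs(dequantize_q4(q, s) - w).mean()
print(f"mean absolute rounding error: {err:.4f}")  # small, which is why 4-bit mostly holds up
```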

DeepSeek-V2-Lite needs only about 40GB of VRAM for inference (but a lot more for fine-tuning/training). The cheapest way to deploy it would be 3x RTX 4060 Ti (16GB version), which gives you 48GB across the cards.
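
A minimal sketch of what that split-across-cards setup looks like with Hugging Face transformers plus accelerate; the model id, memory caps, and prompt are assumptions to adjust for your actual setup:

```python
# Sketch: shard one model across several consumer GPUs by capping per-card
# memory and letting accelerate place layers automatically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V2-Lite-Chat"  # assumed Hugging Face repo name

tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",                                # spread layers across visible GPUs
    max_memory={0: "15GiB", 1: "15GiB", 2: "15GiB"},  # leave headroom on 16GB cards
    trust_remote_code=True,                           # DeepSeek-V2 ships custom modeling code
)

inputs = tok("Write a haiku about VRAM.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```

Note that device_map="auto" is simple layer splitting, so the cards take turns rather than working in parallel; a dedicated inference server is usually faster, but this is the least-fuss way to check the model actually fits.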

LLMs that run locally on consumer hardware are all a bit shit. The tradeoff is that you have a significantly worse experience than the frontier models.

You can do fine on the new Macs if you memorymaxx them, since Apple Silicon's unified memory doubles as VRAM.

Yes that's what all the smart !coomers !codecels do.

What me and my "accidentally sent to an all female prison" roleplay bot do is between me and my living god

We can ERP if you ever get bored of the bot and want to retvrn :#grugblush:

I am biologically incapable of getting off unless carp is crying

I can dress up like carp

Dibs on being Gary Oak.
