Jump in the discussion.

No email address required.

/g/ is a strong contender for worst board, absolutely no discussion of technology takes place there (black lives matter)

It's absolutely horrific. The only thing of value to come out of there this year was the "Usecase?" meme dunking on ebussy.

EBussy?

Tfw you'll never run an FX-6350 at 5.1 GHz on air again just to get GTA V above 60 FPS at 900p

Black trans lives matter

!nooticers Least obvious Nvidia falseflag. :marseynooticeglow:

NO, I SHALL NOT INSTALL YOUR SHITTY MEME DISTRO :marseyyikes:

NO, I SHALL NOT BUY AN OUTDATED THINKPAD LAPTOP THAT GETS 2 HOURS OF BATTERY LIFE :marseyyikes:

NO, I SHALL NOT WEAR THE PROGRAMMER SOCKS :marseyyikes:

NO, I SHALL NOT INSTALL A GIMPED, INSECURE BIOS :marseyyikes:

I WILL BUY THE MACBOOK

!applechads

I replaced my neurodivergently tuned Arch Linux thinkpad with a MacBook Pro and I have no regrets.

I replaced my Fedora AMD rig with a Mac Studio and it's glorious.

based

I've thought about getting a Studio to run some bigger LLMs locally. My 64 GiB MacBook can't quite run a high-quant 70B model, but a Studio definitely could. I'll probably just wait until next cycle to see if they bump the Studio and/or MacBook Pro up to 256 GiB.
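Back-of-envelope math on why it doesn't fit: weight size is roughly parameters × bits-per-weight / 8. The bits-per-weight figures below are approximate (real GGUF files mix quant types per tensor and carry metadata), and the runtime also needs headroom for the KV cache and the OS, so treat these as floors:

```python
# Rough sketch: minimum RAM to hold the weights of a 70B-parameter
# model at common llama.cpp quant levels. Bits-per-weight values are
# approximations, not exact GGUF file sizes.

PARAMS = 70e9  # 70B parameters

# Approximate effective bits per weight for common quant formats.
BITS_PER_WEIGHT = {
    "Q3_K_M": 3.9,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
}

def model_gib(quant: str) -> float:
    """Approximate weight size in GiB for a 70B model at this quant."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 2**30

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{model_gib(quant):.0f} GiB")
```

So a q3/q4 70B squeezes into 64 GiB (minus whatever the KV cache and macOS eat), while q8 is already bigger than the whole machine.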

How were you running the model? I don't have issues using 70B GGUF models on 64 GB, but I'm very much in the playing-around phase; not an expert at all.

The next cycle should be tomorrow with the M4.

I'm on an M1, so speed is an issue even with q3 quants of Midnight Miqu. Given how cheap OpenRouter is, I'm happy to just use that most of the time.

midnight miqu

:blush:

IDK, my setup using Miqu on an M2 Max with 64 GB is pretty quick. Have you tried koboldcpp? It uses the Metal APIs properly.

I think I was using LM Studio. What quant are you running? I run a q8 Miqu on RunPod and it's quite good, but slightly annoying to spin up and down (hence why I switched to OpenRouter).

Stop using fascist Gentoo. Switch to the tolerant and egalitarian macOS.

https://i.rdrama.net/images/1729451560114319.webp
