What are the limiting factors on image generation?
Is it impossible to generate a high-quality image at all on, say, a current Pixel with its NPU, or is it just super fricking slow compared to an actual Nvidia GPU?
I've been playing with LLM chatbots on my Mac Studio, and it seems like RAM is the biggest limiting factor, then compute - but I guess GGUFs let you use disk storage too?
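For context: llama.cpp memory-maps GGUF files by default, so the OS pages weights in from disk as needed rather than copying everything into RAM up front. A minimal sketch using the llama-cpp-python bindings (the model path here is made up):

```python
# Sketch: loading a GGUF with llama-cpp-python (pip install llama-cpp-python).
# use_mmap=True (the default) memory-maps the file, so weights are paged in
# from disk on demand instead of being fully loaded into RAM first.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
    use_mmap=True,    # mmap the weights; the OS pages them in as they're touched
    use_mlock=False,  # don't pin pages, so the OS can evict them under pressure
    n_gpu_layers=-1,  # offload all layers to Metal/CUDA if they fit
)

out = llm("Q: What limits local LLM inference?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```

The catch is that paging from disk is orders of magnitude slower than RAM, so mmap mostly helps with load time and models that *almost* fit, not ones far bigger than memory.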
I have 64GB. My friends said that was too much, but I couldn't switch from my PC with 64GB to a Mac with less - especially with the memory being soldered and shared with the GPU.
The sharing with the GPU/NPUs makes it awesome at this AI stuff, though - even a 4090 only has 24GB of VRAM.
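The back-of-the-envelope math for whether a model fits is just parameter count times bytes per weight (plus some overhead for the KV cache). A quick sketch, with illustrative numbers:

```python
# Rough weight-memory estimate for a quantized model: params * bits / 8.
def model_footprint_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# ~4.5 bits/weight is a typical effective size for Q4_K_M-style quants.
for name, params, bits in [("7B @ Q4", 7, 4.5), ("70B @ Q4", 70, 4.5)]:
    print(f"{name}: ~{model_footprint_gb(params, bits):.0f} GB of weights")

# 7B @ Q4:  ~4 GB  - fits almost anywhere.
# 70B @ Q4: ~39 GB - over a 4090's 24GB of VRAM, but inside 64GB unified memory.
```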
It's VRAM, not RAM, that's the limit - anything but a GPU is going to be pretty unusable as soon as you go above the very worst models, never mind slow.
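One way to sanity-check that on an Nvidia card is to compare free VRAM against the model's weight size before loading. A sketch using PyTorch's CUDA utilities (the 39 GB figure is the illustrative 70B estimate from above):

```python
# Sketch: compare a model's weight size against available VRAM with PyTorch.
import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()  # (free, total) in bytes
    print(f"VRAM: {free_bytes / 1e9:.1f} GB free of {total_bytes / 1e9:.1f} GB")
    weights_gb = 39.0  # e.g. a 70B model at ~4.5 bits per weight (illustrative)
    if weights_gb * 1e9 > free_bytes:
        print("Won't fit in VRAM - expect CPU offload and a big slowdown.")
else:
    print("No CUDA device - weights would sit in system RAM (or unified memory).")
```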