Prompt: Epic anime artwork of a wizard atop a mountain at night casting a cosmic spell into the dark sky that says "Stable Diffusion 3" made out of colorful energy
Announcing Stable Diffusion 3 in early preview, our most capable text-to-image model with greatly improved performance in multi-subject prompts, image quality, and spelling abilities.
While the model is not yet broadly available, today we are opening the waitlist for an early preview. As with previous models, this preview phase is crucial for gathering insights to improve performance and safety ahead of an open release. You can sign up to join the waitlist here.
The Stable Diffusion 3 suite of models currently ranges from 800M to 8B parameters. This approach aims to align with our core values and democratize access, providing users with a variety of options for scalability and quality to best meet their creative needs. Stable Diffusion 3 combines a diffusion transformer architecture with flow matching. We will publish a detailed technical report soon.
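The technical report is not yet out, so the exact formulation is unknown; but as a rough illustration of the flow-matching idea (in its rectified-flow form), a training loss might look like the sketch below. Here `model` is a placeholder for any network that predicts a velocity field from a noisy sample and a timestep; nothing about SD3's actual architecture is assumed.

```python
import torch

def flow_matching_loss(model, x1):
    """Conditional flow-matching loss sketch (rectified-flow variant).

    x1: a batch of clean data samples. The model is trained to predict
    the velocity v(x_t, t) that transports noise x0 toward data x1
    along the straight-line path x_t = (1 - t) * x0 + t * x1, whose
    true velocity is the constant x1 - x0.
    """
    x0 = torch.randn_like(x1)                             # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))  # per-sample time in [0, 1)
    xt = (1 - t) * x0 + t * x1                            # point on the interpolation path
    target = x1 - x0                                      # true velocity of the path
    pred = model(xt, t)                                   # model's velocity estimate
    return ((pred - target) ** 2).mean()                  # mean-squared regression loss
```

At sampling time, such a model is integrated from pure noise at t = 0 to data at t = 1 with an ODE solver; the straight-line paths are what distinguish this from the curved trajectories of classic diffusion.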
We believe in safe, responsible AI practices. This means we have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors. Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment. In preparation for this early preview, we've introduced numerous safeguards. By continually collaborating with researchers, experts, and our community, we expect to innovate further with integrity as we approach the model's public release.
Our commitment to ensuring generative AI is open, safe, and universally accessible remains steadfast. With Stable Diffusion 3, we strive to offer adaptable solutions that enable individuals, developers, and enterprises to unleash their creativity, aligning with our mission to activate humanity's potential.
If you'd like to explore using one of our other image models for commercial use prior to the Stable Diffusion 3 release, please visit our Stability AI Membership page to self host or our Developer Platform to access our API.
Discussions
https://news.ycombinator.com/item?id=39466630
https://old.reddit.com/r/StableDiffusion/comments/1ax6h0o/stable_diffusion_3_stability_ai/
Probably cucked, but there are different degrees. "Won't make porn of celebrities" is a much more reasonable limitation than "won't draw white people".
Wrong.
lmao it's open source so all the coomers will restore your taytay faps pronto, don't worry
Not really, because they choose what goes into the training data, and if they systematically avoid human anatomy to the point where even portraits look fricked, then there isn't much you can do with fine-tuning. SD1.5 is still miles better than SDXL for this reason alone.
No it's not lmao, finetuned SDXL is miles better even for coomer shit. The best 1.5 photoreal models still lack detail and look completely plastic and fake. The only downside of SDXL is it takes longer.
You just outed yourself as a poorcel with a shit GPU.
...then you can add it back by burning GPU hours on "fine-tuning" which is really just additional training.
maybe this time I can finally feel safe while prompting![:marseycomfy: :marseycomfy:](https://i.rdrama.net/e/marseycomfy.webp)