https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf
USA lost, China won, glory to the CCP!
Orange Site:
https://news.ycombinator.com/item?id=42843131
BREAKING: DeepSeek officially announces another open-source AI model, Janus-Pro-7B. This model generates images and beats OpenAI's DALL-E 3 and Stable Diffusion across multiple benchmarks. pic.twitter.com/FSJkelcaYP
— The Kobeissi Letter (@KobeissiLetter) January 27, 2025
NEWS: DeepSeek just dropped ANOTHER open-source AI model, Janus-Pro-7B. It's multimodal (can generate images) and beats OpenAI's DALL-E 3 and Stable Diffusion across GenEval and DPG-Bench benchmarks. This comes on top of all the R1 hype. The 🐋 is cookin' pic.twitter.com/yCmDQoke0f
— Rowan Cheung (@rowancheung) January 27, 2025
JUST IN: Another blow is coming from the Chinese DeepSeek AI. They have now launched a multimodal "Janus-Pro-7B" model with image input and output. pic.twitter.com/akEfi9Zyzq
— Megatron (@Megatron_ron) January 27, 2025
🚨 DeepSeek just dropped ANOTHER open-source AI model, Janus-Pro-7B. It's multimodal (can generate images) and beats OpenAI's DALL-E 3 and Stable Diffusion across GenEval and DPG-Bench benchmarks. pic.twitter.com/HVB1wBns1z
— Liang Wenfeng 梁文锋 (@LiangWenfeng_) January 27, 2025
DeepSeek open-sources Janus Pro, beating Stable Diffusion and OpenAI's DALL-E 3🤯 pic.twitter.com/C50jQGHOHl
— Casper Hansen (@casper_hansen_) January 27, 2025
WAIT A SECOND, DeepSeek just dropped Janus 7B (MIT Licensed) - multimodal LLM (capable of generating images too) 🔥 pic.twitter.com/2kzaCJfLPt
— Vaibhav (VB) Srivastav (@reach_vb) January 27, 2025
https://boards.4chan.org/g/thread/104075936
https://boards.4chan.org/g/thread/104077316
https://boards.4chan.org/g/thread/104077293
https://old.reddit.com/r/LocalLLaMA/comments/1ibd5x0/deepseek_releases_deepseekaijanuspro7b_unified/
https://old.reddit.com/r/singularity/comments/1ibe4j7/deepseek_drops_multimodal_januspro7b_model/
https://old.reddit.com/r/DeepSeek/comments/1ibfed1/news_deepseek_just_dropped_another_opensource_ai/
https://old.reddit.com/r/singularity/comments/1ibdyou/deepseek_just_dropped_janus_7b_mit_licensed/
https://hexbear.net/post/4363677?scrollToComments=false
https://hexbear.net/post/4364578?scrollToComments=false
BlueSky:
DeepSeek has released a new set of multimodal AI models that it claims can outperform OpenAI's DALL-E 3. The models are part of a new model family that DeepSeek is calling Janus-Pro. They range in size from 1 billion to 7 billion parameters. Read more here: tcrn.ch/40Bc5Qm
— TechCrunch (@techcrunch.com) 2025-01-27T21:38:20.589Z
https://rdrama.net/post/337205/deepseek-drops-multimodal-januspro7b-model-beating
!codecels any way to run this locally yet?
!codecels someone help Ed get his chinesium AI porn. You know it's going to generate horizontal vajayays right?
Do you promise?
Factcheck: This claim has been confirmed as correct by experts.
It won't work if it's not in their training set.
That's what people said about SDXL and now it generates the best twink gappings!
Treason.
It should be VVestern.
You can run the DeepSeek R1 reasoning model locally using LM Studio or something else, but this is some Stable Diffusion shit and I hate it. You can try it here:
https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B
https://huggingface.co/spaces/NeuroSenko/Janus-Pro-7b
!codecels From my random tests, it seems to be better at figuring out what images are than at generating them; the generated images are of shit resolution. Maybe it will be better if you run it locally.