If you missed the last few posts on this, Sora is the newest product from OpenAI that generates video from text prompts.
For a first public preview, it's pretty impressive. You can tell it's AI when you look for it, but your average infinite scroller won't suspect a thing.
https://openai.com/sora
A few sample videos.
On to the reaction posts.
The AI subs are gaga over it, as expected.
https://old.reddit.com/r/StableDiffusion/comments/1arm6df/sora_by_openai_looks_incredible/
https://old.reddit.com/r/singularity/comments/1arm2g5/introducing_sora_our_texttovideo_model_openai/
https://old.reddit.com/r/OpenAI/comments/1arm4ff/things_are_moving_way_too_fast_openai_on_x/
https://old.reddit.com/r/ChatGPT/comments/1arm7rf/sora_by_openai_looks_incredible_txt_to_video/
https://old.reddit.com/r/singularity/comments/1arm58d/we_are_so_back/
VFXcels
https://old.reddit.com/r/vfx/comments/1arn9t5/open_ai_announces_sora_text_to_video_ai_generation/
Sorta mixed reactions
https://old.reddit.com/r/aiwars/comments/1armtr3/openai_announces_sora_texttovideo_model/
Doomposting
https://old.reddit.com/r/Futurology/comments/1arnv9f/sora_creating_video_from_text/
Schizoposting
https://old.reddit.com/r/UFOs/comments/1aroo5e/video_of_potential_uaps_are_not_going_to_be/
Flabbergasted and impressed
https://old.reddit.com/r/MachineLearning/comments/1armmng/d_openai_sora_video_gen_how/
Filmcels
https://old.reddit.com/r/cinematography/comments/1artwbl/sora_makes_me_depressed_love_the_art_of/
https://old.reddit.com/r/editors/comments/1arrmbi/openai_announces_sora_today_introducing_their/
RSP
https://old.reddit.com/r/redscarepod/comments/1armuvg/its_over/
https://old.reddit.com/r/redscarepod/comments/1arummz/this_sora_ai_stuff_is_awful/
R-slurposting
https://old.reddit.com/r/conspiracy_commons/comments/1aro45h/its_over/
A few controversial posts.
https://old.reddit.com/r/vfx/comments/1arrvua/new_sora_ai_that_thing_has_the_power_to_replace/
https://old.reddit.com/r/vfx/comments/1art200/its_now_or_never/
https://old.reddit.com/r/wallstreetbets/comments/1arrumb/nvidia_will_cross_2000/ r-slurs wildly speculating
https://old.reddit.com/r/singularity/comments/1arnjtt/why_are_all_these_sora_videos_of_asian_people/
https://old.reddit.com/r/AskConservatives/comments/1arqo4n/thoughts_on_ai_and_especially_sora/
And many, many more https://old.reddit.com/search/?q=sora&sort=comments&t=day
Something for all! !fellas !redscarepod !kino !accelerationists !codecels
I always wondered what the AI companies were doing with all the money they get (I assumed it was buying coke and hookers), but this is really impressive. It's nice to see technology improving
Holodecks when? !trekkies !ss13
This type of model is hugely resource intensive because it basically builds each output by starting from pure noise and iteratively denoising it until it matches the prompt. A single sample can take dozens to thousands of denoising steps, and video multiplies that across every frame.
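For the curious, here's a toy sketch of that loop. The `denoiser` model is hypothetical and the update rule is a crude stand-in for the real DDPM/DDIM math, but the shape of it is right: one noise tensor, refined step by step.

```python
import numpy as np

def sample(denoiser, steps=50, shape=(3, 512, 512)):
    """Toy diffusion sampler: start from pure noise and iteratively
    denoise. The update rule is a crude stand-in for DDPM/DDIM."""
    x = np.random.randn(*shape)        # start from pure Gaussian noise
    for t in reversed(range(steps)):   # t = steps-1 ... 0
        eps = denoiser(x, t)           # model's estimate of the noise in x
        x = x - eps / steps            # step toward the clean image
    return x
```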
They've gotta be abstracting it further to make minute-long videos. This is a guess, but they're probably generating space-time "frames" now, say 512 * 512 * 5 seconds, then upscaling. Given the level of coherence, I'd be very surprised to learn it's frame by frame.
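If that guess is close, the video latent gets carved into spacetime patches instead of flat frames, so the model attends over space and time in one token sequence. A rough numpy sketch of the idea, with completely made-up shapes:

```python
import numpy as np

# Hypothetical latent video: (frames, height, width, channels)
latent = np.random.randn(120, 64, 64, 4)   # ~5 s at 24 fps, downscaled

# Carve it into spacetime patches of 4 frames x 8 x 8 latent pixels each
pt, ph, pw = 4, 8, 8
T, H, W, C = latent.shape
patches = (latent
           .reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
           .transpose(0, 2, 4, 1, 3, 5, 6)   # group the patch axes together
           .reshape(-1, pt * ph * pw * C))   # one row per spacetime patch
print(patches.shape)  # (30 * 8 * 8, 1024) = (1920, 1024) tokens
```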
Diffusion models are relatively cheap compared to transformers and such, but Sora uses a combined transformer/diffusion model, so it's very likely a huge resource hog.

Soon tho, just give it 4 years and we'll be running this shit in real time locally at 120 fps (AI-interpolated frames, obviously) with generated stereoscopy pairs for VR
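For reference, "combined transformer/diffusion" presumably means a diffusion transformer (DiT): the denoiser is a transformer over patch tokens instead of a U-Net. A bare-bones torch sketch with invented layer sizes:

```python
import torch
import torch.nn as nn

class TinyDiT(nn.Module):
    """Minimal diffusion-transformer denoiser: patch tokens in,
    predicted noise out. Real DiTs add timestep/text conditioning."""
    def __init__(self, patch_dim=1024, d_model=512, layers=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)
        block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(d_model, patch_dim)   # predict noise per patch

    def forward(self, patch_tokens):                # (batch, n_patches, patch_dim)
        h = self.backbone(self.embed(patch_tokens))
        return self.head(h)

noisy = torch.randn(1, 1920, 1024)   # token count from the sketch upthread
print(TinyDiT()(noisy).shape)        # torch.Size([1, 1920, 1024])
```

Attention cost grows with the square of the token count, which is why longer, higher-resolution video gets expensive fast.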
Generating a sequence is less expensive than generating the same number of independent images; they're not randomly generating images from the prompt and testing whether each one is consistent with the previous frame.
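A sketch of the difference, reusing the toy `sample` loop and hypothetical `denoiser` from upthread: the frames come out of one joint denoising pass over the whole latent, so temporal consistency is learned rather than rejection-sampled per frame.

```python
# Independent frames: 120 separate samples, no shared state -> flicker
frames = [sample(denoiser, shape=(4, 64, 64)) for _ in range(120)]

# Joint sequence: one sample over the whole (T, C, H, W) latent, so the
# denoiser (e.g. via temporal attention) sees every frame at every step
video = sample(denoiser, shape=(120, 4, 64, 64))
```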