This is video of someone playing it. It's 100% generated images @ 20 FPS with only a 3-second "memory" of the previous frames and user input, which is enough to infer literally everything else for long periods of gameplay. There are no polygons or rendering going on, it's literally making shit up as it goes along based on the model's neural network training or some shit blah blah blah
Article w/more videos:
https://gamengen.github.io/
Diffusion Models Are Real-Time Game Engines
Full PDF Paper:
https://arxiv.org/pdf/2408.14837
ABSTRACT:
We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. GameNGen can interactively simulate the classic game DOOM at over 20 frames per second on a single TPU. Next frame prediction achieves a PSNR of 29.4, comparable to lossy JPEG compression. Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation. GameNGen is trained in two phases: (1) an RL-agent learns to play the game and the training sessions are recorded, and (2) a diffusion model is trained to produce the next frame, conditioned on the sequence of past frames and actions. Conditioning augmentations enable stable auto-regressive generation over long trajectories.
(...)
Summary. We introduced GameNGen, and demonstrated that high-quality real-time game play at 20 frames per second is possible on a neural model. We also provided a recipe for converting an interactive piece of software such as a computer game into a neural model.
Limitations. GameNGen suffers from a limited amount of memory. The model only has access to a little over 3 seconds of history, so it's remarkable that much of the game logic is persisted for drastically longer time horizons. While some of the game state is persisted through screen pixels (e.g. ammo and health tallies, available weapons, etc.), the model likely learns strong heuristics that allow meaningful generalizations. For example, from the rendered view the model learns to infer the player's location, and from the ammo and health tallies, the model might infer whether the player has already been through an area and defeated the enemies there. That said, it's easy to create situations where this context length is not enough. Continuing to increase the context size with our existing architecture yields only marginal benefits (Section 5.2.1), and the model's short context length remains an important limitation. The second important limitation is the remaining differences between the agent's behavior and that of human players. For example, our agent, even at the end of training, still does not explore all of the game's locations and interactions, leading to erroneous behavior in those cases.
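To make the "next frame prediction" part concrete: at inference time it's just an autoregressive loop. Keep a sliding window of the last few seconds of frames plus your button presses, ask the diffusion model to sample one more frame, slide the window, repeat 20 times a second (64 frames / 20 FPS = 3.2 s, which lines up with the "little over 3 seconds" above). A minimal Python sketch; the denoise_next_frame stub, resolution, and exact window size are my guesses for illustration, not the paper's actual code:

```python
from collections import deque
import numpy as np

FPS = 20
CONTEXT_FRAMES = 64            # 64 / 20 FPS = 3.2 s of "memory" (assumed window size)
H, W = 240, 320                # illustrative resolution, not necessarily the paper's

def denoise_next_frame(frames, actions, rng):
    """Stand-in for the trained diffusion model: condition on the context
    window of past frames + actions, run the reverse-diffusion sampler,
    return one predicted frame. Stubbed with random pixels here."""
    return rng.random((H, W, 3), dtype=np.float32)

def play(get_action, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    # Fixed-length history: anything older than the window is simply forgotten.
    frames = deque([np.zeros((H, W, 3), np.float32)] * CONTEXT_FRAMES,
                   maxlen=CONTEXT_FRAMES)
    actions = deque([0] * CONTEXT_FRAMES, maxlen=CONTEXT_FRAMES)
    for _ in range(n_steps):
        actions.append(get_action())               # newest button press
        frame = denoise_next_frame(list(frames), list(actions), rng)
        frames.append(frame)                       # output becomes input: autoregression
        yield frame

# "Play" three steps while holding forward (action id 1 is arbitrary here).
for f in play(lambda: 1, n_steps=3):
    print(f.shape)      # (240, 320, 3)
```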
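And the "conditioning augmentations" bit from the abstract is the trick that keeps long rollouts from melting into noise: during training, the context frames get corrupted with Gaussian noise (and the model is told the noise level), so at inference it learns to clean up its own slightly-off previous outputs instead of letting errors compound frame after frame. A rough sketch of that idea, with made-up names and noise range:

```python
import numpy as np

def augment_context(context_frames, rng, max_sigma=0.7):
    """Training-time noise augmentation (hypothetical parameterization):
    corrupt the conditioning frames so the model learns to tolerate the
    imperfect frames it will see during autoregressive rollout. The sampled
    noise level is also returned so it can be fed to the model."""
    sigma = rng.uniform(0.0, max_sigma)
    noisy = [f + sigma * rng.standard_normal(f.shape).astype(f.dtype)
             for f in context_frames]
    return noisy, sigma

rng = np.random.default_rng(0)
ctx = [np.zeros((240, 320, 3), np.float32)] * 4
noisy_ctx, sigma = augment_context(ctx, rng)
```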
!oldstrags !g*mers @pizzashill
In AI Nvidia future, game plays you
Big if true (I have no idea what any of this means)
Computer doesn't draw images based on its data of what the world looks like. Computer draws the next frame based on the previous few frames while you're pressing buttons.
This!