In late March, NVIDIA announced a new technology based on Neural Radiance Fields (NeRF). It promises to transform a collection of two-dimensional still images into a digital 3D scene in seconds, something made possible by improvements to the artificial intelligence at the company's disposal. Let's look in detail at how it works and what the results are.
NVIDIA Research turns 2D photos into 3D scenes in the blink of an eye using artificial intelligence, as demonstrated in a video.
According to the company, the result is a neural rendering model that learns a high-resolution 3D scene in seconds and can render images of that scene in milliseconds, all with artificial intelligence (AI). Known as inverse rendering, the process NVIDIA has refined uses AI to approximate how light behaves in the real world, allowing developers to reconstruct a 3D scene from a handful of 2D images taken from different angles.
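To make the idea concrete, here is a minimal, illustrative sketch of how a NeRF-style model produces a pixel: a neural network maps a 3D position and viewing direction to a color and a density, and a camera ray is rendered by accumulating those outputs along its length. The tiny randomly initialized MLP below is a hypothetical stand-in for a trained network (a real NeRF would be trained on the input photos), and all names and parameters here are assumptions for illustration, not NVIDIA's implementation.

```python
import numpy as np

# Hypothetical stand-in for a trained NeRF: a tiny MLP with random weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 32))   # input: 3D position + 3D view direction
W2 = rng.normal(size=(32, 4))   # output: RGB color + volume density

def nerf_mlp(position, direction):
    """Toy network f(position, direction) -> (rgb, density)."""
    x = np.concatenate([position, direction])
    h = np.tanh(x @ W1)
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))   # sigmoid: colors in [0, 1]
    sigma = np.log1p(np.exp(out[3]))       # softplus: non-negative density
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume rendering: accumulate color at sample points along a camera ray."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed along the ray
    for t in ts:
        rgb, sigma = nerf_mlp(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * dt)   # opacity of this ray segment
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)  # one RGB pixel value, each channel in [0, 1]
```

Training would adjust the network's weights until rays rendered this way reproduce the input photographs; the untrained network above only shows the rendering step itself.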
NVIDIA's research team has developed an approach that performs this task almost instantly, making it one of the first models of its kind to combine lightning-fast neural network training with fast rendering. In homage to the early days of Polaroid imaging, NVIDIA Research recreated an iconic photo of Andy Warhol taking an instant photo, turning it into a 3D scene using Instant NeRF.
NVIDIA's Instant NeRF can be used to create avatars or scenes for the metaverse, to capture videoconference participants in 3D, or to reconstruct scenes for 3D digital maps. David Luebke, vice president of graphics research at NVIDIA, says:
While traditional 3D representations such as polygon meshes are akin to vector images, NeRFs are like bitmap images: they densely capture the way light radiates from an object or scene
According to NVIDIA, Instant NeRF is the fastest NeRF technique developed to date. The model can render a 3D scene in about ten milliseconds.