An A.I. algorithm that imagines what photos can’t show
By now this may sound like something many apps can already do, but this new technology can produce a 360° view from a handful of 2D photos, far faster and at higher quality than previous approaches.
The algorithm, called Instant NeRF (Neural Radiance Field), uses an "inverse rendering" approach: it infers how light behaves in a scene from a set of 2D photos taken at different angles, and reconstructs the 3D environment from that information.
The neural network fills in the gaps of the 3D environment, predicting the color of light arriving from any point and any direction, which is what gives the results their striking realism.
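At the core of any NeRF is a volume-rendering step: the color of each pixel is obtained by marching a camera ray through the scene and compositing the colors and densities the network predicts at sample points along it. As a rough illustration (a minimal sketch, not NVIDIA's implementation; the sample values here would normally come from the trained network):

```python
import math

def render_ray(colors, densities, deltas):
    """Alpha-composite samples along one camera ray (the core NeRF
    volume-rendering step).

    colors:    list of (r, g, b) predicted at each sample point
    densities: per-sample volume density (sigma) predicted by the network
    deltas:    distance between consecutive samples along the ray
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed by the volume
    for (r, g, b), sigma, delta in zip(colors, densities, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this segment
        weight = transmittance * alpha           # its contribution to the pixel
        rgb = [c + weight * ci for c, ci in zip(rgb, (r, g, b))]
        transmittance *= 1.0 - alpha             # light remaining for later samples
    return rgb
```

A single very dense red sample renders as a red pixel, while a ray through empty space (zero density everywhere) stays black; training the network amounts to adjusting the predicted colors and densities until rendered pixels match the input photos.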
Thanks to deep learning, anyone can get a 3D view quickly: Instant NeRF speeds up rendering by several orders of magnitude. It does so with a technique called multi-resolution hash grid encoding, developed by NVIDIA and optimized to run on NVIDIA GPUs.
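The idea behind the encoding is to replace most of the heavy neural network with fast lookups: each 3D point is mapped, at several grid resolutions at once, into a small hash table of feature vectors that are trained alongside a tiny network. A toy sketch of the lookup (the real method trilinearly interpolates eight grid corners and trains the table entries; here the nearest corner is used and the features are fixed placeholders):

```python
import random

def spatial_hash(ix, iy, iz, table_size):
    # XOR of integer grid coordinates scaled by large primes,
    # as in the Instant NGP paper by Mueller et al.
    return (ix ^ (iy * 2654435761) ^ (iz * 805459861)) % table_size

def hash_encode(point, num_levels=4, base_res=16, growth=2.0,
                table_size=2 ** 10, n_features=2):
    """Toy multi-resolution hash encoding: at each resolution level,
    the point's enclosing grid vertex indexes a hash table of feature
    vectors, and the per-level features are concatenated."""
    x, y, z = point  # coordinates assumed normalized to [0, 1]
    encoding = []
    for level in range(num_levels):
        res = int(base_res * growth ** level)       # finer grid each level
        ix, iy, iz = int(x * res), int(y * res), int(z * res)
        idx = spatial_hash(ix, iy, iz, table_size)
        # Derive a deterministic placeholder feature vector from the
        # table slot; in the real method these entries are learned.
        rng = random.Random((level, idx))
        encoding.extend(rng.uniform(-1.0, 1.0) for _ in range(n_features))
    return encoding
```

Because coarse levels capture the rough shape and fine levels capture detail, the network that consumes this encoding can be tiny, which is a large part of why Instant NeRF trains in seconds rather than hours.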
What is especially striking about this technology is its ability to fill in areas the 2D photos don’t show. The approach can even handle occlusions, which occur when an object visible in one image is blocked in another by an obstruction such as a pillar.
This technology could be applied to robots and self-driving cars, helping them better judge the size and shape of real-world objects. It could also be used to generate digital renderings for interior design, or for architecture in general.
NeRF can also be used to generate avatars or virtual scenes, capture video conference participants and their surroundings in 3D, and recreate scenes for 3D digital maps.
In a tribute to the early days of Polaroid photography, NVIDIA Research recreated an iconic photo of Andy Warhol taking an instant photo, turning it into a 3D scene with Instant NeRF.
Maybe in the future we’ll be able to reconstruct environments from partial photos and videos, even ones taken in a different era. It would be a sort of time travel.