Phys.org November 19, 2024
An international team of researchers (Canada, USA – Stanford University) presented an imaging and neural rendering technique that synthesizes videos of light propagating through a scene from novel, moving camera viewpoints. Using a new ultrafast imaging setup, they captured a first-of-its-kind multi-viewpoint video dataset with picosecond-level temporal resolution. Alongside this dataset, they introduced an efficient neural volume rendering framework based on a transient field: a mapping from a 3D point and 2D direction to a high-dimensional, discrete-time signal representing time-varying radiance at ultrafast timescales. The method renders a range of complex effects, including scattering, specular reflection, refraction, and diffraction. The team also demonstrated removal of viewpoint-dependent propagation delays via a time-warping procedure, rendering of relativistic effects, and video synthesis of the direct and global components of light transport. Open Access TECHNICAL ARTICLE
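The time-warping idea mentioned above can be illustrated with a small sketch: light emitted from a scene point reaches different camera positions after different travel times, so a captured per-ray transient (a discrete-time radiance signal) carries a viewpoint-dependent delay. Subtracting the camera-to-point travel time, expressed in time bins, aligns the signal to a viewpoint-independent timeline. The bin width, function names, and array layout below are hypothetical choices for illustration, not the authors' implementation.

```python
import numpy as np

C = 299_792_458.0     # speed of light in vacuum, m/s
BIN_WIDTH_PS = 4.0    # hypothetical temporal bin width, picoseconds

def propagation_delay_bins(point, camera, bin_width_ps=BIN_WIDTH_PS):
    """Camera-to-point light travel time, expressed in discrete time bins."""
    dist = np.linalg.norm(np.asarray(point, float) - np.asarray(camera, float))
    delay_s = dist / C
    return delay_s / (bin_width_ps * 1e-12)

def time_warp(transient, point, camera):
    """Shift a per-ray transient earlier by the camera-to-point delay,
    so the signal's timing no longer depends on the viewpoint."""
    shift = int(round(propagation_delay_bins(point, camera)))
    warped = np.zeros_like(transient)
    if shift < len(transient):
        warped[: len(transient) - shift] = transient[shift:]
    return warped

# Toy usage: a pulse recorded at bin 15, with a point placed exactly
# 10 bins of light travel away from the camera at the origin.
transient = np.zeros(32)
transient[15] = 1.0
point = (C * BIN_WIDTH_PS * 1e-12 * 10, 0.0, 0.0)
warped = time_warp(transient, point, (0.0, 0.0, 0.0))
```

After warping, the pulse sits 10 bins earlier, at the time the light actually left the scene point rather than the time it arrived at the sensor.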