Novel AI algorithm captures photons in motion

Phys.org  November 19, 2024
An international team of researchers (Canada – University of Toronto; USA – Stanford University) presented an imaging and neural rendering technique that synthesizes videos of light propagating through a scene from novel, moving camera viewpoints. They used a new ultrafast imaging setup to capture a first-of-its-kind, multi-viewpoint video dataset with picosecond-level temporal resolution. Combined with this dataset, they introduced an efficient neural volume rendering framework based on the transient field, defined as a mapping from a 3D point and 2D direction to a high-dimensional, discrete-time signal that represents time-varying radiance at ultrafast timescales. They rendered a range of complex effects, including scattering, specular reflection, refraction, and diffraction. They also demonstrated removing viewpoint-dependent propagation delays using a time warping procedure, rendering of relativistic effects, and video synthesis of direct and global components of light transport… read more. Open Access TECHNICAL ARTICLE
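To make the idea of a transient field concrete, here is a minimal, illustrative sketch of time-resolved ("transient") volume rendering in Python. It assumes a toy field that maps a 3D point and a 2D viewing direction to a density and a discrete-time radiance signal, and it shifts each sample's contribution by its camera-to-point time of flight, which is the source of the viewpoint-dependent propagation delays mentioned above. All names and parameters here (transient_field, render_transient_ray, C, N_BINS) are hypothetical placeholders, not the authors' code.

```python
import numpy as np

C = 1.0       # assumed: speed of light in scene units per time bin
N_BINS = 128  # assumed: number of discrete time bins in the transient signal

def transient_field(x, d):
    """Toy stand-in for a learned network: returns (density, time-binned radiance).

    x: (3,) point in space, d: (2,) direction (azimuth, elevation).
    A real system would evaluate a neural network here.
    """
    density = np.exp(-np.linalg.norm(x))                 # fake density falloff
    radiance = np.zeros(N_BINS)
    peak = int(np.clip(np.linalg.norm(x) * 10, 0, N_BINS - 1))
    radiance[peak] = 1.0                                  # fake emission pulse
    return density, radiance

def render_transient_ray(origin, direction, n_samples=64, t_far=4.0):
    """Accumulate a time-resolved radiance signal along one camera ray.

    Each sample's radiance is delayed by its distance-of-flight (distance / C)
    before alpha compositing, producing viewpoint-dependent propagation delays.
    """
    direction = direction / np.linalg.norm(direction)
    ts = np.linspace(0.0, t_far, n_samples)
    dt = ts[1] - ts[0]
    dir2d = np.array([np.arctan2(direction[1], direction[0]),
                      np.arcsin(direction[2])])

    transmittance = 1.0
    out = np.zeros(N_BINS)
    for t in ts:
        x = origin + t * direction
        sigma, radiance = transient_field(x, dir2d)
        alpha = 1.0 - np.exp(-sigma * dt)
        delay_bins = int(round(t / C))      # camera-to-sample time of flight
        shifted = np.roll(radiance, delay_bins)
        shifted[:delay_bins] = 0.0          # no wrap-around in time
        out += transmittance * alpha * shifted
        transmittance *= 1.0 - alpha
    return out

signal = render_transient_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(signal.shape, signal.sum())
```

In this sketch, a "time warping" step would simply subtract the delay_bins shift per sample so that light appears at the time it left the source rather than the time it reached the camera.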

A scene rendered using videos from an ultra-high-speed camera shows a pulse of light travelling through a pop bottle… Credit: University of Toronto.
