Paper Title
Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination
Paper Authors
Paper Abstract
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair. Previous approaches for predicting global illumination from images either predict just a single illumination for the entire scene, or separately estimate the illumination at each 3D location without enforcing that the predictions are consistent with the same 3D scene. Instead, we propose a deep learning model that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume. Our model is trained without any ground truth 3D data and only requires a held-out perspective view near the input stereo pair and a spherical panorama taken within each scene as supervision, as opposed to prior methods for spatially-varying lighting estimation, which require ground truth scene geometry for training. We demonstrate that our method can predict consistent spatially-varying lighting that is convincing enough to plausibly relight and insert highly specular virtual objects into real images.
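The abstract mentions that, once a 3D volumetric RGBA model is predicted, "standard volume rendering" yields the incident illumination at any 3D location. The core of that step is front-to-back alpha compositing of the RGBA samples along each ray cast from the query point. The sketch below is illustrative only, not the paper's implementation; the function name, array shapes, and the assumption of pre-sorted near-to-far samples are all my own.

```python
import numpy as np

def composite_ray(rgb, alpha):
    """Front-to-back alpha compositing of RGBA samples along one ray.

    rgb:   (N, 3) colors of the N samples, ordered near to far.
    alpha: (N,)   opacities in [0, 1].
    Returns the (3,) RGB radiance arriving along the ray.
    """
    # Transmittance reaching each sample: product of (1 - alpha)
    # over all closer samples (1.0 for the first sample).
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha[:-1])])
    weights = trans * alpha  # per-sample contribution weights
    return (weights[:, None] * rgb).sum(axis=0)
```

Casting one such ray per direction on a sphere around a query point would assemble an environment map, i.e. the spatially-varying incident illumination at that point. Note how a fully opaque sample occludes everything behind it, which is what makes the predicted illumination consistent with a single 3D scene.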