We present FLARE, a feed-forward model designed to infer high-quality camera poses and 3D geometry from uncalibrated sparse-view images (i.e., as few as 2-8 inputs), a challenging yet practical setting in real-world applications. Our solution features a cascaded learning paradigm with camera pose serving as the critical bridge, recognizing its essential role in mapping 3D structures onto 2D image planes. Concretely, FLARE starts with camera pose estimation, whose results condition the subsequent learning of geometric structure and appearance, optimized through the objectives of geometry reconstruction and novel-view synthesis. Trained on large-scale public datasets, our method delivers state-of-the-art performance in pose estimation, geometry reconstruction, and novel-view synthesis, while maintaining high inference efficiency (less than 0.5 seconds).
Given uncalibrated sparse views, our model infers high-quality camera poses, geometry, and appearance in a single feed-forward pass. We use camera poses as proxies to guide subsequent geometry and appearance learning. Given initial pose estimates, we first compute camera-centric geometry, then project it into a global scene representation. Finally, we form 3D Gaussians on top of the scene geometry to enable photo-realistic novel-view synthesis, as sketched below.
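To make the data flow concrete, here is a minimal PyTorch-style sketch of the cascaded pipeline. The class and module names (CascadedSparseViewModel, pose_head, geometry_head, gaussian_head) and the toy MLP blocks are illustrative assumptions, not the released FLARE architecture; the sketch only shows how pose estimates condition camera-centric geometry, which is then mapped into a global frame and decorated with 3D Gaussian parameters.

```python
# Minimal sketch of the cascaded feed-forward pipeline (assumed placeholder
# modules; the real model uses learned transformer heads, omitted here).
import torch
import torch.nn as nn


def mlp(d_in, d_out, d_hidden=256):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                         nn.Linear(d_hidden, d_out))


class CascadedSparseViewModel(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = mlp(3, feat_dim)            # stand-in per-pixel image encoder
        self.pose_head = mlp(feat_dim, 7)          # quaternion (4) + translation (3)
        self.geometry_head = mlp(feat_dim + 7, 3)  # pose-conditioned camera-centric points
        self.gaussian_head = mlp(feat_dim + 3, 11) # scale (3) + rotation (4) + opacity (1) + rgb (3)

    def forward(self, images):
        # images: (V, H, W, 3) uncalibrated sparse views (2-8 in the paper's setting)
        V, H, W, _ = images.shape
        feats = self.encoder(images.reshape(V, H * W, 3))   # per-pixel features
        pooled = feats.mean(dim=1)                          # per-view global feature

        # Stage 1: camera pose estimation (the "bridge").
        poses = self.pose_head(pooled)                      # (V, 7)

        # Stage 2: pose-conditioned camera-centric geometry.
        pose_ctx = poses[:, None, :].expand(V, H * W, 7)
        cam_points = self.geometry_head(torch.cat([feats, pose_ctx], dim=-1))

        # Stage 3: map camera-centric points into a shared global frame.
        quat, trans = poses[:, :4], poses[:, 4:]
        R = quaternion_to_matrix(quat)                      # (V, 3, 3)
        world_points = torch.einsum('vij,vnj->vni', R, cam_points) + trans[:, None, :]

        # Stage 4: per-point 3D Gaussian parameters for novel-view synthesis
        # (the splatting rasterizer used for rendering is omitted here).
        gaussians = self.gaussian_head(torch.cat([feats, world_points], dim=-1))
        return poses, world_points, gaussians


def quaternion_to_matrix(q):
    # Convert unit quaternions (w, x, y, z) to rotation matrices.
    q = q / q.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y),
        2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y),
    ], dim=-1).reshape(-1, 3, 3)


if __name__ == "__main__":
    model = CascadedSparseViewModel()
    views = torch.rand(4, 32, 32, 3)         # 4 toy views
    poses, points, gaussians = model(views)  # single feed-forward pass
    print(poses.shape, points.shape, gaussians.shape)
```

The cascade mirrors the description above: pose estimates are produced first and then fed as conditioning into the geometry stage, so the later geometry and appearance predictions are anchored to a consistent camera frame rather than learned independently.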
We gratefully acknowledge Tao Xu for his assistance with the evaluation, Yuanbo Xiangli for insightful discussions, and Xingyi He for his Blender visualization code. The webpage template is borrowed from https://signerf.jdihlmann.com/