Hello, I'm Lu Sang

I am a PhD student in the Computer Vision Group at TU Munich, advised by Prof. Daniel Cremers. This page gives an overview of the projects I have been personally involved in. Check out more work from our group at https://cvg.cit.tum.de/publications


News

  • December 2023: “Coloring the Past: Neural Historical Buildings Reconstruction from Archival Photography” is now on arXiv. Check the project page for details.
  • November 2023: “Erasing the Ephemeral: Joint Camera Refinement and Transient Object Removal for Street View Synthesis” is now on arXiv. Check the project page for details.
  • September 2023: “Enhancing Surface Neural Implicits with Curvature-Guided Sampling and Uncertainty-Augmented Representations” is now on arXiv.
  • January 2023: “High-Quality RGB-D Reconstruction via Multi-View Uncalibrated Photometric Stereo and Gradient-SDF” was accepted to WACV 2023.
  • April 2022: “Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction” was accepted to CVPR 2022.
  • March 2020: “Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach” was accepted to WACV 2020.
  • December 2019: Started my PhD at the Computer Vision Group at TUM under the guidance of Prof. Daniel Cremers.

Publications

Erasing the Ephemeral: Joint Camera Refinement and Transient Object Removal for Street View Synthesis

arXiv, 2023

Our method generates high-fidelity street view imagery while automatically handling dynamic moving objects, eliminating the need for manual annotations. We simultaneously refine the initial camera poses to improve the quality of the renderings.

Project Page | Paper | Code | Video | Citation


Coloring the Past: Neural Historical Buildings Reconstruction from Archival Photography

arXiv, 2023

An approach to reconstructing the geometry of historical buildings using volumetric rendering. We leverage dense point clouds as a geometric prior and introduce a color appearance embedding loss to recover the building's color from the limited available color images.

Project Page | Paper | Code | Video | Citation


Enhancing Surface Neural Implicits with Curvature-Guided Sampling and Uncertainty-Augmented Representations

arXiv, 2023

We introduce an uncertainty-augmented surface implicit representation together with a sampling strategy that accounts for the geometric characteristics of the input. The strategy efficiently computes differentiable geometric features, namely mean curvatures, to guide sampling during training.

Project Page | Paper | Code | Citation
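The core idea of curvature-guided sampling can be illustrated with a short sketch: given surface points with per-point mean-curvature estimates, draw training samples with probability proportional to curvature magnitude plus a uniform floor, so highly curved regions are sampled more densely while flat regions stay represented. The function name and the `floor` parameter below are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def curvature_guided_sample(points, mean_curvature, n_samples, floor=0.1, seed=None):
    """Draw samples with probability ~ |mean curvature| + uniform floor."""
    rng = np.random.default_rng(seed)
    # The floor keeps flat (zero-curvature) regions from being starved.
    weights = np.abs(mean_curvature) + floor
    probs = weights / weights.sum()
    idx = rng.choice(len(points), size=n_samples, p=probs)
    return points[idx]
```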


High-Quality RGB-D Reconstruction via Multi-View Uncalibrated Photometric Stereo and Gradient-SDF

WACV, 2023

A novel formulation of physically realistic image models compatible with a volumetric representation, enabling effective optimization on actual surface points.

Project Page | Paper | Code | Citation


Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction

CVPR, 2022

A novel representation for 3D geometry that combines the advantages of implicit and explicit representations. By storing at every voxel both the signed distance and its gradient vector, we enhance the capability of implicit representations with approaches originally formulated for explicit surfaces.

Project Page | Paper | Code | Citation
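The semi-implicit idea can be sketched in a few lines: each voxel stores not only a signed distance but also its gradient vector, so distances near the surface can be estimated first-order and points can be projected onto the zero level set without finite differencing. The class and method names below are hypothetical, a minimal NumPy sketch rather than the paper's implementation.

```python
import numpy as np

class GradientSDF:
    """Toy voxel grid storing a signed distance and its gradient per voxel."""

    def __init__(self, resolution, voxel_size):
        self.voxel_size = voxel_size
        self.dist = np.zeros((resolution,) * 3)         # signed distance per voxel
        self.grad = np.zeros((resolution,) * 3 + (3,))  # unit gradient per voxel

    def _nearest_voxel(self, point):
        idx = np.clip(np.round(point / self.voxel_size).astype(int),
                      0, self.dist.shape[0] - 1)
        return tuple(idx), idx * self.voxel_size

    def distance(self, point):
        # First-order Taylor estimate: d(x) ~ d_v + g_v . (x - c_v)
        idx, center = self._nearest_voxel(point)
        return self.dist[idx] + self.grad[idx] @ (point - center)

    def project_to_surface(self, point):
        # Step along the stored gradient onto the approximate zero level set.
        idx, _ = self._nearest_voxel(point)
        return point - self.distance(point) * self.grad[idx]
```

Because the gradient is stored explicitly, surface normals and projections come directly from voxel lookups, which is what lets explicit-surface techniques operate on the implicit grid.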


Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach

WACV, 2020

A novel approach to depth map super-resolution using multi-view uncalibrated photometric stereo. In practice, an LED light source attached to a commodity RGB-D sensor captures objects from multiple viewpoints with unknown motion. This non-static camera-to-object setup is modeled with a nonconvex variational approach, so that no calibration of lighting or camera motion is required thanks to the end-to-end joint optimization formulation. Solving the proposed variational model yields high-resolution depth, reflectance, and camera pose estimates, as we show on challenging synthetic and real-world datasets.

Project Page | Paper | Code | Video | Citation