Patrick Noras

Biography

I'm a Master's student in Computer Science at RPTU Kaiserslautern, specializing in 3D Computer Vision. Alongside my studies, I work as a graduate research assistant at the German Research Center for Artificial Intelligence (DFKI) in the Augmented Vision group under the supervision of Prof. Didier Stricker.

Prior to this, I completed my B.Sc. in Computer Science at RPTU Kaiserslautern. During my studies, I was an undergraduate software developer at SmartFactory KL and a visiting researcher at UNC Chapel Hill, where I worked under the mentorship of Prof. Roni Sengupta, supported by a DAAD-PROMOS scholarship.

Research

My primary research interests lie in image-based rendering and its applications in 3D reconstruction and inverse rendering. I predominantly work with 3D Gaussian Splatting (3DGS) but also have experience with Neural Radiance Fields (NeRFs).

My current research focuses on inverse rendering with Gaussian Splatting in sparse-view settings, with the goal of recovering accurate geometric and material properties of real-world scenes.

Featured Projects

GAINS

GAINS: Gaussian-based Inverse Rendering from Sparse Multi-View Captures

Patrick Noras, Jun Myeong Choi, Didier Stricker, Pieter Peers, Roni Sengupta

GAINS is a Gaussian-based inverse rendering framework that uses learning-based priors to improve geometry and material recovery from sparse multi-view captures, achieving high-quality relighting and novel-view synthesis.

Monocular360° GS

Monocular360° GS

This project introduces Monocular360° GS, a method for monocular panoramic image-based rendering that generates parallax from partial ground-truth views and uses inpainting to handle occlusions, achieving improved results on real-world scenes.

Bachelor's Thesis

Performance and Accuracy Assessment of Nvidia's Omniverse Isaac Sim for Generating Synthetic Data from Real-world Scenarios

I created this project as part of my undergraduate thesis to evaluate NVIDIA's Omniverse Isaac Sim for generating realistic synthetic stereo-camera images and LiDAR point clouds, comparing the simulated sensor data against recordings of a real-world scene.