In recent years, virtual and augmented reality have gained widespread attention because of newly developed head-mounted displays. For the first time, mass-market penetration seems plausible. Moreover, range sensors are on the verge of being integrated into smartphones, as evidenced by prototypes such as the Google Tango device, making ubiquitous online acquisition of 3D data a possibility. The combination of these two technologies, displays and sensors, promises applications in which users are directly immersed in an experience of 3D data that was just captured live. However, the captured data needs to be processed and structured before being displayed: sensor noise must be removed, normals must be estimated for local surface reconstruction, and so on. The challenge is that these operations involve large amounts of data and, to ensure a lag-free user experience, must be performed in real time, i.e., within a few milliseconds per frame.

In this proposal, we exploit the fact that dynamic point clouds captured in real time are often only relevant for display and interaction in the current frame and inside the current view frustum. In particular, we propose a new view-dependent data structure that permits efficient connectivity creation and traversal of unstructured data, which will speed up surface recovery, e.g., for collision detection. Classifying occlusions comes at no extra cost, allowing quick access to occluded layers in the current view. This enables new methods to explore and manipulate dynamic 3D scenes, going beyond interaction methods that rely on physics-based metaphors such as walking or flying, and lifting interaction with 3D environments to a “superhuman” level.
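To illustrate the general idea of a view-dependent organization of a point cloud, the sketch below bins points into a screen-space grid and depth-sorts each cell, so that the frontmost entry per cell approximates the visible surface and the remaining entries form occluded layers. This is a hypothetical minimal example for intuition only, not the data structure developed in this project; the function name, grid resolution, and matrix convention are all assumptions.

```python
import numpy as np

def build_view_layers(points, view_proj, width, height):
    """Bin a point cloud into a screen-space grid, depth-sorted per cell.

    Hypothetical sketch: points that project to the same pixel form
    front-to-back layers, so the first index per cell is the visible
    candidate and the rest are occluded layers in the current view.

    points    : (N, 3) array of world-space positions
    view_proj : (4, 4) combined view-projection matrix
    returns   : dict mapping (px, py) -> list of point indices, near to far
    """
    n = points.shape[0]
    homo = np.hstack([points, np.ones((n, 1))])   # homogeneous coordinates
    clip = homo @ view_proj.T                     # project to clip space
    w = clip[:, 3:4]
    ndc = clip[:, :3] / w                         # perspective divide
    # keep only points inside the current view frustum
    inside = np.all(np.abs(ndc) <= 1.0, axis=1) & (w[:, 0] > 0)
    px = ((ndc[:, 0] * 0.5 + 0.5) * (width - 1)).astype(int)
    py = ((ndc[:, 1] * 0.5 + 0.5) * (height - 1)).astype(int)
    cells = {}
    for i in np.nonzero(inside)[0]:
        cells.setdefault((px[i], py[i]), []).append(i)
    for idxs in cells.values():                   # near-to-far ordering
        idxs.sort(key=lambda i: ndc[i, 2])
    return cells
```

In such a layout, approximate connectivity between surface points falls out of grid adjacency, and the per-cell tail entries give the occluded layers at no extra cost beyond the depth sort.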

Funding

  • FWF P32418-N31

Team

Research Areas

  • This area uses concepts from applied mathematics and computer science to design efficient algorithms for the reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Example applications include collision detection, reconstruction, compression, occlusion-aware surface handling, and improved sampling conditions.
  • In this area, we focus on user experiences and rendering algorithms for virtual reality environments, including methods to navigate and collaborate in VR, foveated rendering, exploiting human perception, and simulating visual deficiencies.

Publications

8 Publications found:
2024
Diana Marin, Stefan Ohrhallinger, Michael Wimmer
Parameter-free connectivity for point clouds
In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1, HUCAPP and IVAPP, pages 92-102. February 2024.
[paper]
Conference Paper
Philipp Erler, Lizeth Fuentes-Perez, Pedro Hermosilla-Casajus, Paul Guerrero, Renato Pajarola, Michael Wimmer
PPSurf: Combining Patches and Point Convolutions for Detailed Surface Reconstruction
Computer Graphics Forum, 43(1):tbd-tbd, January 2024. [paper] [teaser] [Live System] [Repo (Github)]
Journal Paper with Conference Talk
Amal Dev Parakkat, Stefan Ohrhallinger, Elmar Eisemann, Pooran Memari
BallMerge: High-Quality Fast Surface Reconstruction via Voronoi Balls
Computer Graphics Forum, 43(2), 2024. [paper]
Journal Paper with Conference Talk
2021
Alexius Rait
Fast Radial Search for Progressive Photon Mapping
[thesis]
Bachelor Thesis
2020
Markus Schütz, Stefan Ohrhallinger, Michael Wimmer
Fast Out-of-Core Octree Generation for Massive Point Clouds
Computer Graphics Forum, 39(7):1-13, November 2020. [paper]
Journal Paper with Conference Talk
We present Points2Surf, a method to reconstruct an accurate implicit surface from a noisy point cloud. Unlike current data-driven surface reconstruction methods such as DeepSDF and AtlasNet, it is patch-based, which improves detail reconstruction; and unlike Screened Poisson Reconstruction (SPR), it learns a prior of low-level patch shapes, which improves reconstruction accuracy.
Note the quality of the reconstructions, both geometric and topological, compared with the original surfaces. Its ability to generalize to new shapes makes Points2Surf the first learning-based approach with significant generalization ability under both geometric and topological variations.
Philipp Erler, Paul Guerrero, Stefan Ohrhallinger, Michael Wimmer, Niloy Mitra
Points2Surf: Learning Implicit Surfaces from Point Clouds
In Computer Vision -- ECCV 2020, pages 108-124. October 2020.
[points2surf_paper] [short video]
Conference Paper
Markus Schütz, Gottfried Mandlburger, Johannes Otepka, Michael Wimmer
Progressive Real-Time Rendering of One Billion Points Without Hierarchical Acceleration Structures
Computer Graphics Forum, 39(2):51-64, May 2020. [paper]
Journal Paper with Conference Talk
Kurt Leimer, Andreas Winkler, Stefan Ohrhallinger, Przemyslaw Musialski
Pose to Seat: Automated design of body-supporting surfaces
Computer Aided Geometric Design, 79:1-1, April 2020. [image] [paper]
Journal Paper (without talk)
