Publications

96 Projects found:

The Research Cluster "Smart Communities and Technologies" (Smart CT) at TU Wien will provide the scientific underpinnings for next-generation complex smart city and community infrastructures. Cities are ever-evolving, complex cyber-physical systems of systems spanning a multitude of different areas. The initial concept of smart cities and communities started with cities utilizing communication technologies to deliver services to their citizens and evolved to using information technology to be smarter and more efficient about the utilization of their resources. In recent years, however, information technology has changed significantly, and with it the resources and areas addressable by a smart city have broadened considerably. They now cover areas like smart buildings, smart products and production, smart traffic systems and roads, autonomous driving, smart grids for managing energy hubs and electric car utilization, and urban environmental systems research.



3D spatialization creates the link between the internet of cities infrastructure and the actual 3D world in which a city is embedded in order to perform advanced computation and visualization tasks. Sensors, actuators and users are embedded in a complex 3D environment that is constantly changing. Acquiring, modeling and visualizing this dynamic 3D environment are the challenges we need to face using methods from Visual Computing and Computer Graphics. 3D Spatialization aims to make a city aware of its 3D environment, allowing it to perform spatial reasoning to solve problems like visibility, accessibility, lighting, and energy efficiency.


no funding
Contact: Michael Wimmer
started 1. December 1993 X-Mas Cards
Every year, a Christmas card showing aspects of our research projects is produced and sent out.
no funding
started 1. January 2000 VRVis Competence Center

The VRVis K1 Research Center is the leading application-oriented research center for virtual reality (VR) and visualization (Vis) in Austria and is internationally recognized. Extensive information about the VRVis Center can be found on its website.


FFG COMET K1-Zentrum
1. March 2020 - 29. February 2028 Advanced Computational Design
no funding

Point clouds are a quintessential 3D geometry representation format, and often the first model obtained from reconstructive efforts, such as LIDAR scans. IVILPC aims for fast, authentic, interactive, and high-quality processing of such point-based data sets. Our project explores high-performance software rendering routines for various point-based primitives, such as point sprites, Gaussian splats, surfels, and particle systems. Beyond conventional use cases, point cloud rendering also forms a key component of point-based machine learning methods and novel-view synthesis, where performance is paramount. We will exploit the flexibility and processing power of cutting-edge GPU architecture features to formulate novel, high-performance rendering approaches. The envisioned solutions will be applicable to unstructured point clouds for instant rendering of billions of points. Our research targets minimally invasive compression, culling methods, and level-of-detail techniques for point-based rendering to deliver high performance and quality on demand. We explore GPU-accelerated editing of point clouds, as well as common display issues on next-generation display devices. IVILPC lays the foundation for interaction with large point clouds in conventional and immersive environments. Its goal is an efficient data knowledge transfer from sensor to user, with a wide range of use cases spanning image-based rendering, virtual reality (VR) technology, architecture, the geospatial industry, and cultural heritage.
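To illustrate the kind of level-of-detail technique mentioned above, the following is a minimal, hypothetical sketch (not project code) of budget-limited octree refinement for point cloud rendering: nodes are refined only while their bounding cube projects to more than a threshold number of pixels and a global point budget is not exhausted. All names and the node layout are assumptions for illustration.

```python
import math

def projected_size(node_extent, distance, fov_y, screen_height):
    """Approximate on-screen size (in pixels) of a node's bounding cube."""
    if distance <= 0:
        return float("inf")
    slope = math.tan(fov_y / 2.0)
    return (node_extent / (slope * distance)) * (screen_height / 2.0)

def select_nodes(root, camera_pos, fov_y, screen_height,
                 min_pixels=100.0, point_budget=1_000_000):
    """Breadth-first traversal: accept a node, then refine into its
    children only while its projected size exceeds the pixel threshold
    and the global point budget is not exhausted."""
    selected, points_used = [], 0
    queue = [root]
    while queue:
        node = queue.pop(0)
        if points_used + node["num_points"] > point_budget:
            continue  # budget exhausted: do not load or refine this node
        selected.append(node)
        points_used += node["num_points"]
        d = math.dist(camera_pos, node["center"])
        if projected_size(node["extent"], d, fov_y, screen_height) > min_pixels:
            queue.extend(node.get("children", []))
    return selected
```

In a GPU renderer the same decision would drive which node buffers are resident and drawn each frame; the point budget keeps frame times stable regardless of scene size.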


no funding
Contact: Eduard Gröller
1. May 2023 - 30. April 2026 Joint Human-Machine Data Exploration

Wider research context

In many domains, such as biology, chemistry, medicine, and the humanities, large amounts of data exist. Visual exploratory analysis of these data is often not practicable due to their size and their unstructured nature. Traditional machine learning (ML) requires large-scale labeled training data and a clear target definition, which is typically not available when exploring unknown data. For such large-scale, unstructured, open-ended, and domain-specific problems, we need an interactive approach combining the strengths of ML and human analytical skills into a unified process that helps users to "detect the expected and discover the unexpected". 

Hypotheses

We hypothesize that humans and machines can learn jointly from the data and from each other during exploratory data analysis. We further hypothesize that this joint learning enables a new visual analytics approach that reveals how users' incrementally growing insights fit the data, which will foster questioning and reframing.

Approach

We integrate interactive ML and interactive visualization to learn about data and from data in a joint fashion. To this end, we propose a data-agnostic joint human-machine data exploration (JDE) framework that supports users in the exploratory analysis and the discovery of meaningful structures in the data. In contrast to existing approaches, we investigate data exploration from a new perspective that focuses on the discovery and definition of complex structural information from the data rather than primarily on the model (as in ML) or on the data itself (as in visualization).

Innovation

First, the conceptual framework of JDE introduces a novel knowledge modeling approach for visual analytics based on interactive ML that incrementally captures potentially complex, yet interpretable concepts that users expect or have learned from the data. Second, it proposes an intelligent agent that elicits information fitting the users' expectations and discovers what may be unexpected for the users. Third, it relies on a new visualization approach focusing on how the large-scale data fits the users' knowledge and expectations, rather than solely the data. Fourth, this leads to novel exploratory data analysis techniques -- an interactive interplay between knowledge externalization, machine-guided data inspection, questioning, and reframing.

Primary researchers involved

The project is a joint collaboration between researchers from TU Wien (Manuela Waldner) and the University of Applied Sciences St. Pölten (Matthias Zeppelzauer), Austria, who combine their complementary expertise in information visualization, visual analytics, and interactive ML.


FWF Stand-alone project P 36453

DOI: 10.55776/P36453

no funding
1. May 2020 - 30. April 2026 Modeling the World at Scale
Vision: reconstruct a model of the world that permits online level-of-detail extraction.
WWTF Wiener Wissenschafts-, Forschungs- und Technologiefonds ICT19-009 - € 578.450
1. January 2024 - 31. December 2024 Bringing Point Clouds to WebGPU
no funding
Webpage: potree.org
Contact: Markus Schütz
1. December 2021 - 30. November 2024 Unstable Bodies
no funding
Contact: Michael Wimmer
1. September 2021 - 31. August 2024 Smart automated check of BIM models with real buildings
no funding
Contact: Hannes Kaufmann
1. July 2022 - 30. June 2024 Ecosystem Modeling Using Rendering Methods
no funding
Contact: Michael Wimmer
1. December 2020 - 31. March 2024 Photogrammetry made easy
no funding
Contact: Philipp Erler
no funding
Contact: Hannes Kaufmann
1. September 2019 - 31. August 2023 Superhumans - Walking Through Walls

In recent years, virtual and augmented reality have gained widespread attention because of newly developed head-mounted displays. For the first time, mass-market penetration seems plausible. Also, range sensors are on the verge of being integrated into smartphones, evidenced by prototypes such as the Google Tango device, making ubiquitous online acquisition of 3D data a possibility. The combination of these two technologies – displays and sensors – promises applications where users can directly be immersed into an experience of 3D data that was just captured live. However, the captured data needs to be processed and structured before being displayed. For example, sensor noise needs to be removed, normals need to be estimated for local surface reconstruction, etc. The challenge is that these operations involve a large amount of data, and in order to ensure a lag-free user experience, they need to be performed in real time, i.e., in just a few milliseconds per frame. In this project, we exploit the fact that dynamic point clouds captured in real time are often only relevant for display and interaction in the current frame and inside the current view frustum. In particular, we propose a new view-dependent data structure that permits efficient connectivity creation and traversal of unstructured data, which will speed up surface recovery, e.g., for collision detection. Classifying occlusions comes at no extra cost, which will allow quick access to occluded layers in the current view. This enables new methods to explore and manipulate dynamic 3D scenes, overcoming interaction methods that rely on physics-based metaphors like walking or flying, lifting interaction with 3D environments to a “superhuman” level.


FWF P32418-N31 - 332.780,70 €
EU 7th Framework Program
Contact: Hannes Kaufmann
1. February 2020 - 31. October 2022 Virtual Reality Tennis Trainer

This research project focuses on 3D motion analysis and motion-learning methodologies. We design novel methods for the automated analysis of human motion using machine learning. These methods are applicable both in real training scenarios and in VR training setups. The results of our motion analysis can help players better understand the errors in their motion and improve their motion performance. Our motion analysis methods are based on professional knowledge from tennis experts at our partner company VR Motion Learning GmbH & Co KG. We use numerous motion features, including rotations, positions, velocities, and others, to analyze the motion.



Our goal is to use virtual reality as a scenario for learning correct tennis technique that transfers to real tennis play. For this purpose, we plan to combine our motion analysis with 3D error-visualization techniques and with novel motion-learning methodologies. These methodologies may lead to correct sports technique, improved performance, and injury prevention.
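As a minimal illustration of one of the motion features mentioned above (not the project's actual pipeline), velocities can be derived from tracked 3D joint positions by finite differences over the capture frame rate. The array layout and function name here are assumptions for illustration.

```python
import numpy as np

def joint_velocities(positions, fps):
    """Estimate per-joint velocities from tracked positions.

    positions: array of shape (T, J, 3) -- T frames, J joints, xyz in meters.
    fps: capture frame rate in frames per second.
    Returns an array of shape (T, J, 3) in meters per second."""
    dt = 1.0 / fps
    # np.gradient uses central differences for interior frames and
    # one-sided differences at the first and last frame.
    return np.gradient(positions, dt, axis=0)
```

Such per-joint velocity curves can then be compared against reference strokes to localize where a player's swing deviates from the expert motion.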


no funding
Contact: Hannes Kaufmann

The BRIDGES project aims at “bridging” the gap between interactive technologies and industries by bringing XR to the real world!

Our mission is to move towards the “democratisation” of XR by delivering a flexible and scalable solution that can be easily integrated and customised to the needs of a variety of different stakeholders.



For more information please refer to:



https://www.bridges-horizon.eu/





no funding
Contact: Hannes Kaufmann
This Marie Sklodowska-Curie project creates a leading European-wide doctoral college for research in Advanced Visual and Geometric Computing for 3D Capture, Display, and Fabrication.
Horizon 2020 Marie Sklodowska-Curie Actions (MSCA) ITN 813170
Contact: Michael Wimmer

Industrial building design is a design process where the successful implementation of each project depends on the collaborative decision making of multiple domain specialists: architects, engineers, production system planners, and building owners. Traditionally, such multi-collaborator workflows are subject to conflicting stakeholder goals and frequent changes in production processes, inevitably resulting in lengthy planning periods. This design process needs novel approaches to decision-making support that combine the ability to communicate design intent with real-time feedback on the impact of design decisions.

The BimFlexi project aims to accelerate BIM design processes for industrial buildings by using parametric modelling, multi-parameter optimization, and collaborative VR exploration and modification of models at early stages of building planning.


FFG
Contact: Hannes Kaufmann
1. January 2021 - 31. July 2022 Denoising for Real-Time Ray Tracing
no funding
Contact: Hannes Kaufmann
no funding
Contact: Hannes Kaufmann
1. September 2019 - 31. August 2021 Wohnen 4.0 - Digital Platform for Affordable Housing

This is a joint project with the civil engineering faculty and several companies. Its aim is the development of an integrated framework, “Housing 4.0”: a digital platform supporting integrated planning and project delivery by coupling various digital tools and databases, such as Building Information Modeling (BIM) for Design to Production and the Parametric Habitat Designer.



Our goal is to exploit the potential of BIM for modular, off-site housing assembly in order to improve planning and construction processes, reduce cost and construction time, and allow for mass customization.



The novel approach in this project is user involvement, which has been neglected in recent national and international projects on off-site, modular construction supported by digital technologies. A parametric design tool should allow different stakeholders to explore both high-level and low-level options and their impact on the construction project so that mutually optimal solutions can be found more easily.


FFG
Contact: Michael Wimmer
1. March 2016 - 31. August 2021 Computational Design of Geometric Materials
In this project we want to research novel materials whose mechanical behavior is described by the complexity of their geometry. Such “geometric materials” are cellular structures whose properties depend on the shape and the connectivity of their cells, while the actual physical substance they are built of is constant across the entire object.
WWTF Wiener Wissenschafts-, Forschungs- und Technologiefonds
1. January 2013 - 31. December 2020 Visual Computing: Illustrative Visualization

The central focus of our research is to understand visual abstraction. Understanding means 1. to identify meaningful visual abstractions, 2. to assess their effectiveness for human perception and cognition, and 3. to formalize them to be executable on computational machinery. The outcome of the investigation is useful for designing visualizations for a given scenario or need, whose effectiveness can be quantified so that the most understandable visualization design can be effortlessly determined. The science of visualization has already gained some understanding of structural visual abstraction. When, for example, illustrators, artists, and visualization designers convey certain structure, or visually express how things look, we can often provide a scientifically founded argument whether and why their expression is effective for human cognitive processing. What has not been given sufficient scientific attention is advancing the understanding of procedural visual abstraction, in other words investigating visual means that convey what things do or how things work. This missing piece of knowledge would be very useful for the visual depiction of processes and dynamics that are omnipresent in science, technology, and our everyday lives. The project therefore investigates theoretical foundations for the visualization of processes. Physiological processes that describe the complex machinery of biological life were picked as a target scenario. The reason for this choice is two-fold. Firstly, these processes are immensely complex, are carried out on various spatial and temporal levels simultaneously, and can be sufficiently understood only if all scales are considered. Secondly, physiological processes have been modeled as a result of intensive research in biology, systems biology, and biochemistry and are available in the form of digital data.
The goal is to visually communicate how physiological processes contribute to life by considering the limitations of human perceptual and cognitive capabilities. By solving individual visualization problems of this challenging target scenario, the research will provide first pieces of understanding of procedural visual abstractions that are generally applicable beyond the chosen target domain.



Prototype implementation of the developed technology is available at the GitHub repository:

https://github.com/illvisation/



cellVIEW



cellVIEW is a new tool that provides fast rendering of very large biological macromolecular scenes and is inspired by state-of-the-art computer graphics techniques.



Invited Talks



18.11.2016: Arthur J. Olson, Envisioning the Visible Molecular Cell

17.10.2016: Kwan-Liu Ma, Emerging Topics for Visualization Research: Part1, Part2

07.10.2016: Marc Streit, From Visual Exploration to Storytelling and Back Again

04.12.2015: Jan Palacek, Visual Analysis of Protein Complexes: From Protein Interaction to Cellular Processes

19.04.2013: Jan Koenderink, Shape in Visual Awareness


EU 7th Framework Program PCIG13-GA-2013-618680
WWTF Wiener Wissenschafts-, Forschungs- und Technologiefonds VRG11-010
Contact: Ivan Viola
1. December 2015 - 30. November 2020 Real-Time Shape Acquisition with Sensor-Specific Precision
Acquiring shapes of physical objects in real time and with guaranteed precision to the noise model of the sensor devices.
FWF P24600-N23
Contact: Michael Wimmer
1. November 2015 - 31. October 2020 Path-Space Manifolds for Noise-Free Light Transport
The project aims to develop new statistical and algorithmic methods to improve light-transport simulation for offline rendering.
FWF P27974
Contact: Michael Wimmer
The aim of this research project is the development of a construction-site-suitable augmented reality (AR) system, including a Remote Expert System and a BIM Closed-Loop data transfer system, for improving the quality of construction, building security, and energy efficiency, as well as increasing the efficiency of construction investigation.
FFG 867375
Contact: Hannes Kaufmann
The aim of the project is to increase resource and energy efficiency by coupling various digital technologies and methods for data capturing (geometry and material composition) and modelling (as-built BIM), as well as through gamification.

Collaborative project with several companies and institutes.
FFG 867314 - Stadt der Zukunft
Contact: Michael Wimmer
23. May 2016 - 30. June 2020 Smile Designer 3D
no funding
Contact: Eduard Gröller
KAUST Center Partnership Fund OSR-2019-CPF-4108.1 - Projektsumme: US$ 121.191,00
Contact: Eduard Gröller
This project will research methods to test and compare global-illumination algorithms as well as filtering algorithms, and also develop test data sets for this purpose.
FWF ORD 61
Contact: Michael Wimmer
The aim of this project is to investigate and contribute to shape modeling and geometry processing for personal fabrication, a trend that currently receives intensified attention in science and industry. Our goal is to contribute novel algorithmic solutions for fabrication-aware shape processing and interactive modeling.
FWF P27972-N31
14. November 2016 - 31. December 2019 Animated Cell Tab development
no funding
Contact: Ivan Viola
Integrative Visual Abstraction of Molecular Data
FWF I 2953-N31
Contact: Ivan Viola
The goal of this project is the development of a systematic evaluation methodology, the evaluation of AR controllers for industrial tasks using the developed methodology, and the publication of guidelines for developers of AR controllers, user interface designers, AR developers in general, and the AR research community.
FFG
Contact: Hannes Kaufmann
The aim of the project is to improve the digital fabrication of dental prosthetic devices. We employ state-of-the-art visualization techniques to enable a dental pretreatment preview for patients.
FFG 861168 - Basisprogramm Einzelprojekt

In living systems, one molecule is commonly involved in several distinct physiological functions. The roles of molecules are commonly summarized in pathway diagrams, which, however, are abstract and hierarchically nested and thus difficult to comprehend, especially for non-expert audiences. The primary goal of this visualization research is to intuitively support the comprehensive understanding of relationships among biological networks using interactively computed illustrations. Illustrations, especially in biology textbooks, are carefully designed to clearly present reactions between organs as well as interactions within cells. The automatic generation of illustrative visualizations of biological networks is thus the technical content of this proposal. Automatic generation of hand-drawn illustrations has been a challenging task due to the difficulty of algorithmically describing a human creative process, such as evaluating and selecting significant information and composing meaningful explanations in a visually plausible manner. The project also involves experts from several disciplines, including network and medical visualization, data mining, systems biology, and perceptual psychology. The result will provide a new direction for physiological process analysis and accelerate knowledge transfer not only among experts but also to the public. Acknowledgment: The project has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 747985.


Horizon 2020 Marie Sklodowska-Curie Actions (MSCA) 747985
Contact: Hsiang-Yun Wu
1. December 2015 - 31. December 2018 Visual Information Foraging on the Desktop
The goal of this project is to design and develop novel interactive visualization techniques to support knowledge workers in making sense of their unstructured, dynamic information collections.
FWF T 752-N30
Contact: Manuela Waldner
no funding
Contact: Hannes Kaufmann
1. April 2018 - 31. October 2018 Presentation of virtual machines
no funding
Contact: Hannes Kaufmann