Colloquy Cycle SS 2005

Current Schedule

In the summer term of 2005, the following talks will be organized by our Institute. The talks are partially financed by the "Arbeitskreis Graphische Datenverarbeitung" of the OCG (Austrian Computer Society).
Date | Speaker | Title | Time | Location
04.03.2005 | Frans Gerritsen (Philips, The Netherlands) | (cancelled due to travel problems) | 10:00-11:00 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor
09.03.2005 | Matthias Teschner (University of Freiburg) | Interacting Deformable Objects | 16:00-17:00 s.t. | TechGate Vienna, TechGate room 3.1
11.03.2005 | Denis Gracanin (Virginia Tech University) | Cluster computing and visualization | 10:00-11:00 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor
22.04.2005 | Timo Aila (Helsinki University of Technology) | Recent advances in physically-based soft shadows | 10:30-11:30 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor
29.04.2005 | Dr. Christopher Giertsen (Christian Michelsen Research, Bergen, Norway) | Immersive Virtual Reality for Oil Exploration and Production | 10:30-11:30 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor
04.05.2005 | David Laidlaw (Brown University) | To Spelunk or Not to Spelunk: Does Immersive Virtual Reality Help Science? | 10:00-10:45 s.t. | TechGate Vienna, Donau-City-Straße 1, 3rd floor, gate 3, 1220 Vienna
13.05.2005 | Reinhard Klein (Universität Bonn, Institut für Informatik II, Germany) | Techniques for real-time high quality rendering of complex models | 10:30-11:30 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor
24.05.2005 | Baoquan Chen (University of Minnesota, Minneapolis) | Visualizing Sparsely Scanned Outdoor Environments | 12:00-13:00 s.t. | Seminar room 122, Gußhausstraße 27-29, new EI building, staircase I, 3rd floor
10.06.2005 | Roman Durikovic (Comenius University, Slovakia) | Rendering tricks and techniques: Japanese lacquer, glare effect, and paints | 10:30-11:15 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor
24.06.2005 | Prof. Charles Hansen (University of Utah, USA) | Suppose the World was Piecewise Plastic? | 10:30-11:30 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor
05.09.2005 | David Luebke (University of Virginia) | The future is not framed | 14:00-15:00 s.t. | Seminar room 186, Favoritenstraße 9, 5th floor

Interacting Deformable Objects
Matthias Teschner, University of Freiburg
The realistic simulation of complex deformable objects at interactive rates comprises a number of challenging problems,
including deformable modeling, collision detection, and collision response.
1. The deformable modeling approach has to provide interactive update rates, while guaranteeing a stable simulation.
Furthermore, the approach has to represent objects with varying elasto-mechanical properties.
2. The collision detection algorithm has to handle geometrically complex objects, and also large numbers of potentially
colliding objects. In particular, the algorithm has to consider the dynamic deformation of all objects.
3. The collision response method has to handle colliding and resting contacts among multiple deformable objects in a robust and
consistent way. The method has to consider the fact that only sampled collision information is available due to the discretized
object representations and the discrete-time simulation.
The presentation discusses solutions to the aforementioned simulation aspects. Interactive software demonstrations illustrate
all models, algorithms, and their potential for applications such as surgery simulation.
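The abstract leaves the concrete algorithms open, but one widely used technique for the collision-detection stage of interactive deformable simulation is uniform spatial hashing of the deformed geometry. The sketch below is purely illustrative: the class, the cell-size heuristic, and the constants are assumptions, not taken from the talk.

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Minimal spatial hash: deformed vertices are binned into a uniform grid
    // each time step; only vertices sharing a cell (or a neighboring cell)
    // need exact collision tests, which keeps detection interactive even
    // when all objects deform.
    struct SpatialHash {
        float cellSize = 0.1f;  // heuristic: about the average edge length
        std::unordered_map<std::uint64_t, std::vector<int>> cells;

        std::uint64_t key(const Vec3& p) const {
            // Quantize to integer cell coordinates, then mix with large
            // primes (a common hash for 3D grids).
            std::int64_t i = (std::int64_t)std::floor(p.x / cellSize);
            std::int64_t j = (std::int64_t)std::floor(p.y / cellSize);
            std::int64_t k = (std::int64_t)std::floor(p.z / cellSize);
            return (std::uint64_t)((i * 73856093LL) ^ (j * 19349663LL) ^ (k * 83492791LL));
        }

        void rebuild(const std::vector<Vec3>& verts) {
            cells.clear();  // geometry changes every step, so re-hash from scratch
            for (int v = 0; v < (int)verts.size(); ++v)
                cells[key(verts[v])].push_back(v);
        }

        // Vertices that might collide with a query point (same cell only; a
        // full implementation would also visit the 26 neighboring cells).
        const std::vector<int>* candidates(const Vec3& p) const {
            auto it = cells.find(key(p));
            return it == cells.end() ? nullptr : &it->second;
        }
    };

Rebuilding the hash every step is what makes the scheme suitable for deforming geometry: no hierarchy has to be updated as vertices move.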
 
Cluster computing and visualization
Denis Gracanin, Virginia Tech University
 
Recent advances in physically-based soft shadows
Timo Aila, Helsinki University of Technology
This talk will cover two new algorithms for rendering physically-based soft shadows.
The first method replaces the hundreds of shadow rays commonly used in
stochastic ray tracers with a single shadow ray and a local
reconstruction of the visibility function. Compared to tracing the
shadow rays, our algorithm produces exactly the same image while
executing one to two orders of magnitude faster in the test scenes
used. Our first contribution is a two-stage method for quickly
determining the silhouette edges that overlap an area light source, as
seen from the point to be shaded. Secondly, we show that these partial
silhouettes of occluders, along with a single shadow ray, are
sufficient for reconstructing the visibility function between the
point and the light source.
The second method does not cast shadow rays. Instead, we place both
the points to be shaded and the samples of an area light source into
separate hierarchies, and compute hierarchically the shadows caused by
each occluding triangle. This yields an efficient algorithm with
memory requirements independent of the complexity of the scene.
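For context, the baseline that the first method accelerates looks roughly like the sketch below: estimate the visible fraction of an area light by casting many shadow rays. The scene here (a unit-sphere occluder and a square light) is purely illustrative and not from the talk.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Stand-in scene: a unit sphere at the origin occludes a square area
    // light in the plane y = 5.
    static Vec3 sampleLight(int i, int n) {
        int side = (int)std::sqrt((float)n);        // stratified grid of light samples
        float u = (i % side + 0.5f) / side;
        float v = (i / side + 0.5f) / side;
        return {2.0f * u - 1.0f, 5.0f, 2.0f * v - 1.0f};
    }

    static bool occluded(Vec3 from, Vec3 to) {
        Vec3 d = sub(to, from);                     // shadow ray as a segment, t in [0,1]
        float a = dot(d, d);
        float b = 2.0f * dot(from, d);
        float c = dot(from, from) - 1.0f;
        float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f) return false;              // ray misses the sphere
        float t = (-b - std::sqrt(disc)) / (2.0f * a);
        return t > 0.0f && t < 1.0f;                // hit lies between point and light
    }

    // Baseline soft-shadow estimate: the fraction of the area light visible
    // from shading point p, computed with n shadow rays. The talk's first
    // method reproduces this result with a single shadow ray plus a
    // visibility function reconstructed from occluder silhouette edges.
    float lightVisibility(Vec3 p, int n = 256) {
        int open = 0;
        for (int i = 0; i < n; ++i)
            if (!occluded(p, sampleLight(i, n)))
                ++open;
        return (float)open / (float)n;
    }

    int main() {
        std::printf("visibility in penumbra: %.3f\n", lightVisibility({0.8f, -3.0f, 0.0f}));
    }

The hundreds of ray casts inside this loop, repeated for every shading point, are exactly the cost the silhouette-based reconstruction removes.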
 
Immersive Virtual Reality for Oil Exploration and Production
Dr. Christopher Giertsen, Christian Michelsen Research, Bergen, Norway
The process of locating oil reserves and positioning new oil wells involves many complex data types and many professional disciplines.
The data sets are often extremely large, irregular, three-dimensional, and dynamic, and may include many associated measured or
simulated parameters. It is a great challenge to visualize and analyze such data, particularly when data sets from different
disciplines need to be combined and manipulated simultaneously in real time.
This talk presents an overview of a long-term research project, where the aim has been to make use of large screen
visualization and virtual reality interaction in order to improve critical oil company work processes. First,
the project idea and the most important data types will be described. Then, some of the new visualization methodology
and interaction techniques developed in the project will be reviewed. This also includes an outline of unsolved visualization
research issues. Finally, the business impact of the project results will be summarized.
 
|
To Spelunk or Not to Spelunk: Does Immersive Virtual Reality Help Science?
|
David Laidlaw, Brown University
The speaker will present the results of several experiments to evaluate visualization environments.
Together, the results help to explain some of the tradeoffs between large-format 3D virtual-reality displays
(e.g., a Cave) and other display formats. All of the results are motivated by the belief that immersive virtual
reality has the potential to accelerate the pace of scientific discovery for scientists studying large complicated 3D problems.
The results come from experiments representing a number of different approaches: first,
anecdotal reports about scientists using visualization applications; second, performance measurements of non-expert
subjects on abstracted tasks; third, evidence about the impact of the virtual environment on performance; and fourth,
subjective evaluations by visual design experts. As might be expected when asking which displays performed better, the
answer is that it depends on the scientific application, on the tasks used in evaluations, and on the details of the display
technologies. The speaker will conclude with some thoughts on how the different evaluation approaches complement each
other to give a more complete picture.
 
Techniques for real-time high quality rendering of complex models
Prof. Reinhard Klein, Universität Bonn
Despite recent advances in finding effective LOD representations for gigantic 3D objects, rendering complex,
gigabyte-sized models and environments is still a challenging task, especially under real-time constraints and high demands
on visual accuracy.

In the first part of this talk I will give an overview of our recent results on the simplification and efficient hybrid
rendering of complex meshes and point clouds. After introducing the general hierarchical concept, I will present two hybrid
LOD algorithms for real-time rendering of complex models and environments. In the first approach we use points and triangles
as the basic rendering primitives. To preserve the appearance of an object, we developed a special error measure for
simplification that allows us to steer the LOD generation in such a way that both the geometric and the appearance deviation
are bounded in image space. A novel hierarchical approach supports the efficient computation of the Hausdorff distance
between the simplified and original mesh during simplification. In the second approach we refrain from combining triangles
with points. Instead we replace most of the points by planes. Using these planes, the filtering and therefore the rendering
quality is comparable to elaborate point rendering methods but significantly faster, since it is supported in hardware.

In the second part we concentrate on efficient GPU-based rendering of trimmed Non-Uniform Rational B-Spline (NURBS) surfaces.
Due to the irregular mesh data structures required for trimming, there have so far been no algorithms that exploit the GPU for
tessellation. Instead, all recent approaches perform a pre-tessellation and use level-of-detail techniques to deal with complex
trimmed NURBS models. In contrast to a simple API, these methods require tedious preparation of the models before rendering,
and this pre-processing hinders interactive editing. With our new method the trimming region is defined by a trim-texture
that is dynamically adapted to the required resolution and allows for efficient trimming of surfaces on the GPU. Combining
this new method with a GPU-based tessellation of cubic rational surfaces yields a new rendering algorithm for arbitrarily
trimmed NURBS and even T-Spline surfaces with prescribed error in screen space on the GPU. The performance exceeds current
CPU-based techniques by a factor of about 200 and makes real-time visualization of trimmed NURBS and T-Spline surfaces possible
on consumer-level graphics cards.
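The abstract does not give the talk's specific error measure, but the general mechanism behind a "prescribed error in screen space" can be sketched with the standard projection of an object-space deviation bound into pixels. The formula and the numbers below are illustrative, not from the talk.

    #include <cmath>
    #include <cstdio>

    // Pixels subtended by an object-space error of size 'err' at distance
    // 'dist' from a perspective camera with vertical field of view 'fovY'
    // (radians) and 'screenHeight' pixels. An LOD is refined until this
    // value drops below a tolerance (typically about one pixel).
    float screenSpaceError(float err, float dist, float fovY, float screenHeight) {
        return err * screenHeight / (2.0f * dist * std::tan(0.5f * fovY));
    }

    int main() {
        // Example: a 5 mm simplification error viewed from 10 m on a
        // 1080-line display with a 60-degree field of view.
        float px = screenSpaceError(0.005f, 10.0f, 1.0472f, 1080.0f);
        std::printf("projected error: %.2f px\n", px);  // ~0.47 px: no refinement needed
    }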
 
Visualizing Sparsely Scanned Outdoor Environments
Baoquan Chen, University of Minnesota at Twin Cities
Capturing and animating real-world scenes has attracted increasing
research interest. To offer unconstrained navigation of the scenes, 3D
representations are first needed. Advances in laser scanning
technology are making 3D acquisition feasible for objects at ever larger
scales. However, outdoor environment scans exhibit the following
properties: (1) incompleteness - a complete scan of every object in the
environment is impossible to obtain due to self- and inter-object
obstruction and the constrained accessibility of the scanner; (2) complexity
- natural objects, such as trees and plants, have complex geometric shapes;
(3) inaccuracy - data can be unreliable due to
scanning hardware limitations and the movement of objects such as plants
and trees during the scanning process; and (4) large data size. These
properties raise unprecedented challenges for existing methods. In this
talk, I will describe our solutions to these challenges.
They follow two directions: the first is artistic
abstraction and depiction of point clouds, and the second is
constructing full geometry out of limited scans.
 
Suppose the World was Piecewise Plastic?
Professor Charles Hansen, University of Utah
Is it ridiculous to think of the world as nothing but plastic?
That is precisely the assumption most volume renderers make by using
the Phong illumination model.
Direct volume rendering has proven to be an effective and flexible visualization method for
interactive exploration and analysis of 3D scalar fields. While widely used, most if not all
applications render (semi-transparent) surfaces lit by an approximation to the Phong local
surface shading model. This model renders surfaces simplistically (as plastic objects) and
does not provide sufficient lighting information for good spatial acuity. In fact, the constant
ambient term leads to misperception of information that limits the effectiveness of
visualizations. Furthermore, the Phong shading model was developed for surfaces, not
volumes. The model does not work well for volumetric media where sub-surface scattering
dominates the visual appearance (e.g. tissue, bone, marble, and atmospheric phenomena).
As a result, it is easy to miss interesting phenomena during data exploration and analysis.
Worse, these types of materials occur often in modeling and simulation of the physical world.
Physically correct lighting has been studied in the context of computer graphics where it has
been shown that the transport of light is computationally expensive for even simple scenes.
Yet for visualization, interactivity is necessary for effective understanding of the underlying
data. We seek increased insight into volumetric data through the use of more faithful
rendering methods that take into consideration the interaction of light with the volume itself.
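The model under criticism, as most volume renderers apply it, shades each sample with a local Phong-style term (here the Blinn-Phong variant) using the scalar field's normalized gradient as the surface normal. A minimal sketch with illustrative coefficients:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 normalize(Vec3 a) {
        float len = std::sqrt(dot(a, a));
        return scale(a, 1.0f / std::max(len, 1e-8f));
    }

    // Local shading at a volume sample: the gradient stands in for a
    // surface normal. Note the constant ambient term ka - the term the
    // abstract blames for misperception - and the complete absence of any
    // scattering inside the volume. Coefficients are illustrative.
    Vec3 phongVolumeSample(Vec3 gradient, Vec3 toLight, Vec3 toEye, Vec3 baseColor) {
        const float ka = 0.2f, kd = 0.6f, ks = 0.2f, shininess = 32.0f;
        Vec3 n = normalize(gradient);
        Vec3 l = normalize(toLight);
        Vec3 v = normalize(toEye);
        float diff = std::max(dot(n, l), 0.0f);
        Vec3 h = normalize(add(l, v));              // Blinn-Phong half vector
        float spec = std::pow(std::max(dot(n, h), 0.0f), shininess);
        return add(scale(baseColor, ka + kd * diff), scale({1.0f, 1.0f, 1.0f}, ks * spec));
    }

Everything this function returns depends only on the sample itself and two directions; light that scatters through the surrounding volume, which dominates the appearance of tissue or marble, never enters the computation.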
 
The future is not framed
David Luebke, University of Virginia
The ultimate display will not show images. To drive the display of the
future, we must abandon our traditional concepts of pixels, and of
images as grids of coherent pixels, and of imagery as a sequence of
images.
So what is this ultimate display? One thing is obvious: the display of
the future will have incredibly high resolution. A typical monitor
today has 100 dpi, far below the resolution of a satisfactory printer. Several
technologies offer the prospect of much higher resolutions; even today
you can buy a 300 dpi e-book. Accounting for hyperacuity, one can make
the argument that a "perfect" desktop-sized monitor would require
about 6000 dpi, call it 11 gigapixels. Even if we don't seek a perfect
monitor, we do want large displays. The very walls of our offices
should be active display surfaces, addressable at a resolution
comparable to or better than that of current monitors.
It's not just spatial resolution, either. We need higher temporal
resolution: hardcore gamers already use single buffering to reduce
delays. The human factors literature justifies this: even 15 ms of
delay can harm task performance. Exotic technologies (holographic,
autostereoscopic...) just increase the spatial, temporal, and
directional resolution required. Suppose we settle for 1-gigapixel
displays that can refresh at 240 Hz, roughly 4000x typical display
bandwidths today. Recomputing and refreshing every pixel every
time is a Bad Idea, for power and thermal reasons if nothing else.
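A quick check of that factor, assuming as the baseline a typical monitor of about 2 megapixels refreshed at 30 Hz (the baseline figures are an assumption, not the speaker's):

    \frac{10^{9}\,\mathrm{px} \times 240\,\mathrm{Hz}}{2 \times 10^{6}\,\mathrm{px} \times 30\,\mathrm{Hz}}
      = \frac{2.4 \times 10^{11}\,\mathrm{px/s}}{6 \times 10^{7}\,\mathrm{px/s}} = 4000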
We will present an alternative: discard the frame. Send the display
streams of samples (location+color) instead of sequences of images.
Build hardware into the display to buffer and reconstruct images from
these samples. Exploit temporal coherence: send samples less often
where imagery is changing slowly. Exploit spatial coherence: send
fewer samples where imagery is low-frequency. Without the rigid
sampling patterns of framed renderers, sampling and reconstruction can
adapt with very fine granularity to spatio-temporal image change.
Sampling uses closed-loop feedback to guide sampling toward edges or
motion in the image. A temporally deep buffer stores all the samples
created over a short time interval for use in reconstruction.
Reconstruction responds both to sampling density and spatio-temporal
color gradients. We argue that this will reduce bandwidth requirements
by 1-2 orders of magnitude, and show results from our preliminary
experiments.
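A minimal sketch of the data flow the abstract proposes: samples carrying location, time, and color stream into a temporally deep buffer, and the display reconstructs any pixel on demand from nearby recent samples. All names and the fixed Gaussian kernel below are illustrative; the proposed reconstruction also adapts to sampling density and spatio-temporal color gradients.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // A frameless display pipeline: the renderer streams samples
    // (location + time + color) instead of whole frames; the display keeps
    // a short time window of samples (a "temporally deep buffer") and
    // reconstructs pixels on demand.
    struct Sample { float x, y, t, r, g, b; };

    struct DeepBuffer {
        std::vector<Sample> samples;
        float window = 0.05f;   // keep roughly the last 50 ms of samples

        void insert(const Sample& s, float now) {
            samples.push_back(s);
            // Drop samples older than the window (linear scan for clarity).
            samples.erase(std::remove_if(samples.begin(), samples.end(),
                                         [&](const Sample& q) { return now - q.t > window; }),
                          samples.end());
        }

        // Reconstruct pixel (px, py) at time 'now' as a weighted average of
        // nearby, recent samples: a spatial Gaussian times an exponential
        // falloff in age, so fresh and close samples dominate.
        void reconstruct(float px, float py, float now, float out[3]) const {
            float wSum = 0.0f, c[3] = {0.0f, 0.0f, 0.0f};
            for (const Sample& s : samples) {
                float d2 = (s.x - px) * (s.x - px) + (s.y - py) * (s.y - py);
                float w = std::exp(-0.5f * d2) * std::exp(-(now - s.t) / 0.02f);
                c[0] += w * s.r; c[1] += w * s.g; c[2] += w * s.b;
                wSum += w;
            }
            for (int i = 0; i < 3; ++i)
                out[i] = (wSum > 0.0f) ? c[i] / wSum : 0.0f;
        }
    };

Because reconstruction averages over the time window, slowly changing regions stay sharp with few new samples, while rapidly changing regions are refreshed wherever the renderer concentrates its sampling.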
 