VolRendYo!
Description
VolRendYo! is a program for visualizing volume data. It adds a depth-of-field effect to a slice-based direct volume renderer, an approach proposed by Mathias Schott, Pascal Grosset, Tobias Martin, Charles Hansen, and Vincent Pegoraro. The program demonstrates that a depth-of-field effect helps users better perceive what is in front of what in volume renderings.
Downloads
Executable
Source (Visual C++ 2015)
Doxygen Documentation
Website
How to start
Edit "start.bat" to set your resolution, monitor refresh rate, full-screen state, and dataset index, then double-click it. Alternatively, start volrendyo.exe directly to use the default settings. At a resolution of 1024x768, an NVIDIA GeForce GTX 780 reaches approx. 20 fps.
If the program reports missing DLLs, install the Visual C++ 2015 Redistributable Packages and/or the DirectX End-User Runtimes.
How to build
- Open the solution with Visual Studio 2015.
- Select the normal Release configuration. (Debug takes a while to preprocess the normals; the other configurations are not set up.)
- Build.
- Run from Visual Studio or start it with the batch file in the release folder.
- You can choose a volume data set via a command-line parameter.
Controls
Steer the camera with WASD, Space, Ctrl, and the mouse; hold Shift for a speed boost.
Implementation details
Libraries
- We used GLM for all vector and matrix math.
- We used OpenGL with GLEW for loading the OpenGL extensions.
- We used GLFW for input handling, window management, and OpenGL initialization.
- We used FMOD to play the music.
Volume rendering
We re-use the slice-based volume renderer from the previous project. It is based on GPU Gems Chapter 39, including the shadowing technique. Instead of sheep wool or clouds, it now renders downloaded volume data from industrial or medical CT scans.
Important code parts
The important newly implemented parts for Visualization 2 are:
- ...src\resources\present246x246x221.dat etc. 3D volume data
- ...src\resources\textures\transfer_function.png 2D transfer function
- ...src\resources\shaders\VolumeRendering.fs Volume rendering shader
- ...src\broken_magic\src\Engine\VolumeRenderingPipeline.cpp Volume slicing code
- ...src\broken_magic\src\Engine\VolumeTexture.cpp OpenGL 3D texture and data loader
- ...src\broken_magic\src\Engine\VolumetricObject.cpp Volumetric scene graph object
- ...src\broken_magic\src\Game\BMGameManager.cpp Creates and controls the scene
- ...src\broken_magic\src\Engine\MagicEngineMain.cpp Engine main class, controls all pipelines
- ...src\broken_magic\src\Engine\Camera.cpp Scene graph object representing the camera
- ...src\broken_magic\src\Engine\FrameBuffer.cpp OpenGL FBO wrapper
- ...src\broken_magic\src\Engine\GameObject.cpp Base class for all scene graph objects
- ...src\broken_magic\src\Engine\IRenderPipeline.cpp Interface for rendering pipelines
- ...src\broken_magic\src\Engine\IGameManager.cpp Interface for game managers
- ...src\broken_magic\src\Engine\Light.cpp Scene graph object representing a light
- ...src\broken_magic\src\Engine\GLSLProgram.cpp OpenGL shader program wrapper
Preprocessing
The program can load a number of pre-defined volume data sets. Only pre-defined ones, because some data sets have a header containing the resolution and bit depth, while others lack any header. The program loads the binary data from the file into main memory and converts it into a format usable by the graphics card, i.e. it shifts the (usually 14 used) bits of each 16-bit value from the file down to 8 bits. It then calculates the gradient (normals) for each voxel, which is afterwards filtered with a simple box filter. The density and gradient are uploaded to graphics memory as a 3D texture with four components. Finally, the program creates a scene graph consisting of a node for the volumetric object, a circling light source, and a movable camera.
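The bit-shift and gradient steps above can be sketched as follows. This is a minimal illustration, not the actual loader code: the function names are ours, we assume 14 significant bits per voxel (hence the shift by 6), use central differences for the gradient, and omit the box-filter pass.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// Convert raw 16-bit voxels (assumed 14 significant bits) to 8-bit density:
// shifting right by 6 maps the 14 used bits down to the top 8.
std::vector<uint8_t> toDensity8(const std::vector<uint16_t>& raw) {
    std::vector<uint8_t> out(raw.size());
    for (size_t i = 0; i < raw.size(); ++i)
        out[i] = static_cast<uint8_t>(raw[i] >> 6);
    return out;
}

// Central-difference gradient for voxel (x, y, z), clamped at the borders.
std::array<float, 3> gradient(const std::vector<uint8_t>& d,
                              int x, int y, int z,
                              int w, int h, int depth) {
    auto at = [&](int i, int j, int k) {
        i = std::clamp(i, 0, w - 1);
        j = std::clamp(j, 0, h - 1);
        k = std::clamp(k, 0, depth - 1);
        return static_cast<float>(d[(size_t)k * w * h + (size_t)j * w + i]);
    };
    return { (at(x + 1, y, z) - at(x - 1, y, z)) * 0.5f,
             (at(x, y + 1, z) - at(x, y - 1, z)) * 0.5f,
             (at(x, y, z + 1) - at(x, y, z - 1)) * 0.5f };
}
```

The density and the three gradient components together give the four channels of the 3D texture mentioned above.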
Rendering
Our renderer implements the process described in the paper by Mathias Schott, Pascal Grosset, Tobias Martin, Charles Hansen, and Vincent Pegoraro. For the slice-based rendering, we use quads as proxy geometry; these quads are rotated to face the camera. For each fragment of a slice, we sample the 3D texture to get the density and normal of the corresponding voxel. This information serves as input for the transfer function, a 2D texture, whose result is the fragment's color. We have two stacks of slices: one rendered back-to-front and one rendered front-to-back. The first uses the over operator to blend its fragments with the colors behind them; the second uses the under operator. The results of both stacks are blended and displayed.
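The two compositing operators can be illustrated on the CPU with premultiplied-alpha colors (a sketch; the struct and function names are ours, not from the code base):

```cpp
#include <cassert>
#include <cmath>

// Premultiplied-alpha color.
struct RGBA { float r, g, b, a; };

// Over operator: composite a new slice IN FRONT of the accumulated color
// (back-to-front compositing).
RGBA over(const RGBA& src, const RGBA& dst) {
    float t = 1.0f - src.a;
    return { src.r + t * dst.r, src.g + t * dst.g,
             src.b + t * dst.b, src.a + t * dst.a };
}

// Under operator: composite a new slice BEHIND the accumulated color
// (front-to-back compositing).
RGBA under(const RGBA& src, const RGBA& dst) {
    float t = 1.0f - dst.a;
    return { dst.r + t * src.r, dst.g + t * src.g,
             dst.b + t * src.b, dst.a + t * src.a };
}
```

Compositing the same slices back-to-front with over and front-to-back with under yields the same result, which is why the two stacks can be blended together at the end.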
To achieve the depth-of-field effect, we don't just blend a slice's fragments with the fragments of the previous slice. Instead, we sample the previous slice multiple times within a certain circle of confusion. The radius of this circle of confusion is determined by the slice's distance to the focus slice and by the strength of the depth-of-field effect.
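As a sketch of this idea (all names are illustrative; the actual shader may use a different falloff and sampling pattern), the radius and the sample offsets on the previous slice could be computed like this:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Circle-of-confusion radius for a slice: grows with the slice's distance
// to the focus slice, scaled by the depth-of-field strength.
// (A simple linear model chosen for illustration.)
float cocRadius(float sliceDepth, float focusDepth, float dofStrength) {
    return dofStrength * std::fabs(sliceDepth - focusDepth);
}

// Offsets evenly spaced on the circle of confusion; the previous slice is
// sampled at (uv + offset) for each of them and the results are averaged.
std::vector<std::array<float, 2>> cocOffsets(int n, float radius) {
    std::vector<std::array<float, 2>> offsets(n);
    for (int i = 0; i < n; ++i) {
        float angle = 6.2831853f * static_cast<float>(i) / n;
        offsets[i] = { radius * std::cos(angle), radius * std::sin(angle) };
    }
    return offsets;
}
```

Slices at the focus depth get a radius of zero and stay sharp, while slices far from it are blurred over a wider area.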
About the project
We created this demo during the Visualization 2 lecture of our studies at TU Wien. We re-used parts of our engines from the real-time graphics exercise. The depth-of-field effect is based on the paper by Mathias Schott, Pascal Grosset, Tobias Martin, Charles Hansen, and Vincent Pegoraro.
Credits
- Programming: Philipp Erler, Robin Melán
- Volume Data: Christmas present and stag beetle from TU Wien, backpack from Uni Tübingen