Programming documentation
About this document
This document is meant as a short introduction to the application's internals. It should be a short, easy read for those who would like to modify or extend the application.
The application is written using the Qt library and OpenGL, and uses the Cg shading language for its shaders. MSVC 2008 has been used as the development platform, and the application has been successfully tested under Windows XP with a GeForce 9300M GS card.
User interface
The whole user interface is implemented using Qt. The following QWidget-derived controls form the backbone of the user interface:
- TransferFunctionDisplayWidget - displays the transfer function.
- TransferControlWidget - allows manipulation of the transfer function.
- PlaneInputWidget - a widget that allows the user to choose the orientation of the slicing plane.
- PlaneControlWidget - contains an instance of PlaneInputWidget and a few checkboxes.
- ColorSelectionWidget - a slightly modified version of the default color selection widget.
- ControlWidget - encapsulates PlaneControlWidget and TransferControlWidget.
The RendererWidget - an OpenGL-enabled control - could also be considered part of the UI. All of the rendering described in the following sections is done inside it.
A dropdown menu is present at the top of the window.
Data structures
Histogram is a class that encapsulates the dataset's intensity values, calculating the density of each of these intensities. It is primarily used by the GUI. The Settings class houses a part of the application's state, for example the slicing settings. Other state variables are spread out over the whole application in their respective classes. The dataset is loaded into a 3D 16-bit luminance texture, from which it is sampled when necessary.
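As an illustration, a minimal sketch of uploading a raw 16-bit volume as a 3D luminance texture through OpenGL is shown below. The function name, the voxel buffer and the dimension parameters are assumptions made for the example; they do not correspond to the application's actual loader.

    // Minimal sketch (not the application's loader): upload a raw 16-bit
    // volume as a 3D luminance texture.
    #include <GL/glew.h>
    #include <vector>

    GLuint loadVolumeTexture(const std::vector<unsigned short>& voxels,
                             int width, int height, int depth)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);

        // Linear filtering and edge clamping are typical choices for
        // slicing and ray marching.
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

        // One unsigned short per voxel, stored as 16-bit luminance.
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16,
                     width, height, depth, 0,
                     GL_LUMINANCE, GL_UNSIGNED_SHORT, voxels.data());
        return tex;
    }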
Slicing
Slicing is possible both for the main planes (XY, YZ, ZX) and for arbitrary planes (specified by their normal and offset). The dataset (a non-regular cube) is intersected analytically with the plane by intersecting each of the cube's edges with the plane and storing the resulting points in an intermediate buffer. The result is a set of points that lie on the plane and define the shape of the resulting slice. First, we discard three special cases: no intersection point (obviously nothing to be seen), one intersection point (a single point, still not much to be seen), and two intersection points (a line, still not very useful). For three or more points, we compute their convex combination with equal weights (i.e. the centroid). Since this point is guaranteed to lie inside the (convex) polygon defined by the intersection points, we order them counter-clockwise around it and build a triangle fan out of them. The slice is rendered onto this triangle fan (the texture coordinates are the intersection coordinates within our non-regular cube, scaled to the [0,1] range). This method naturally handles both the arbitrary-plane and main-plane cases.
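The following sketch shows this geometry step in plain C++. The Vec3 type, the vector helpers and the edge-list representation are assumptions made for the example and do not mirror the application's actual classes.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3  add(Vec3 a, Vec3 b)  { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3  sub(Vec3 a, Vec3 b)  { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  mul(Vec3 a, float s) { return Vec3{a.x * s, a.y * s, a.z * s}; }
    static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(Vec3 a, Vec3 b)
    { return Vec3{a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }

    // Intersect every cube edge (a, b) with the plane dot(n, p) = d and
    // collect the intersection points.
    std::vector<Vec3> intersectBoxWithPlane(const Vec3 edges[12][2], Vec3 n, float d)
    {
        std::vector<Vec3> pts;
        for (int i = 0; i < 12; ++i) {
            Vec3 a = edges[i][0], b = edges[i][1];
            float da = dot(n, a) - d;
            float db = dot(n, b) - d;
            if (da == 0.0f && db == 0.0f) continue;   // edge lies in the plane
            if (da * db > 0.0f) continue;             // both endpoints on the same side
            float t = da / (da - db);                 // parametric position of the hit
            pts.push_back(add(a, mul(sub(b, a), t)));
        }
        return pts;
    }

    // Order the points counter-clockwise (as seen from the plane normal)
    // around their centroid so they can be drawn as a triangle fan.
    void orderForTriangleFan(std::vector<Vec3>& pts, Vec3 n)
    {
        if (pts.size() < 3) return;                   // point or line: nothing to draw
        Vec3 c{0.0f, 0.0f, 0.0f};
        for (const Vec3& p : pts) c = add(c, p);
        c = mul(c, 1.0f / pts.size());                // convex combination, equal weights

        Vec3 ref = sub(pts[0], c);                    // in-plane reference axis
        Vec3 bit = cross(n, ref);                     // second in-plane axis
        std::sort(pts.begin(), pts.end(), [&](const Vec3& p, const Vec3& q) {
            Vec3 vp = sub(p, c), vq = sub(q, c);
            return std::atan2(dot(vp, bit), dot(vp, ref))
                 < std::atan2(dot(vq, bit), dot(vq, ref));
        });
    }

The sorted points can then be submitted as a GL_TRIANGLE_FAN, with texture coordinates derived from the cube coordinates scaled to [0,1] as described above.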
By default, the slices show the luminance data. There is also the possibility to color the slices; if enabled, a shader program reads in the transfer function and assigns alpha/color values to each pixel based on its intensity.
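Conceptually, this per-pixel classification amounts to looking the sampled intensity up in the transfer-function table, as in the following C++ sketch; the table layout and the classify name are illustrative, and the real code lives in the Cg shader.

    #include <vector>

    struct Rgba { float r, g, b, a; };

    // Map a normalized intensity in [0, 1] to a colour/opacity through the
    // transfer-function table (e.g. 256 entries).
    Rgba classify(float intensity, const std::vector<Rgba>& transferFn)
    {
        int last = static_cast<int>(transferFn.size()) - 1;
        int idx  = static_cast<int>(intensity * last + 0.5f);
        if (idx < 0)    idx = 0;                  // clamp out-of-range intensities
        if (idx > last) idx = last;
        return transferFn[idx];
    }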
Volumetric rendering
Only orthographic projection is supported. The camera is centered around the origin (as is the dataset) and is rotated around it using the trackball method. The camera can also be translated sideways (along its up and side vectors), but only so far that it never moves further than a certain threshold distance from the origin. After such a translation, rotation happens around this new centerpoint.
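The clamping of the sideways translation can be sketched as follows; QVector3D is used purely for illustration, and maxPanDistance is an assumed parameter rather than a value taken from the application.

    #include <QVector3D>

    // Keep the camera's rotation centre within maxPanDistance of the origin.
    // Subsequent trackball rotation then happens around the shifted centre.
    QVector3D clampPan(const QVector3D& panOffset, float maxPanDistance)
    {
        float len = panOffset.length();
        if (len > maxPanDistance)
            return panOffset * (maxPanDistance / len);
        return panOffset;
    }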
Beyond that, three rendering algorithms are supported: back-to-front, front-to-back, and an experimental (not really working) MIP algorithm.
The back-to-front algorithm finds the front intersection point (it can be deduced from the texture coordinate) and then a position a safe distance away from it along the camera's viewing direction (referred to as the forward vector). From there, it starts ray marching in the direction opposite to the camera's viewing direction. In each step, the current and the previous values are blended together. All steps outside of the dataset are discarded. This algorithm has no early-exit ability and runs slower than the front-to-back one. There is no lighting implemented for the back-to-front case.
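A CPU-side sketch of the back-to-front compositing loop is given below; in the application this loop runs in a Cg fragment shader. Vec3 (with its helpers) and the Rgba struct are reused from the sketches above, while sampleVolume, insideDataset, a single-argument classify variant and the step parameters are assumed helpers, not actual functions from the code base.

    float sampleVolume(Vec3 pos);      // assumed: normalized intensity at pos
    bool  insideDataset(Vec3 pos);     // assumed: true while pos lies in the data
    Rgba  classify(float intensity);   // assumed: transfer-function lookup

    Rgba compositeBackToFront(Vec3 frontHit, Vec3 viewDir,
                              float safeDistance, float stepSize, int numSteps)
    {
        // Start a safe distance behind the front intersection point and march
        // back towards the camera (opposite to the viewing direction).
        Vec3 pos = add(frontHit, mul(viewDir, safeDistance));
        Rgba accum = {0.0f, 0.0f, 0.0f, 0.0f};
        for (int i = 0; i < numSteps; ++i) {
            if (insideDataset(pos)) {                 // steps outside are discarded
                Rgba s = classify(sampleVolume(pos));
                // "Over" operator applied back to front: the new sample is
                // blended on top of what has been accumulated behind it.
                accum.r = s.r * s.a + accum.r * (1.0f - s.a);
                accum.g = s.g * s.a + accum.g * (1.0f - s.a);
                accum.b = s.b * s.a + accum.b * (1.0f - s.a);
                accum.a = s.a       + accum.a * (1.0f - s.a);
            }
            pos = sub(pos, mul(viewDir, stepSize));
        }
        return accum;
    }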
The front-to-back algorithm finds the front intersection point and starts ray marching in the direction of the camera's forward vector. At each step, the dataset is sampled and the values are blended together. The alpha value is accumulated, and if we step outside of the dataset or if the alpha value is high enough (0.95 in our case), we exit early. A lighting model is implemented for this case.
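The corresponding front-to-back loop, including the early exit once the accumulated alpha reaches 0.95, can be sketched in the same style, with the same assumed helpers as above.

    Rgba compositeFrontToBack(Vec3 frontHit, Vec3 viewDir,
                              float stepSize, int maxSteps)
    {
        Vec3 pos = frontHit;                          // start at the front intersection
        Rgba accum = {0.0f, 0.0f, 0.0f, 0.0f};
        for (int i = 0; i < maxSteps; ++i) {
            if (!insideDataset(pos) || accum.a >= 0.95f)
                break;                                // early exit
            Rgba s = classify(sampleVolume(pos));
            // Front-to-back "under" operator: new samples are attenuated by
            // the opacity already accumulated in front of them.
            float w = (1.0f - accum.a) * s.a;
            accum.r += s.r * w;
            accum.g += s.g * w;
            accum.b += s.b * w;
            accum.a += w;
            pos = add(pos, mul(viewDir, stepSize));
        }
        return accum;
    }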
Possible improvements
- Perspective projection - would require a slight change in the shaders.
- Better camera mode - would require a revamp of the way transformations are done.