INF359 assignment 2: Multimodal Rendering
By Michal Papež
20 May 2010
- Show the MIP and DVR of the PET activation
- Show the brain data with high activity in the context of the skull
- Show the brain 3D surface with mapped activity values
I have created a VolumeShop plugin for multimodal visualization using DVR, MIP, and cutting planes. The GLSL shader operates in two phases.
First, it performs DVR and traverses along the ray until it has passed through the bone. Bone is recognized from the CT data as having a density higher than a given threshold.
The ray is then traversed in "soft-tissue" mode. In this mode, rendering can be done using either MIP or DVR. The values used for voxel
coloring, shading (via computed gradients), and opacity can each be selected from a different modality (CT, MRI T1, MRI T2, or PET), and
every modality may use its own transfer function.
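The two-phase traversal can be sketched as follows. This is a CPU-side Python stand-in for the GLSL fragment shader, not the plugin's actual code; the function and parameter names (`march_ray`, `bone_threshold`, the transfer functions) are illustrative assumptions.

```python
# Sketch of the two-phase ray traversal: phase 1 does front-to-back DVR
# while the CT sample counts as bone; phase 2 switches to "soft-tissue"
# mode, shown here as MIP over the PET activation. All names are
# illustrative, not the plugin's real interface.

def march_ray(ct, pet, num_steps, bone_threshold, tf_ct, tf_pet):
    """ct, pet: per-sample values along one ray (same length).
       tf_ct, tf_pet: transfer functions mapping a value to (color, opacity)."""
    color, alpha = 0.0, 0.0
    i = 0
    # Phase 1: DVR until the ray has passed through the bone.
    while i < num_steps and ct[i] > bone_threshold:
        c, a = tf_ct(ct[i])                 # CT transfer function
        color += (1.0 - alpha) * a * c      # front-to-back compositing
        alpha += (1.0 - alpha) * a
        i += 1
    # Phase 2: soft-tissue mode, here MIP over the remaining PET samples.
    mip = 0.0
    while i < num_steps:
        mip = max(mip, pet[i])
        i += 1
    c, a = tf_pet(mip)                      # PET transfer function
    color += (1.0 - alpha) * a * c
    alpha += (1.0 - alpha) * a
    return color, alpha
```

In the real shader the compositing runs per RGB channel and the transfer functions are texture lookups, but the control flow is the same.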
These options give the plugin enough flexibility to produce visualizations for all three tasks.
MRI T1 proved to be the best modality for shading the brain. For brain localization (segmentation), the two-phase approach described above was used
in combination with an opacity transfer function; PET and MRI T2 were the modalities best suited to locating the brain in this way.
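The per-property modality selection described above can be sketched like this: opacity is looked up in one modality's transfer function (here PET, which localizes the brain well) while color comes from another (MRI T1). Again a hedged CPU-side Python sketch with assumed names, not the plugin's interface.

```python
# Minimal sketch of mixing modalities per property: each sample's color
# is taken from the MRI T1 transfer function, while its opacity comes
# from the PET transfer function, which gates the accumulated ray to the
# high-activity brain region. Names are illustrative.

def composite_soft_tissue(t1, pet, tf_color, tf_opacity):
    """Front-to-back DVR over paired MRI T1 / PET samples along one ray."""
    color, alpha = 0.0, 0.0
    for v_t1, v_pet in zip(t1, pet):
        c = tf_color(v_t1)       # color from MRI T1
        a = tf_opacity(v_pet)    # opacity from PET localizes the brain
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:         # early ray termination
            break
    return color, alpha
```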