INF359 assignment 2: Multimodal Rendering

By Michal Papež
20 May 2010


Tasks

Implementation

I have created a VolumeShop plugin for multimodal visualization using DVR, MIP, and cutting planes. The GLSL shader operates in two phases. First it performs DVR, traversing along the ray until it leaves the bone; bone is recognized from the CT data as having a density above a given threshold. The ray is then traversed in "soft-tissue" mode, in which the rendering can be done with either MIP or DVR. The values used for voxel coloring, shading (by computing gradients), and opacity can each be taken from a different modality (CT, MRI T1, MRI T2, or PET), and each modality may use its own transfer function.
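
The traversal could look roughly like the following GLSL sketch. All uniform, sampler, and function names (uCT, uTfOpacity, traverse, and so on) are illustrative assumptions rather than the plugin's actual interface, and the sketch hard-codes one configuration (opacity from PET, color from MRI T2) out of the selectable combinations described above.

    // Sketch of the two-phase traversal; names are assumptions, not the
    // plugin's real interface.
    uniform sampler3D uCT, uT2, uPET;        // one 3D texture per modality
    uniform sampler1D uTfCT;                 // transfer function for bone (CT)
    uniform sampler1D uTfOpacity, uTfColor;  // soft-tissue transfer functions
    uniform float uBoneThreshold;            // CT density threshold for bone
    uniform int   uSoftMode;                 // 0 = DVR, 1 = MIP
    uniform float uStepSize;
    uniform int   uNumSteps;

    vec4 traverse(vec3 entry, vec3 dir)
    {
        vec4  dst = vec4(0.0);    // front-to-back accumulated color/opacity
        float mipVal = 0.0;       // maximum intensity for the MIP mode
        vec3  mipColor = vec3(0.0);
        bool  softPhase = false;  // false = phase 1 (bone), true = phase 2
        bool  enteredBone = false;

        for (int i = 0; i < uNumSteps; ++i) {
            vec3  p  = entry + dir * (uStepSize * float(i));
            float ct = texture3D(uCT, p).r;

            if (!softPhase) {
                // phase 1: DVR until the ray has passed through the bone
                if (ct > uBoneThreshold) {
                    enteredBone = true;
                    vec4 src = texture1D(uTfCT, ct);
                    dst.rgb += (1.0 - dst.a) * src.a * src.rgb;
                    dst.a   += (1.0 - dst.a) * src.a;
                } else if (enteredBone) {
                    softPhase = true;  // the ray has left the bone
                }
            }

            if (softPhase) {
                // phase 2: each property may come from a different modality;
                // here opacity is taken from PET and color from MRI T2
                float opacity = texture1D(uTfOpacity, texture3D(uPET, p).r).a;
                vec3  color   = texture1D(uTfColor,   texture3D(uT2,  p).r).rgb;

                if (uSoftMode == 1) {
                    if (opacity > mipVal) {  // MIP: keep the strongest sample
                        mipVal   = opacity;
                        mipColor = color;
                    }
                } else {
                    dst.rgb += (1.0 - dst.a) * opacity * color;
                    dst.a   += (1.0 - dst.a) * opacity;
                }
            }
        }

        if (uSoftMode == 1) {
            // composite the MIP result behind the semi-transparent bone
            dst.rgb += (1.0 - dst.a) * mipVal * mipColor;
            dst.a   += (1.0 - dst.a) * mipVal;
        }
        return dst;
    }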

These options give the plugin enough flexibility to produce visualizations for all of the tasks.

MRI T1 proved to be the best modality for shading the brain. For brain localization (segmentation), the two-phase approach described above was combined with an opacity transfer function; PET and MRI T2 turned out to be good modalities for locating the brain in this way.
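
For illustration, the gradient computation used for shading could be sketched as follows: a central-differences gradient on the shading modality (MRI T1 in the results), fed into a simple diffuse term. The uniform names and the plain Lambert shading are assumptions made for this sketch, not necessarily what the plugin implements.

    // Central-differences gradient on the shading modality (MRI T1 here);
    // names are illustrative assumptions.
    uniform sampler3D uT1;
    uniform vec3 uVoxelSize;  // size of one voxel step along each axis
    uniform vec3 uLightDir;   // light direction in volume space

    vec3 gradientT1(vec3 p)
    {
        return vec3(
            texture3D(uT1, p + vec3(uVoxelSize.x, 0.0, 0.0)).r -
            texture3D(uT1, p - vec3(uVoxelSize.x, 0.0, 0.0)).r,
            texture3D(uT1, p + vec3(0.0, uVoxelSize.y, 0.0)).r -
            texture3D(uT1, p - vec3(0.0, uVoxelSize.y, 0.0)).r,
            texture3D(uT1, p + vec3(0.0, 0.0, uVoxelSize.z)).r -
            texture3D(uT1, p - vec3(0.0, 0.0, uVoxelSize.z)).r);
    }

    // diffuse shading term; near-zero gradients are left unshaded
    float shade(vec3 p)
    {
        vec3 g = gradientT1(p);
        if (length(g) < 1e-4) return 1.0;
        return max(dot(normalize(-g), normalize(uLightDir)), 0.0);
    }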

Results

Brain surface


DVR of the brain. Coloring: MRI T2; shading: MRI T1; opacity: PET. The skull is rendered with DVR, with its transfer function set to very low opacity values.


DVR of the brain with activity mapped from PET. The hue stored in the PET dataset was used for coloring; shading was based on MRI T1, and the opacity (brain localization by transfer function) was taken from MRI T2.

Brain activity

MIP of the brain with activity mapped from PET. Coloring was based on the PET data with a custom transfer function; shading was based on MRI T1, and the opacity (the values used by the MIP) was again taken from PET through a transfer function.

Cutaway view rendered with the same configuration as above.