INF359 CA2 - Multimodal Rendering

by Paolo Angelelli, Spring semester 2009

Overview

Coregistered datasets of a monkey head have been provided, acquired respectively with the CT, MRI and PET modalities. The task of the course assignment was to combine these datasets in a multimodal volume renderer in order to visualize the anatomy of soft tissue (MRI), the anatomy of hard tissue (CT), and brain metabolism (PET).
To achieve this, the CT dataset has been segmented to define the cerebral cavity. The volume renderer has then been developed to compose the CT data outside of the cerebral cavity with the MRI data inside it. This solution shows the brain isosurfaces more clearly, because MRI data not belonging to the brain is removed.
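
As an illustration of this per-sample modality selection, the following minimal C++ sketch picks the MRI value inside the segmented cavity and the CT value outside of it. All type names and data layouts here are hypothetical, since the actual renderer code is not reproduced in this report:

    #include <cstdint>
    #include <cstddef>

    // Hypothetical volume layout: the three datasets are assumed to be
    // coregistered and sampled on the same grid.
    struct Volume {
        const uint8_t* data;      // raw voxel values
        size_t dimX, dimY, dimZ;  // volume dimensions

        uint8_t at(size_t x, size_t y, size_t z) const {
            return data[(z * dimY + y) * dimX + x];
        }
    };

    // Pick the modality for one sample along the ray: MRI inside the
    // cerebral cavity (mask != 0), CT outside of it. The mask was
    // derived by segmenting the CT dataset.
    uint8_t sampleMultimodal(const Volume& ct, const Volume& mri,
                             const Volume& cavityMask,
                             size_t x, size_t y, size_t z)
    {
        return cavityMask.at(x, y, z) ? mri.at(x, y, z)
                                      : ct.at(x, y, z);
    }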

The PET data has then been used to highlight high-activity regions of the monkey brain in the MRI data, by means of luminance enhancement and of hue and opacity modulation.
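
The sketch below illustrates how such a PET-driven modulation could look: above an activity threshold, the MRI sample color is shifted toward warm hues, its luminance is boosted, and its opacity is increased. The threshold and scale factors are illustrative assumptions, not the values used in the project:

    #include <algorithm>

    struct RGBA { float r, g, b, a; };

    static float clamp01(float v) { return std::min(1.0f, std::max(0.0f, v)); }

    // Hypothetical PET-based enhancement of an MRI sample color;
    // 'pet' is the coregistered PET value normalized to [0,1].
    RGBA enhanceWithPET(RGBA c, float pet)
    {
        const float threshold = 0.6f;          // assumed "high activity" cutoff
        if (pet <= threshold) return c;        // leave low-activity tissue untouched
        float t = (pet - threshold) / (1.0f - threshold);
        float boost = 1.0f + 0.5f * t;         // luminance enhancement
        c.r = clamp01(c.r * boost + 0.6f * t); // hue shift toward warm colors
        c.g = clamp01(c.g * boost + 0.3f * t);
        c.b = clamp01(c.b * boost);
        c.a = clamp01(c.a + t);                // opacity increase
        return c;
    }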

To combine the visualization of the skull anatomy of the monkey with the brain without occluding the interesting regions in the MRI data, illustrative techniques such as illustrative transparency and contour rendering have been adopted.
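
Illustrative transparency is commonly implemented as a view-dependent opacity term that fades out samples whose surface normal faces the viewer, keeping the silhouette regions opaque. A minimal sketch under that assumption (the exact formula and exponent used in the project are not documented here):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // View-dependent (illustrative) transparency: where the normalized
    // gradient faces the viewer, the sample becomes nearly transparent;
    // silhouette regions keep their opacity. 'sharpness' controls the falloff.
    float illustrativeOpacity(float baseAlpha, const Vec3& normal,
                              const Vec3& viewDir, float sharpness)
    {
        float nv = std::fabs(dot(normal, viewDir)); // both assumed unit-length
        return baseAlpha * std::pow(1.0f - nv, sharpness);
    }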

Results

The brain is rendered from MRI data according to the segmentation mask derived from the CT volume.

a) grayscale rendering, b) rendering with a PET hue transfer function, and c) rendering of high-intensity PET regions in the MRI data.

The brain rendering is combined with skull anatomy from CT data.

d) CT data is composited with MRI data along the ray. Although high transparency is used in the CT transfer function, the skull rendering still occludes the brain.
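
The composition along the ray follows the standard front-to-back scheme; the sketch below shows one compositing step, with CT and MRI samples assumed to be classified by their own transfer functions before reaching this point:

    struct RGBA { float r, g, b, a; };

    // Standard front-to-back compositing of one classified sample
    // into the accumulated ray color.
    void compositeFrontToBack(RGBA& accum, const RGBA& sample)
    {
        float w = (1.0f - accum.a) * sample.a;  // remaining transparency
        accum.r += w * sample.r;
        accum.g += w * sample.g;
        accum.b += w * sample.b;
        accum.a += w;
    }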

e,f) Illustrative transparency is applied to the CT data (but not where no brain is present). The brain appears to float slightly within the skull, but the rendering is much crisper and clearer.

[figure panels a–i]

g,h,i) Contour rendering is added. It gives a better spatial perception of the structures of interest.
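
Contour rendering can be realized by darkening samples where the data gradient is nearly perpendicular to the viewing direction. A sketch of such a contour factor, with illustrative parameter values (the project's exact formulation is not reproduced here):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Contour factor: approaches 1 on silhouettes, where the gradient is
    // almost perpendicular to the view direction, and is 0 on front-facing
    // surfaces. Thresholding the gradient magnitude suppresses contours
    // in homogeneous regions.
    float contourFactor(const Vec3& gradient, const Vec3& viewDir,
                        float minGradMag, float width)
    {
        float mag = std::sqrt(dot(gradient, gradient));
        if (mag < minGradMag) return 0.0f;        // no surface here
        float nv = std::fabs(dot(gradient, viewDir)) / mag;
        return nv < width ? 1.0f - nv / width : 0.0f;
    }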

[figure panels j–l]

j,k,l) Through the adoption of clipping boxes, a deeper view of the hot regions of the brain in the PET data is now possible.
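
A clipping box can be realized as a simple axis-aligned test in the ray caster; the sketch below assumes material inside the box is cut away (whether the project clips the inside or the outside of the box is an assumption):

    struct Vec3 { float x, y, z; };

    // Axis-aligned clipping box: samples falling inside the box are
    // skipped during ray casting, cutting material away to expose
    // structures deeper in the volume.
    struct ClipBox {
        Vec3 boxMin, boxMax;

        bool clips(const Vec3& p) const {
            return p.x >= boxMin.x && p.x <= boxMax.x &&
                   p.y >= boxMin.y && p.y <= boxMax.y &&
                   p.z >= boxMin.z && p.z <= boxMax.z;
        }
    };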

[figure panels m–o]

m,n,o) Slices from the PET and MRI datasets are added to the multimodal volume rendering, and a thin slab of the brain is superimposed over the slices. This combines, in a single visualization, the raw data from the volume datasets with anatomy and brain activity information.
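
A thin slab can be realized by restricting the ray integration to samples within a small distance of the slice plane; a sketch of such a test follows, with hypothetical names and an illustrative half-thickness parameter:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // A sample belongs to the thin slab if its signed distance to the
    // slice plane (given by a point on the plane and its unit normal)
    // is within the slab's half-thickness; everything else is skipped,
    // so the slab can be superimposed over the 2D slices.
    bool inSlab(const Vec3& p, const Vec3& planePoint,
                const Vec3& planeNormal, float halfThickness)
    {
        float dist = (p.x - planePoint.x) * planeNormal.x
                   + (p.y - planePoint.y) * planeNormal.y
                   + (p.z - planePoint.z) * planeNormal.z;
        return std::fabs(dist) <= halfThickness;
    }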