I'm working on a volume data visualization program using Inventor.
The byte scaled data lies on a spherical grid (r, theta, phi). For
every radius r_i, I would like to create a texture which I then map
onto a SoSphere of radius r_i. By building up a scene consisting of a
large number of concentric spheres with suitable transparency, I hope
to be able to visualize three dimensional structures in the data.
It's possible to do volume visualization at interactive rates with
certain hardware, specifically RE, RE^2, and Impact. If you have
a Reality Engine, it's pretty important to fit everything into texture
memory. That's 4 Megs if you have RM4 raster managers and 16 Megs if
you have RM5s (try /usr/gfx/gfxinfo). There is a good example of this
on one of the SGI ftp sites; I don't remember the location offhand. The
basic idea is to bind a 3D texture (say 128x128x128), then render
a series of big flat polygons parallel to screen space from back to
front. Then blending works correctly, and since you use 3D texture
coordinates, you can use the texture matrix to interactively transform
the volume.
You may want to use this approach (hunt around SGI ftp sites for the
actual app) and sample your data to go from spherical coordinates to
cartesian coordinates. If you do the rendering in spherical coordinates
with textures on a series of concentric spheres, you have to be careful
to get the blending right. You should render from back to front, so you'll
probably want to split each sphere into hemispheres along a plane parallel
to screen space and render those in the correct order. If you use OpenGL,
display lists will get optimal speed when binding lots-o-textures.
-Dirk Van Gelder
Engineering Animation, Inc.