I've been looking at the ImageVision library for a while
and am pleased with its 2-dimensional image processing
capabilities. I'm wondering if anyone has attempted to
extend the library into three dimensions to perform volume
rendering, especially on large 3D data sets.
It seems set up to do it, given the concept of z slices
within the ilImage class and the built-in caching mechanism.
The major worry I have is this: if rays were cast through z
in the ilImage class, would there be enough cache misses, as
slices were accessed along the ray, to severely degrade
performance because of the necessary swapping?
Alternatively, you could create a class which resamples the
slices onto a plane with an arbitrary orientation, and
use classes like ilMaxImg, for MIP, to composite the slices
as the plane is moved through the data. This would
be similar to using texture mapping hardware for volume
rendering.
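As a rough illustration of the compositing step only (this is not the ilMaxImg interface, just a hand-rolled per-pixel maximum over already-resampled slices, with made-up types), the MIP accumulation over the moving plane would amount to:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for one resampled slice: a flat row-major
// buffer of intensities on the moving plane's pixel grid.
using Slice = std::vector<float>;

// Maximum-intensity projection: fold each slice into the accumulator
// by keeping the per-pixel maximum -- the operation a class like
// ilMaxImg would apply to its input images.
Slice mipComposite(const std::vector<Slice>& slices, std::size_t pixels) {
    Slice mip(pixels, 0.0f);
    for (const Slice& s : slices)
        for (std::size_t i = 0; i < pixels; ++i)
            mip[i] = std::max(mip[i], s[i]);
    return mip;
}
```

The appeal of this route is that each resampled plane is touched in slice order, so the cache sees a sequential sweep through the stack instead of the per-ray thrashing above.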
Also, has anyone attempted to combine ImageVision with
Inventor or Performer for surface rendering? i.e., take
the data from an ilImage, extract a surface, and hand it
to Inventor or Performer.
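For that handoff the usual extractor would be something like marching cubes; the sketch below is much cruder (hypothetical types and layout throughout, and no actual Inventor or Performer calls), just finding where the data crosses an iso-value between adjacent z slices to produce the raw vertex positions one would mesh and pass to the scene graph as coordinates:

```cpp
#include <vector>

struct Point { float x, y, z; };

// Hypothetical volume layout: dims X*Y*Z, row-major, so
// v(x,y,z) = data[(z*Y + y)*X + x].  Collect the points where the
// value crosses `iso` between slice z and z+1, linearly interpolating
// the crossing depth.
std::vector<Point> isoCrossings(const std::vector<float>& data,
                                int X, int Y, int Z, float iso) {
    std::vector<Point> pts;
    for (int z = 0; z + 1 < Z; ++z)
        for (int y = 0; y < Y; ++y)
            for (int x = 0; x < X; ++x) {
                float a = data[(z * Y + y) * X + x];
                float b = data[((z + 1) * Y + y) * X + x];
                if ((a < iso) != (b < iso)) {       // sign change => surface
                    float t = (iso - a) / (b - a);  // fraction toward z+1
                    pts.push_back({float(x), float(y), float(z) + t});
                }
            }
    return pts;
}
```

A real version would emit triangles rather than loose points, but the nice property is that the z-major loop again reads the ilImage data in slice order, so the extraction itself should be cache-friendly.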
Any other ideas or comments?