3D rendering combined with ImageVision

Post by scott smi » Sat, 23 Mar 1996 04:00:00



Hello,

I've been looking at the ImageVision library for a while
and am pleased with its 2-dimensional image processing
capabilities.  I'm wondering if anyone has attempted to
extend the library into three dimensions and perform volume
rendering, especially on large 3D data sets.

It seems set up to do it, with the concept of z slices
within the ilImage class and the built-in caching mechanism.
The major worry I have is:  if rays were cast through z in
the ilImage class, would there be enough cache misses, as
slices were accessed along the ray, to severely degrade
performance because of the necessary swapping?
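
To make the worry concrete, here is a minimal sketch in plain C++
(not the IL API; Volume, getVoxel and the four-slice cache are just
stand-ins for ilImage and its paging), with the rays simplified to
axis-aligned ones, of why image-order ray casting can thrash a small
slice cache while an object-order pass touches each slice only once:

#include <algorithm>
#include <cstddef>
#include <map>
#include <vector>

// Stand-in for a z-slice stack with a small page cache (NOT the IL API).
struct Volume {
    int xdim, ydim, zdim;
    std::map<int, std::vector<float> > resident;   // z -> slice data
    std::size_t sliceLoads;                        // counts cache misses

    Volume(int x, int y, int z) : xdim(x), ydim(y), zdim(z), sliceLoads(0) {}

    // Fetch one voxel; a slice that is not resident has to be "paged in",
    // evicting another slice once more than four are held.
    float getVoxel(int x, int y, int z) {
        if (resident.find(z) == resident.end()) {
            ++sliceLoads;
            resident[z] = std::vector<float>(xdim * ydim, 0.0f);
            if (resident.size() > 4) {
                std::map<int, std::vector<float> >::iterator victim =
                    resident.begin();
                if (victim->first == z) ++victim;  // keep the slice just loaded
                resident.erase(victim);
            }
        }
        return resident[z][y * xdim + x];
    }
};

// Image-order traversal: each (axis-aligned) ray walks the whole z stack,
// so with a small cache every ray can force most slices to be re-paged.
float rayMIP(Volume& v, int x, int y) {
    float best = 0.0f;
    for (int z = 0; z < v.zdim; ++z)
        best = std::max(best, v.getVoxel(x, y, z));
    return best;
}

// Object-order traversal: visit each slice once and fold it into every
// output pixel, so each slice is paged in exactly once per pass.
void sliceOrderMIP(Volume& v, std::vector<float>& mip) {
    mip.assign(v.xdim * v.ydim, 0.0f);
    for (int z = 0; z < v.zdim; ++z)
        for (int y = 0; y < v.ydim; ++y)
            for (int x = 0; x < v.xdim; ++x)
                mip[y * v.xdim + x] = std::max(mip[y * v.xdim + x],
                                               v.getVoxel(x, y, z));
}

With the toy four-slice cache, calling rayMIP for every output pixel
ends up paging a slice for almost every sample, while sliceOrderMIP
pages each slice exactly once per pass; that difference is essentially
what I'm worried about with the real ilImage cache.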

Alternatively, you could create a class that resamples the
slices onto a plane of arbitrary orientation and use classes
like ilMaxImg (for MIP) to composite the slices as the plane
is moved through the data.  This would be similar to using
texture-mapping hardware for volume rendering.
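
Here is a rough sketch of that plane-sweep idea, again in plain C++
rather than IL classes (Vec3, sampleNearest and the plane parameters
are made up for the illustration; the running pixelwise max is what
something like ilMaxImg would provide):

#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { Vec3 r = {a.x + b.x, a.y + b.y, a.z + b.z}; return r; }
static Vec3 scale(Vec3 a, float s) { Vec3 r = {a.x * s, a.y * s, a.z * s}; return r; }

// Nearest-neighbour lookup into an xdim*ydim*zdim volume stored
// slice-major; out-of-range samples return 0.
float sampleNearest(const std::vector<float>& vol,
                    int xdim, int ydim, int zdim, Vec3 p) {
    int x = (int)(p.x + 0.5f), y = (int)(p.y + 0.5f), z = (int)(p.z + 0.5f);
    if (x < 0 || y < 0 || z < 0 || x >= xdim || y >= ydim || z >= zdim)
        return 0.0f;
    return vol[(std::size_t)z * xdim * ydim + (std::size_t)y * xdim + x];
}

// Sweep a w*h sampling plane (origin o, in-plane axes u and v, stepping
// along the normal n) through the volume, keeping the running pixelwise
// maximum of the resampled planes.
std::vector<float> planeSweepMIP(const std::vector<float>& vol,
                                 int xdim, int ydim, int zdim,
                                 Vec3 o, Vec3 u, Vec3 v, Vec3 n,
                                 int w, int h, int steps) {
    std::vector<float> mip(w * h, 0.0f);
    for (int s = 0; s < steps; ++s) {
        Vec3 base = add(o, scale(n, (float)s));
        for (int j = 0; j < h; ++j)
            for (int i = 0; i < w; ++i) {
                Vec3 p = add(base, add(scale(u, (float)i), scale(v, (float)j)));
                float& dst = mip[j * w + i];
                dst = std::max(dst, sampleNearest(vol, xdim, ydim, zdim, p));
            }
    }
    return mip;
}

Replacing the max with an over-style blend would give ordinary
compositing instead of MIP.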

Also, has anyone attempted to combine ImageVision with
Inventor or Performer for surface rendering?  That is, take
the data from an ilImage, extract a surface, and hand it to
Inventor or Performer.
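
For the Inventor side of that hand-off I imagine something like the
sketch below.  It assumes the isosurface has already been extracted
from the ilImage data (e.g. by a marching-cubes pass, not shown) into
flat vertex and triangle-index arrays, and simply wraps them in an
SoIndexedFaceSet; buildSurfaceNode is just a made-up helper name:

#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoCoordinate3.h>
#include <Inventor/nodes/SoIndexedFaceSet.h>

SoSeparator *buildSurfaceNode(const SbVec3f *verts, int nVerts,
                              const int32_t *triIndices, int nTris)
{
    SoSeparator *root = new SoSeparator;
    root->ref();

    // One coordinate node holding every extracted vertex.
    SoCoordinate3 *coords = new SoCoordinate3;
    coords->point.setValues(0, nVerts, verts);
    root->addChild(coords);

    // Index each triangle into the coordinate list, terminating with -1
    // (SO_END_FACE_INDEX) as SoIndexedFaceSet expects.
    SoIndexedFaceSet *faces = new SoIndexedFaceSet;
    int32_t *idx = new int32_t[nTris * 4];
    for (int t = 0; t < nTris; ++t) {
        idx[t * 4 + 0] = triIndices[t * 3 + 0];
        idx[t * 4 + 1] = triIndices[t * 3 + 1];
        idx[t * 4 + 2] = triIndices[t * 3 + 2];
        idx[t * 4 + 3] = SO_END_FACE_INDEX;
    }
    faces->coordIndex.setValues(0, nTris * 4, idx);
    delete [] idx;
    root->addChild(faces);

    root->unrefNoDelete();   // hand ownership to the caller's scene graph
    return root;
}

The returned separator could then be added under whatever Inventor
scene graph the application is already using.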

Any other ideas, or comments?

TIA, Scott



3D rendering combined with ImageVision

Post by Stefan Dachwitz » Tue, 26 Mar 1996 04:00:00



For those interested in an easy user interface to (a lot of) the
functionality ImageVision offers, take a look at our Ilab page:

http://www.cevis.uni-bremen.de/ilab/

It also includes a very nice volume renderer with special support
for RealityEngine and Impact graphics (included in the
still-to-be-released ImageVision 3.0!)

Nice stuff, we think...

Have a nice day...

--
Stefan Dachwitz



Combining Macintosh machines to render

Hi all,

Is there any way to combine a number of G4 machines (about four) to
render a single frame?  It's taking an age at the moment because
we're using radiosity and the lighting is very complex.  Lightwave
seems to be able to distribute a render across machines for multiple
frames, but not for a single frame.

Any help, ideas, or pointers to an app would be great.

Thanks,

Paul
