>A simple method for fast ray tracing has occurred to me,
>and I haven't seen it in the literature, particularly
>Procedural Elements for Computer Graphics.
>It is a way to trivially reject rays that don't
>intersect with objects. It works for primary
>rays only (from the eye). It is:
>Do once for each object:
> compute its minimum 3D bounding box. Project
> the box's 8 corners onto pixel space. Surround the
> cluster of 8 pixel points with a minimum 2D bounding box.
> (a tighter bounding volume could be used).
>To test a ray against an object, check if the pixel
>through which the ray goes is in the object's 2D box.
>If not, reject it.
>It sure beats line-sphere minimum distance calculation.
>Surely this has been tried, hasn't it?
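To make the quoted scheme concrete, here is a quick Python sketch (entirely my own illustration; the pinhole projection and all names are made up, only the idea comes from the post above): project the object's 3D bounding box to the screen once, then trivially reject primary rays by pixel location.

```python
def project(p, width=512, height=512, focal=1.0):
    """Hypothetical pinhole projection: camera at the origin looking down +z."""
    x, y, z = p
    # Perspective divide, then map [-1, 1] to pixel coordinates.
    sx = (x * focal / z * 0.5 + 0.5) * width
    sy = (y * focal / z * 0.5 + 0.5) * height
    return sx, sy

def screen_box(bbox3d):
    """2D bounding box of the 8 projected corners of a 3D box.
    bbox3d is a pair of opposite corners ((x0, y0, z0), (x1, y1, z1))."""
    (x0, y0, z0), (x1, y1, z1) = bbox3d
    corners = [(x, y, z) for x in (x0, x1)
                         for y in (y0, y1)
                         for z in (z0, z1)]
    pts = [project(c) for c in corners]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)

def may_hit(pixel, box2d):
    """Trivial reject: a primary ray through `pixel` can only hit the
    object if the pixel lies inside the object's 2D screen box."""
    px, py = pixel
    xmin, ymin, xmax, ymax = box2d
    return xmin <= px <= xmax and ymin <= py <= ymax
```

The `screen_box` call is done once per object; the per-ray cost is then just the two range comparisons in `may_hit`, which is indeed far cheaper than a ray-sphere distance test.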
It's true, this really hasn't appeared in the literature, per se. However, it
has been done.
The idea of the item buffer has been presented by Hank Weghorst, Gary Hooper,
and Donald P. Greenberg in "Improved Computational Methods for Ray Tracing",
ACM TOG, Vol. 3, No. 1, January 1984, pages 52-69. Here they cast polygons
onto a z-buffer, storing the ID of the closest item for each pixel. During
ray tracing the z-buffer is then consulted to see which items are probably hit
by the eye ray. These are tested, and if one is hit you're done; if none
are hit, a standard ray trace is performed. Incidentally, this is the
method Wavefront uses for eye rays when they perform ray tracing. It's
fairly useful, as Cornell's research found that there are usually more eye
rays than reflection and refraction rays combined. There's still all those
shadow rays, which was why I created the light buffer (but that's another
story...see IEEE CG&A September 1986 if you're interested).
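A minimal sketch of the item-buffer lookup at eye-ray time (my own illustration of the idea, not code from the paper; it stores one ID per pixel for simplicity, and `intersect` and `full_trace` are assumed helper functions that return a hit record or None):

```python
def trace_eye_ray(pixel, item_buffer, intersect, full_trace):
    """item_buffer maps pixel -> ID of the item the preprocess found
    closest at that pixel. Test that candidate first; fall back to a
    standard trace if it misses (or if no candidate was recorded)."""
    item_id = item_buffer.get(pixel)
    if item_id is not None:
        hit = intersect(item_id, pixel)
        if hit is not None:
            return hit            # the buffered item was indeed hit
    return full_trace(pixel)      # miss: do a standard ray trace
```

The common case (the buffered item is hit) costs a single intersection test, which is where the savings for eye rays come from.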
In the paper the authors do not describe how to insert non-polygonal objects
into the buffer. In Weghorst's (and I assume Hooper's, too) thesis he
describes the process, which is essentially casting the bounding box onto
the screen and getting its x and y extents, then shooting rays within this
extent at the object as a pre-process. This is the idea you outlined.
However, their scheme avoids all extent testing by doing the work as a
per-object (instead of per-ray) preprocess: rather than testing each ray
against the extents, they simply loop over each object's extent and shoot
a ray at the object for each pixel within it.
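Sketched in Python (again my own illustration of the per-object preprocess, not their code; `screen_extent` and `ray_hit_depth` are assumed helpers for the projection and intersection steps):

```python
def build_item_buffer(objects, screen_extent, ray_hit_depth):
    """Per-object preprocess: for each object, loop over its own
    screen-space extent and shoot a ray per pixel, keeping the ID of
    the closest object at each pixel. No per-ray extent tests needed.

    screen_extent(obj) -> (xmin, ymin, xmax, ymax) in integer pixels.
    ray_hit_depth(obj, pixel) -> depth of hit, or None on a miss."""
    depth = {}   # pixel -> closest depth found so far
    ids = {}     # pixel -> ID of the closest item
    for obj_id, obj in enumerate(objects):
        xmin, ymin, xmax, ymax = screen_extent(obj)
        for py in range(ymin, ymax + 1):
            for px in range(xmin, xmax + 1):
                t = ray_hit_depth(obj, (px, py))
                if t is not None and t < depth.get((px, py), float("inf")):
                    depth[(px, py)] = t
                    ids[(px, py)] = obj_id
    return ids
```

Each pixel outside every object's extent is never visited at all, which is exactly the work the per-ray extent test would otherwise have to reject.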
All for now,
Eric Haines (not John Saponara)
NOTE: this account is going away soon. My stable email address is: