Ray tracing: ray-object intersection

A simple method for fast ray tracing has occurred to me,
and I haven't seen it in the literature, particularly
Procedural Elements for Computer Graphics.
It is a way to trivially reject rays that don't
intersect with objects. It works for primary
rays only (from the eye).  It is:

Do once for each object:
   compute its minimum 3D bounding box. Project
   the box's 8 corners onto pixel space. Surround the
   cluster of 8 pixel points with a minimum 2D bounding box.
   (a tighter bounding volume could be used).

To test a ray against an object, check if the pixel
through which the ray goes is in the object's 2D box.
If not, reject it.

It sure beats line-sphere minimum distance calculation.
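In rough Python, the preprocess and per-ray test described above might look
like this (the pinhole projection, focal length, and image center here are
made-up placeholders, not part of the original description):

```python
def project_to_pixel(p, focal=256.0, cx=256.0, cy=256.0):
    """Pinhole projection of a camera-space point (+z forward) to pixel coords."""
    x, y, z = p
    return (cx + focal * x / z, cy - focal * y / z)

def screen_bbox(corners):
    """2D bounding box of the 8 projected corners of an object's 3D bounding box.
    Done once per object as a preprocess."""
    pts = [project_to_pixel(c) for c in corners]
    xs = [px for px, py in pts]
    ys = [py for px, py in pts]
    return (min(xs), min(ys), max(xs), max(ys))

def primary_ray_may_hit(pixel, box2d):
    """Trivial-reject test for a primary ray through `pixel`: cheap point-in-box."""
    px, py = pixel
    x0, y0, x1, y1 = box2d
    return x0 <= px <= x1 and y0 <= py <= y1
```

A ray that fails `primary_ray_may_hit` skips the expensive intersection
test entirely; one that passes still has to be intersected properly.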

Surely this has been tried, hasn't it?

--
The ultimate realization: everything is everything. (I should know.)
Jack Ritter, S/W Eng. Versatec, 2805 Bowers Av, Santa Clara, CA 95051
Mail Stop 1-7.  (408)982-4332, or (408)988-2800 X 5743
UUCP: {pyramid,mips,vsi1,arisia}!versatc!ritter

Ray tracing: ray-object intersection

>A simple method for fast ray tracing has occurred to me,
>and I haven't seen it in the literature, particularly
>Procedural Elements for Computer Graphics.
>It is a way to trivially reject rays that don't
>intersect with objects. It works for primary
>rays only (from the eye).  It is:

^^^^^^^^^^^^^^^^^^^^^^^^

Unfortunately, that's the problem.  You also need to intersect rays from
each of the light sources, which adds a lot of storage with your approach.
And in ray tracing a "typical" scene, you are going to have some transmission
and a lot of reflection for secondary rays which leads to the "primary rays
from the eye" being only a minor part of the overall calculation.  Trying to
do some decent diffuse lighting leads to even more secondary rays....

The nice thing about a sphere is that it works uniformly for all rays, and
is fairly simple.
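For reference, the line-sphere test being compared against is just a
quadratic in the ray parameter t; a sketch (my own code, not any poster's):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive t where origin + t*direction hits the
    sphere, or None.  `direction` need not be normalized.  Works the same
    for eye rays, shadow rays, and reflection/refraction rays."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (dx*lx + dy*ly + dz*lz)
    c = lx*lx + ly*ly + lz*lz - radius*radius
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return None                       # ray misses the sphere entirely
    s = math.sqrt(disc)
    for t in ((-b - s) / (2.0*a), (-b + s) / (2.0*a)):
        if t > 1e-9:
            return t                      # nearest hit in front of the origin
    return None
```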

(If I got anything wrong, flame the net, not me. :-)

--Charles

Ray tracing: ray-object intersection

[lots of stuff deleted]

Quote:

>Surely this has been tried, hasn't it?

Yes.  Lots of times.

Ray tracing: ray-object intersection

>A simple method for fast ray tracing has occurred to me,
>and I haven't seen it in the literature, particularly
>Procedural Elements for Computer Graphics.
>It is a way to trivially reject rays that don't
>intersect with objects. It works for primary
>rays only (from the eye).  It is:

>Do once for each object:
>   compute its minimum 3D bounding box. Project
>   the box's 8 corners onto pixel space. Surround the
>   cluster of 8 pixel points with a minimum 2D bounding box.
>   (a tighter bounding volume could be used).

>To test a ray against an object, check if the pixel
>through which the ray goes is in the object's 2D box.
>If not, reject it.

>It sure beats line-sphere minimum distance calculation.

>Surely this has been tried, hasn't it?

It's true, this really hasn't appeared in the literature, per se.  However, it
has been done.

The idea of the item buffer has been presented by Hank Weghorst, Gary Hooper,
and Donald P. Greenberg in "Improved Computational Methods for Ray Tracing",
ACM TOG, Vol. 3, No. 1, January 1984, pages 52-69.  Here they cast polygons
onto a z-buffer, storing the ID of the closest item for each pixel.  During
ray tracing the z-buffer is then sampled for which items are probably hit
by the eye ray.  These are checked, and if one is hit you're done.  If none
are hit then a standard ray trace is performed.  Incidentally, this is the
method Wavefront uses for eye rays when they perform ray tracing.  It's
fairly useful, as Cornell's research found that there are usually more eye
rays than reflection and refraction rays combined.  There's still all those
shadow rays, which was why I created the light buffer (but that's another
story...see IEEE CG&A September 1986 if you're interested).
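A sketch of how the item-buffer lookup might work at eye-ray time; the
`intersect` and `full_trace` callbacks are hypothetical stand-ins for
whatever the renderer uses, not the paper's actual interface:

```python
def eye_ray_shade(pixel, item_buffer, intersect, full_trace):
    """Try the item recorded for this pixel first; fall back to a full
    ray trace only when no item was recorded or the candidate misses.

    item_buffer : dict mapping (px, py) -> item id of the closest item
                  rasterized at that pixel during the preprocess
    intersect   : intersect(item_id, pixel) -> hit record or None
    full_trace  : full_trace(pixel) -> hit record (the slow general path)
    """
    item_id = item_buffer.get(pixel)
    if item_id is not None:
        hit = intersect(item_id, pixel)
        if hit is not None:
            return hit                    # candidate confirmed; done
    return full_trace(pixel)              # buffer was wrong or empty here
```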

In the paper the authors do not describe how to insert non-polygonal objects
into the buffer.  In Weghorst's (and I assume Hooper's, too) thesis he
describes the process, which is essentially casting the bounding box onto
the screen and getting its x and y extents, then shooting rays within this
extent at the object as a pre-process.  This is the idea you outlined.
However, theirs avoids testing the extents at ray time by doing the work as
a per-object (instead of per-ray) preprocess: for each object they simply
loop over the pixels in its extent and shoot an eye ray at the object for
each one.
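As a rough sketch of that per-object preprocess (the names and the
depth-compare convention are my own, not Weghorst's):

```python
def insert_object(obj_id, extent, hit_depth, item_buffer):
    """Per-object preprocess: loop over the object's projected pixel extent
    and shoot an eye ray at the object for each pixel, keeping the closest
    item per pixel in the item buffer.

    extent      : (x0, y0, x1, y1) inclusive pixel bounds of the object
    hit_depth   : hit_depth(px, py) -> eye-ray hit distance, or None on miss
    item_buffer : dict mapping (px, py) -> (depth, obj_id)
    """
    x0, y0, x1, y1 = extent
    for py in range(y0, y1 + 1):
        for px in range(x0, x1 + 1):
            t = hit_depth(px, py)
            if t is None:
                continue                          # object not hit at this pixel
            prev = item_buffer.get((px, py))
            if prev is None or t < prev[0]:
                item_buffer[(px, py)] = (t, obj_id)   # closest item wins
```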

All for now,

Eric Haines (not John Saponara)

NOTE: this account is going away soon.  My stable email address is:

I'm certain this is a FAQ, but I can't find a comprehensive reference on
this subject in any of the standard raytracing websites.  I am working on a
ray tracer, and would like to add more primitives.  However, in order to do
that, I need to be able to perform intersection and normal calculations on
the primitives.  Are there any websites or public documents that have these
kinds of equations or at least a summary of the methods (I can work out the
algorithm myself) for a wide variety of primitives (listed below)?  I am of
course looking for:
1) Calculation of the intersection point of a line (ray) with a primitive
and
2) Calculation of the normal at a given point on a primitive

I am working on the following primitives for this ray tracer:
plane/polygon (implemented)
sphere (implemented)
cube/rectangular parallelepiped (implemented)
cylinder/conic (I think I have this one worked out)
general quadrics (the intersections are easy, but I haven't worked out
normals for all)
b-spline/quadric patches (haven't even started yet)
torus (haven't started yet)
heightfield (there have to be optimizations over using multiple
polygons)
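On the quadric normals specifically: the normal of any implicit surface
f(x,y,z) = 0 is the gradient of f, so for the general quadric it comes out
in closed form.  A sketch (the coefficient ordering here is arbitrary, just
one common convention):

```python
def quadric_normal(coeffs, p):
    """Unnormalized surface normal of the general quadric
        A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + I z + J = 0
    at a point p assumed to lie on the surface: the gradient of the
    implicit function, (df/dx, df/dy, df/dz)."""
    A, B, C, D, E, F, G, H, I, J = coeffs
    x, y, z = p
    return (2*A*x + D*y + E*z + G,
            2*B*y + D*x + F*z + H,
            2*C*z + E*x + F*y + I)
```

For the unit sphere (A = B = C = 1, J = -1, everything else 0) this reduces
to (2x, 2y, 2z), i.e. the familiar radial normal up to scale.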

Since I don't read this newsgroup daily, please cc: any replies to