> I'm in the process of writing my own raytracer, and have run up against a
> lack of easily google-able algorithms for ray-object intersection. I got the
> bounding-box, sphere and polygon algorithms from a chapter in Glassner's
> book, but there must be a reference for bilinear/bicubic patches, NURBS,
> subdivision surfaces, maybe even those lightweight point and curve
> primitives that RenderMan has. What's the best reference for this?
For traditional tensor-product patches (NURBS and all
subsets/equivalents), I currently favor the Bezier clipping approach
of Nishita, Sederberg, et al. presented at SIGGRAPH 90. A brief
description of the algorithm is on the Web at
That said, the general consensus is that tessellation is more
efficient: simply tile the surface with polygons and intersect
those. This gets sticky with trimmed surfaces, as tessellation gets
harder to do right, and you can run into problems where the ray hits the
front side of a polygon, but the interpolated normal faces away from
the ray.
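The tessellate-and-intersect idea reduces everything to ray-triangle
tests once the patch is tiled. A minimal sketch of such a test, using
the well-known Moller-Trumbore algorithm (function and variable names
here are illustrative, not from any particular renderer):

```python
# Ray-triangle intersection (Moller-Trumbore). Returns the distance t
# along the ray to the hit point, or None on a miss.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:           # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direc, q) * inv      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None
```

The sign of `det` also tells you whether the ray struck the front or
back side of the triangle, which is exactly where the interpolated-normal
problem above shows up.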
> Also, I'm currently using a linked-list for my geometric database. This
> can't possibly be a sane way to implement it; I've been told that I should
> use a BSP-tree, but does this mean I have to split my primitives? (I'd
> prefer not to have yet another operation that needs to be implemented for
> all primitives.) If not a BSP-tree, what do people generally use for their
> geometric databases in a modern raytracer, and where can I find a thorough
> description of it?
People use all sorts of space subdivision schemes for
acceleration. You don't have to split primitives to do this; just
include the primitive in every region that it sticks into. What I know
of these schemes is:
1. Hierarchical bounding volumes. Bound each primitive with some
simple primitive (bounding box or sphere), then lump these together
and compute bounding volumes. Repeat, getting a tree structure where
(we hope) most of the model can be ruled out early.
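A minimal sketch of scheme 1, assuming axis-aligned bounding boxes as
the simple volume (all names here are illustrative): leaves hold one
primitive each, inner nodes hold the union of their children's boxes,
and a ray only descends into boxes it actually hits.

```python
# Hierarchical bounding volumes over axis-aligned boxes.
# A box is a pair (lo, hi) of 3-tuples.

def box_union(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return (tuple(map(min, alo, blo)), tuple(map(max, ahi, bhi)))

def ray_hits_box(orig, direc, box, eps=1e-12):
    """Standard slab test against a forward ray (t >= 0)."""
    lo, hi = box
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(orig, direc, lo, hi):
        if abs(d) < eps:                 # ray parallel to this slab
            if o < l or o > h:
                return False
        else:
            t0, t1 = (l - o) / d, (h - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            tmin, tmax = max(tmin, t0), min(tmax, t1)
            if tmin > tmax:
                return False
    return True

class Node:
    def __init__(self, box, children=(), prim=None):
        self.box, self.children, self.prim = box, children, prim

def build(leaves):
    """Lump nodes together pairwise until one root remains."""
    nodes = leaves[:]
    while len(nodes) > 1:
        a, b = nodes.pop(), nodes.pop()
        nodes.insert(0, Node(box_union(a.box, b.box), (a, b)))
    return nodes[0]

def candidates(node, orig, direc):
    """Collect primitives whose boxes the ray enters."""
    if not ray_hits_box(orig, direc, node.box):
        return []
    if node.prim is not None:
        return [node.prim]
    out = []
    for c in node.children:
        out += candidates(c, orig, direc)
    return out
```

A real builder would pair spatially nearby nodes rather than lumping
arbitrarily, but the pruning behavior is the same: when the root test
fails, the whole model is ruled out in one box test.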
2. Regular grid subdivision. Find the bounding box of the entire
model, then chop it up into a predetermined number of cells. This
is a very easy structure to step through, but it will be
inefficient if many of your primitives are lumped close
together. Many use a couple of levels of this: divide once, then
take the densest cells and chop each one into a predetermined
number of cells.
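A sketch of one level of scheme 2, again with illustrative names: the
model's bounding box is chopped into an n-by-n-by-n grid, and each
primitive is filed into every cell its own bounding box overlaps, so
no splitting is needed.

```python
# One-level regular grid. Each primitive is (name, lo, hi), with lo/hi
# its axis-aligned bounds; it is stored in every cell it sticks into.

def build_grid(model_lo, model_hi, n, prims):
    size = [(h - l) / n for l, h in zip(model_lo, model_hi)]
    cells = {}
    for name, lo, hi in prims:
        # Range of cell indices the primitive's box touches, per axis.
        i0 = [max(0, int((lo[a] - model_lo[a]) / size[a])) for a in range(3)]
        i1 = [min(n - 1, int((hi[a] - model_lo[a]) / size[a])) for a in range(3)]
        for x in range(i0[0], i1[0] + 1):
            for y in range(i0[1], i1[1] + 1):
                for z in range(i0[2], i1[2] + 1):
                    cells.setdefault((x, y, z), []).append(name)
    return cells
```

Stepping a ray through the resulting cells in order (a 3D DDA) is the
easy traversal the post refers to; the two-level variant simply calls
the same routine again inside each dense cell.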
3. BSP, kd-tree, or octree. This takes the whole volume of the model
and chops it in a predetermined fashion. For an octree, just divide
in half along each axis, getting 8 child cells. Sort the primitives
into each of these and keep subdividing adaptively until you either
have only a small number of primitives in each cell or you run out
of memory. The kd-tree is similar, but splits along one axis at a
time, trying to select a plane that divides the cell evenly to keep
the tree balanced. I think BSP-tree methods are similar.
4. 5D ray classification. This does a similar adaptive subdivision,
but in 5 dimensions (3 Cartesian coordinates in space, 2 for ray
direction). But there's an important difference: it subdivides in a
lazy fashion, so there's no preprocessing step and unused parts of
the space (which is most of the 5D space) never get subdivided. It
was presented by Jim Arvo and Dave Kirk at SIGGRAPH 87.
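The lazy-evaluation idea is the heart of scheme 4, and it can be
sketched independently of the rest of the Arvo-Kirk machinery (this is
a loose illustration, not their algorithm: the real method builds a
hierarchy of 5D hypercubes and candidate lists via beam tests): map
each ray to a 5D cell key, and compute a cell's candidate list only
the first time some ray lands in it.

```python
# Lazy 5D ray classification, heavily simplified: quantize origin (3D)
# and direction (2 angles) into a flat cell key, and build candidate
# lists on demand so untouched parts of the 5D space cost nothing.

import math

def ray_cell_key(orig, direc, n=8):
    """Quantize an origin in the unit cube into an n^3 grid, and the
    direction into n^2 (azimuth, elevation) bins."""
    ox, oy, oz = (min(n - 1, max(0, int(c * n))) for c in orig)
    dx, dy, dz = direc
    az = math.atan2(dy, dx)                  # in [-pi, pi]
    el = math.atan2(dz, math.hypot(dx, dy))  # in [-pi/2, pi/2]
    ia = min(n - 1, int((az + math.pi) / (2 * math.pi) * n))
    ie = min(n - 1, int((el + math.pi / 2) / math.pi * n))
    return (ox, oy, oz, ia, ie)

class LazyClassifier:
    def __init__(self, classify_cell):
        self.classify_cell = classify_cell  # cell key -> candidate list
        self.cells = {}                     # populated on demand only

    def candidates(self, orig, direc):
        key = ray_cell_key(orig, direc)
        if key not in self.cells:           # subdivide lazily
            self.cells[key] = self.classify_cell(key)
        return self.cells[key]
```

Two nearby rays with similar directions share a key, so the (expensive)
per-cell classification runs once and is amortized over all of them.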
Those are all I can think of; you probably want to browse through old
issues of the Ray Tracing News, available at
-Stephen H. Westin
Any information or opinions in this message are mine: they do not
represent the position of Cornell University or any of its sponsors.