Hi, sorry to add this later, but it struck me just now...
This is about finding the visibility sets given the viewpoint and
position of the camera.
The idea is to exploit spatial coherence...
1) Take the scene and bound it in a bounding sphere.
2) Stand on the surface of a concentric sphere of infinite radius.
3) Take any random direction and find all the visible triangles.
4) Move a step on the sphere.
5) Use the previously calculated set of triangles to trivially reject
all the other triangles if possible.
6) If no triangle in the hidden set becomes visible, we move on.
7) Otherwise, recalculate the visible set again; perhaps some of the
previous work can be reused here.
8) Note that moving in the direction of view will only reduce the
number of visible triangles from the precalculated set, so we can
reuse the set to do depth-wise visibility sector determination.
9) It is my hope that the number of visibility sectors formed,
although it looks infinite, is actually much smaller.
This is where the spatial coherence assumption is used.
These steps are done at preprocessing time. We can then move around
at runtime and use the stored visible-set information to render.
10) There may be methods to delta-encode the visibility information
of each sector to reduce memory requirements.
11) If some error in the scene is allowed, then we need not make a
new sector every time a new triangle becomes visible, but only when
the projected areas of the newly visible triangles sum to more than a
user-defined tolerance.
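To make the idea concrete, here is a rough Python sketch of steps 1-9 and 11 under some simplifying assumptions: `visible_from` is a hypothetical brute-force visibility query supplied by the caller, the walk on the sphere is approximated by discrete direction samples, and the projected-area tolerance of point 11 is crudely stubbed as a count of changed triangles rather than actual projected area.

```python
import math

def sample_directions(n):
    """Roughly uniform direction samples on the unit sphere (golden spiral)."""
    dirs = []
    golden = math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(max(0.0, 1.0 - y * y))
        theta = golden * i
        dirs.append((r * math.cos(theta), y, r * math.sin(theta)))
    return dirs

def build_sectors(triangles, visible_from, n_samples=512, tolerance=0):
    """Walk sample directions on the bounding sphere; start a new sector only
    when the visible set changes by more than `tolerance` triangles.

    visible_from(triangles, direction) is a caller-supplied (hypothetical)
    brute-force query returning the triangles visible from that direction.
    Returns a list of (representative_direction, frozenset_of_triangles).
    """
    sectors = []
    current = None
    for d in sample_directions(n_samples):
        vis = frozenset(visible_from(triangles, d))   # step 3: full query
        # steps 5-7 / 11: keep the previous set unless it changed enough
        if current is None or len(vis ^ current) > tolerance:
            sectors.append((d, vis))
            current = vis
    return sectors
```

For example, with a toy query that makes even-numbered triangles visible from the upper hemisphere and odd-numbered ones from the lower, 64 sample directions collapse into just two sectors, which is the spatial-coherence bet of point 9 in miniature.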
These are thoughts that struck me, and in case people have already
worked on similar themes, please do tell me so that I can refer to
their work. Note that frustum culling and backface removal are
inbuilt in this scheme, so no special methods need to be employed.
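On point 10, here is a minimal sketch of one possible delta encoding, assuming (my assumption, not stated above) that each sector's visibility information is kept as an integer bitmask over triangle ids, with bit i set meaning triangle i is visible: store the first sector's mask in full, then for every later sector only the XOR against the previous one, which is mostly zero bits when adjacent sectors are similar and therefore compresses well.

```python
def delta_encode(masks):
    """masks: one visibility bitmask (int) per sector.
    Keep the first mask fully; every later entry is the XOR with its
    predecessor, so only the triangles that changed state have set bits."""
    if not masks:
        return []
    out = [masks[0]]
    for prev, cur in zip(masks, masks[1:]):
        out.append(prev ^ cur)
    return out

def delta_decode(deltas):
    """Inverse of delta_encode: XOR a running mask with each delta."""
    if not deltas:
        return []
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] ^ d)
    return out
```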