looking for a reference to the biggest model ever WITH EACH TRIANGLE DESCRIBED INDIVIDUALLY IN MEMORY

Post by petit gromi » Fri, 10 Jan 2003 06:08:09



Thanks to all the people who answered my previous message.
But I have to be more specific: I'm looking for a model to render
without any tricks.
I want all the triangles described individually in memory, without
sharing of identical objects.
My goal is to test a parallel renderer that is being developed in my lab.
So, I need to know: how many triangles does a really big model have?
Is our renderer able to raytrace a model that isn't raytraceable
with other renderers?
Additionally, if one of you could give me such a big model,
it would be really helpful (under NDA if necessary).

Thanks for your time,
Chris

 
 
 

looking for a reference to the biggest model ever WITH EACH TRIANGLE DESCRIBED INDIVIDUALLY IN MEMORY

Post by Mike Williams » Fri, 10 Jan 2003 14:17:03



Quote:>Thanks to all the people who answered my previous message.
>But I have to be more specific: I'm looking for a model to render
>without any tricks.
>I want all the triangles described individually in memory, without
>sharing of identical objects.
>My goal is to test a parallel renderer that is being developed in my lab.
>So, I need to know: how many triangles does a really big model have?
>Is our renderer able to raytrace a model that isn't raytraceable
>with other renderers?
>Additionally, if one of you could give me such a big model,
>it would be really helpful (under NDA if necessary).

I could, theoretically, convert my 20 billion polygon scene into one
which has the polygons stored separately. One slight problem is that the
file size is going to be a little on the large side - I estimate about
1.7 Terabytes in Wavefront OBJ format. The polygons in the scene are
not all triangles, by the way - if your new renderer can only do
triangles, then you'd have to triangulate it first.

[I looked at a typical OBJ file that had 56325 polygons. It was 4745 KB,
which works out to just over 86 bytes per polygon, so a 20 billion polygon
file would be about 1.7 trillion bytes.]
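
For anyone who wants to redo that arithmetic, here is the same estimate
as a few lines of C++ (the figures are just the OBJ numbers quoted above):

#include <cstdio>

int main() {
    const double bytesPerPoly = 4745.0 * 1024.0 / 56325.0;  /* ~86 bytes/polygon */
    const double sceneBytes   = 20e9 * bytesPerPoly;         /* 20 billion polygons */
    std::printf("%.1f bytes/polygon -> about %.2f trillion bytes\n",
                bytesPerPoly, sceneBytes / 1e12);
    return 0;
}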

I could perhaps produce a small subset of "The Corporation" scene with
just enough polygons to fill a CD. I reckon that would be about 6
million polygons.

--
Mike Williams
Gentleman of Leisure

 
 
 

looking for a reference to the biggest model ever WITH EACH TRIANGLE DESCRIBED INDIVIDUALLY IN MEMORY

Post by Mark VandeWettering » Sat, 11 Jan 2003 14:41:38



> Thanks to all the people who answered my previous message.  But I have
> to be more specific: I'm looking for a model to render without any
> tricks.  I want all the triangles described individually in memory,
> without sharing of identical objects.  My goal is to test a parallel
> renderer that is being developed in my lab.  So, I need to know: how
> many triangles does a really big model have?  Is our renderer able to
> raytrace a model that isn't raytraceable with other renderers?
> Additionally, if one of you could give me such a big model, it would be
> really helpful (under NDA if necessary).

It really is mostly about memory.  If your model fits in the CPU's cache,
it is small.  If it fits in main memory, you might think it could be called
big.  If it is many times the size of main memory, then it should rightly be
called "big".  

If a triangle is specified by three three-tuples of single-precision
floating point, each triangle is 36 bytes.  That means you get about
28 million of them to the gigabyte.  Most vertices of course will be
shared in dense triangle meshes, so you'll probably get a few more per
gigabyte, but in any realistic renderer you'll have color and texture
coordinate overheads per vertex as well, so perhaps it isn't a bad
ballpark.  
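
To make the arithmetic concrete, a minimal C++ illustration of that
layout (three single-precision 3-tuples per triangle, no vertex sharing):

#include <cstdio>

struct Vec3 { float x, y, z; };        /* 12 bytes */
struct Tri  { Vec3 v0, v1, v2; };      /* 36 bytes, no sharing */

int main() {
    std::printf("sizeof(Tri) = %zu bytes\n", sizeof(Tri));   /* 36 */
    std::printf("~%.0f million triangles per gigabyte\n",
                1e9 / sizeof(Tri) / 1e6);                     /* ~28 */
    return 0;
}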
        Mark


 
 
 

looking for a reference to the biggest model ever WITH EACH TRIANGLE DESCRIBED INDIVIDUALLY IN MEMORY

Post by Matt Pharr » Sun, 12 Jan 2003 06:26:24




>> Thanks to all the people who answered my previous message.  But I have
>> to be more specific: I'm looking for a model to render without any
>> tricks.  I want all the triangles described individually in memory,
>> without sharing of identical objects.  My goal is to test a parallel
>> renderer that is being developed in my lab.  So, I need to know: how
>> many triangles does a really big model have?  Is our renderer able to
>> raytrace a model that isn't raytraceable with other renderers?
>> Additionally, if one of you could give me such a big model, it would be
>> really helpful (under NDA if necessary).

> It really is mostly about memory.  If your model fits in the CPU's cache,
> it is small.  If it fits in main memory, you might think it could be called
> big.  If it is many times the size of main memory, then it should rightly be
> called "big".  

> If a triangle is specified by three three-tuples of single-precision
> floating point, each triangle is 36 bytes.  That means you get about
> 28 million of them to the gigabyte.  Most vertices of course will be
> shared in dense triangle meshes, so you'll probably get a few more per
> gigabyte, but in any realistic renderer you'll have color and texture
> coordinate overheads per vertex as well, so perhaps it isn't a bad
> ballpark.  

I spent a fair amount of time some years ago trying to optimize a
ray-tracer for minimal memory use.  Using all the tricks I was able to come
up with, I was able to get it down to a total of 40 bytes per triangle.
This includes *all* overhead, including that triangle's share of ray
acceleration data structure size, etc--in other words:

total memory use of ray-tracer / total number of triangles ~= 40.

The key things to do to make this happen included sharing
vertices/normals/texture coordinates, as Mark noted above, and using
Michael Deering's technique for encoding normal vectors in a small number
of bits.  Obviously surface shaders and transformations weren't stored
per-triangle, but per collection-of-triangles that had the same
shader/transformation.  Finally, rather than using 3 ints to store vertex
indices, I used chars, shorts, or ints, depending on how many vertices
there were in a collection of triangles.
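
A rough sketch of just the variable-width-index part, in C++
(illustrative only; normal and texture-coordinate compression are
omitted):

#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

/* One collection of triangles sharing a vertex pool.  IndexT is uint8_t
   for collections with up to 256 vertices, uint16_t for up to 65536, and
   uint32_t beyond that.  Shaders and transforms hang off the collection,
   not off individual triangles. */
template <typename IndexT>
struct TriangleCollection {
    std::vector<Vec3>   positions;   /* shared vertices, stored once */
    std::vector<IndexT> indices;     /* three indices per triangle   */
};

/* Narrowest index size (in bytes) that can address vertexCount vertices. */
inline std::size_t indexBytesNeeded(std::size_t vertexCount) {
    if (vertexCount <= (1u << 8))  return 1;   /* chars  */
    if (vertexCount <= (1u << 16)) return 2;   /* shorts */
    return 4;                                  /* ints   */
}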

The resulting ray-tracer rendered a scene with 46 million triangles in
about 300MB of memory, and one with 9.6 million triangles in 50MB, without
using any instancing/shared geometry.  Obviously not all of those
primitives were ever in memory at once--the key was to develop some
techniques so that the renderer didn't need to have the whole scene in
memory at once.  See the following for all of the details...

<http://graphics.stanford.edu/papers/coherentrt/>
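
The basic flavor is a bounded geometry cache plus ray reordering: chunks
of the scene are loaded on demand and evicted least-recently-used when
the cache fills, so peak memory stays fixed no matter how big the scene
is. A very simplified sketch of such a cache in C++ (illustrative only,
not the design from the paper):

#include <cstddef>
#include <list>
#include <unordered_map>

struct Mesh { /* triangles for one chunk of the scene */ };

class GeometryCache {
public:
    explicit GeometryCache(std::size_t capacity) : capacity_(capacity) {}

    /* Return the chunk, loading it from disk if it isn't resident. */
    Mesh& get(int chunkId) {
        auto it = table_.find(chunkId);
        if (it != table_.end()) {                    /* hit: mark most recent */
            lru_.splice(lru_.begin(), lru_, it->second.lruPos);
            return it->second.mesh;
        }
        if (table_.size() == capacity_) {            /* full: evict LRU chunk */
            table_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(chunkId);
        Entry& e = table_[chunkId];
        e.mesh   = loadFromDisk(chunkId);            /* placeholder loader */
        e.lruPos = lru_.begin();
        return e.mesh;
    }

private:
    struct Entry { Mesh mesh; std::list<int>::iterator lruPos; };
    static Mesh loadFromDisk(int /*chunkId*/) { return Mesh{}; }

    std::size_t capacity_;
    std::list<int> lru_;                             /* most recent first */
    std::unordered_map<int, Entry> table_;
};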

-matt
--

=======================================================================
In a cruel and evil world, being cynical can allow you to get some
entertainment out of it. --Daniel Waters

 
 
 

looking for a reference to the biggest model ever WITH EACH TRIANGLE DESCRIBED INDIVIDUALLY IN MEMORY

Post by Mark VandeWettering » Sun, 12 Jan 2003 07:37:11





>>> Thanks to all the people who answered my previous message.  But I have
>>> to be more specific: I'm looking for a model to render without any
>>> tricks.  I want all the triangles described individually in memory,
>>> without sharing of identical objects.  My goal is to test a parallel
>>> renderer that is being developed in my lab.  So, I need to know: how
>>> many triangles does a really big model have?  Is our renderer able to
>>> raytrace a model that isn't raytraceable with other renderers?
>>> Additionally, if one of you could give me such a big model, it would be
>>> really helpful (under NDA if necessary).

>> It really is mostly about memory.  If your model fits in the CPU's cache,
>> it is small.  If it fits in main memory, you might think it could be called
>> big.  If it is many times the size of main memory, then it should rightly be
>> called "big".  

>> If a triangle is specified by three three-tuples of single-precision
>> floating point, each triangle is 36 bytes.  That means you get about
>> 28 million of them to the gigabyte.  Most vertices of course will be
>> shared in dense triangle meshes, so you'll probably get a few more per
>> gigabyte, but in any realistic renderer you'll have color and texture
>> coordinate overheads per vertex as well, so perhaps it isn't a bad
>> ballpark.  

> I spent a fair amount of time some years ago trying to optimize a
> ray-tracer for minimal memory use.  Using all the tricks I was able to come
> up with, I was able to get it down to a total of 40 bytes per triangle.
> This includes *all* overhead, including that triangle's share of ray
> acceleration data structure size, etc--in other words:

> total memory use of ray-tracer / total number of triangles ~= 40.

> The key things to do to make this happen included sharing
> vertices/normals/texture coordinates, as Mark noted above, and using
> Michael Deering's technique for encoding normal vectors in a small number
> of bits.  Obviously surface shaders and transformations weren't stored
> per-triangle, but per collection-of-triangles that had the same
> shader/transformation.  Finally, rather than using 3 ints to store vertex
> indices, I used chars, shorts, or ints, depending on how many vertices
> there were in a collection of triangles.

> The resulting ray-tracer rendered a scene with 46 million triangles in
> about 300MB of memory, and one with 9.6 million triangles in 50MB, without
> using any instancing/shared geometry.  Obviously not all of those
> primitives were ever in memory at once--the key was to develop some
> techniques so that the renderer didn't need to have the whole scene in
> memory at once.  See the following for all of the details...

><http://graphics.stanford.edu/papers/coherentrt/>

As an addendum just let me add that Matt and company's work on memory
coherent raytracing is a must read if you are interested in raytracing
Really Big (tm) scenes.  It's quite a nice bit of work, and has substantially
influenced my own thinking about developing raytracers.  Many, many good
ideas.

        Mark


 
 
 

looking for a reference to the biggest model ever WITH EACH TRIANGLE DESCRIBED INDIVIDUALLY IN MEMORY

Post by Koji Nakamaru » Tue, 14 Jan 2003 06:29:13



Quote:> Thanks to all the people who answered my previous message.
> But I have to be more specific: I'm looking for a model to render
> without any tricks.
> I want all the triangles described individually in memory, without
> sharing of identical objects.
> My goal is to test a parallel renderer that is being developed in my lab.
> So, I need to know: how many triangles does a really big model have?
> Is our renderer able to raytrace a model that isn't raytraceable
> with other renderers?
> Additionally, if one of you could give me such a big model,
> it would be really helpful (under NDA if necessary).

There is another approach for dealing with massive data --
breadth-first ray tracing. Please refer to the following (a very
simplified sketch of the idea is given after the links):

http://www.informatik.uni-freiburg.de/tr/1990/Report21/
http://www.computer.org/tvcg/tg1997/v0316abs.htm
http://www.acm.org/jgt/papers/NakamaruOhno01/
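
Roughly, the idea is to queue up a large batch of rays and loop over the
geometry in the outer loop, so each chunk of the scene is touched once
per batch instead of once per ray. In C++, very simplified (illustrative
only):

#include <vector>

struct Ray {
    /* origin, direction, nearest hit found so far, ... */
};

struct GeometryChunk {
    void load()                { /* read this chunk's triangles from disk */ }
    void unload()              { /* release it again                      */ }
    void intersect(Ray&) const { /* update the ray's nearest hit          */ }
};

/* Breadth-first: outer loop over geometry, inner loop over queued rays. */
void traceBatch(std::vector<Ray>& rays, std::vector<GeometryChunk>& scene) {
    for (GeometryChunk& chunk : scene) {
        chunk.load();                   /* each chunk loaded once per batch */
        for (Ray& ray : rays)
            chunk.intersect(ray);       /* all queued rays tested against it */
        chunk.unload();
    }
}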

It is true that shrinking the data itself is very important. If you want
to compare your renderer with others, however, it is good to use SPD
(http://www.acm.org/tog/resources/SPD/overview.html) *without* such
shrinking, because much previous research probably didn't utilize the
sophisticated techniques described by Pharr.

On the other hand, if your renderer is similar to the one used in RTRT
(http://graphics.cs.uni-sb.de/RTRT/), you should consider such shrinking
techniques after all. These sorts of techniques are crucial for that kind
of research. I think the people working on RTRT would provide their
dataset if you contacted them.
--
Koji Nakamaru