Rman Benchmarking

Rman Benchmarking

Post by Daniel McFarlan » Thu, 15 Feb 2001 11:38:35



Hi all,

Some if not most of you have seen the Maya rendering tests on different
machines before:

http://www.highend3d.com/tests/maya/testcenter/

I thought it would be great to see the same done for PRMan. Will, the
Highend administrator, is also interested in doing the same for other
packages, including MTOR and PRMan (see the post):

http://www.highend3d.com/boards/showflat.php?Cat=1,2&Board=hardware&N...

The question is: does anyone have an MTOR and/or RIB file that is fairly
representative of a production-rendered scene and could be used as the
basis of the tests? Would the Cornell Box mentioned on this list be
suitable?

Thanks,
Dan

 
 
 

Rman Benchmarking

Post by Larry Gritz » Thu, 15 Feb 2001 13:19:26




Quote:>The question is: does anyone have an MTOR and/or RIB file that is fairly
>representative of a production-rendered scene and could be used as the
>basis of the tests? Would the Cornell Box mentioned on this list be
>suitable?

Definitely not!  The Cornell Box scene -- which, let's face it, is
basically only a couple dozen polygons -- is so simple as to be
extremely misleading in any benchmarks.

Creating renderer benchmarks is more art than science, and a rather
tricky art at that.  It's very easy for any contrived test to be very
uncharacteristic of real scenes.  I think that the only good
benchmarks are suites of several actual scenes from real productions,
and even then experience shows that a benchmark of scenes from one
show may be misleading when extrapolating to performance of rendering
frames from a different show.  Most unfortunately, nobody who makes real
productions will let their scenes out of the house for people to
benchmark with.  Even if they did, real scenes are quite big and
unwieldy.

I'd love to see a set of publicly available, fairly complex scenes
that could be used as general benchmarks.  Constructing those in such
a way as to make them truly representative is an extremely difficult
task.

        -- lg

--
Larry Gritz                                     Exluna


 
 
 

Rman Benchmarking

Post by Will » Fri, 16 Feb 2001 07:34:04




Quote:

>The question is: does anyone have an MTOR and/or RIB file that is fairly
>representative of a production-rendered scene and could be used as the
>basis of the tests? Would the Cornell Box mentioned on this list be
>suitable?

>Thanks,
>Dan

I would think that the Cornell Box isn't a very realistic benchmark.  It is
really very uncomplicated compared to the stuff most of us render.  (I start at
about 5 MB RIBs for my hobbyist work, and some studios have multi-gigabyte
RIBs...)

Perhaps a small RI API program could be made that would be small enough to
conveniently distribute, but would be able to spit out a 50 or 500 MB RIB
file.  (I'm not volunteering -- I don't even have PRMan to participate in the
benchmarking...)

A C program would be portable as long as a RenderMan client library could be
found, and it could be a lot more universal than a Maya scene file.  (Since not
everybody has Maya...)
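[A minimal sketch of that idea. It assumes we simply print RIB text with stdio rather than linking an actual RenderMan client library; the camera setup, shader name, and grid size are all illustrative, and scaling `n` up grows the output to an arbitrary file size.]

```c
/* Sketch: emit an n*n*n grid of plastic spheres as RIB text.
 * Returns the number of spheres written, for sanity checking. */
#include <stdio.h>

int emit_scene(FILE *out, int n)
{
    int count = 0;
    fprintf(out, "Display \"bench.tif\" \"file\" \"rgba\"\n");
    fprintf(out, "Format 640 480 1\n");
    fprintf(out, "Projection \"perspective\" \"fov\" [45]\n");
    fprintf(out, "Translate 0 0 %d\n", 3 * n);   /* back the camera off */
    fprintf(out, "WorldBegin\n");
    fprintf(out, "LightSource \"distantlight\" 1\n");
    for (int x = 0; x < n; x++)
        for (int y = 0; y < n; y++)
            for (int z = 0; z < n; z++) {
                fprintf(out, "AttributeBegin\n");
                fprintf(out, "  Translate %d %d %d\n",
                        x - n / 2, y - n / 2, z);
                fprintf(out, "  Surface \"plastic\"\n");
                fprintf(out, "  Sphere 0.4 -0.4 0.4 360\n");
                fprintf(out, "AttributeEnd\n");
                count++;
            }
    fprintf(out, "WorldEnd\n");
    return count;
}
```

[Wrapped in a two-line `main` that calls `emit_scene(stdout, 50)`, this already produces a RIB in the tens-of-megabytes range, and the same print-it-yourself trick would work for any other stress pattern.]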
---------------------------
"I /want/ Moofeus!"
-The V E C T O R-

 
 
 

Rman Benchmarking

Post by Frederic Drago » Fri, 16 Feb 2001 21:31:55


The project I made for my master's degree is available for download.
It is fairly well suited to testing global illumination, and should be complicated
enough for benchmarking; if someone could generate a RIB file from the Max or DXF
files, it is freely available.
http://wwwsv1.u-aizu.ac.jp/labs/csel/atrium2/
http://www.mpi-sb.mpg.de/resources/atrium/

Frederic Drago
University of Aizu

 
 
 

Rman Benchmarking

Post by Martin F » Sat, 17 Feb 2001 02:55:51


That's very kind of you to offer the data.

Quote:> The project I made for my master's degree is available for download.

One problem I foresee is that the data will be converted to explicit
polygons, whereas a scene designed for RenderMan-spec renderers would more
likely be made of bilinear patches in the case of flat surfaces, and
B-splines, NURBS, or subdivision surfaces in the case of curved surfaces.

In other words, it might be quite a substantial task to retrofit your scene
for RenderMan duty. Do the RenderMan pros agree?

I'm interested in this benchmarking idea, but maybe let's see if anybody
else has a more RenderMan friendly dataset.

Martin

 
 
 

Rman Benchmarking

Post by Will » Sat, 17 Feb 2001 10:56:08




>I'm interested in this benchmarking idea, but maybe let's see if anybody
>else has a more RenderMan friendly dataset.

>Martin

I was musing about this the other night.  I don't think you would want just one
scene.  It would be ideal to make a series of tests that would all stress
different aspects of a rendering system.

A texture-bound scene -- lots of image maps, which would have to be loaded and
managed; something like a few thousand images.  Perhaps even something as
simple as a large array of spheres, each with painted plastic and a different
image map.

A shader-bound scene -- not so many image maps, but some complicated shaders:
300 octaves of noise, lots of ray marching, and things like that.  Perhaps on
just a few spheres.

A hider-bound scene -- lots of overlapping polygons, perhaps a few million of
them, all in front of each other and intersecting, so that every rendered pixel
has a hundred layers of other geometry behind the frontmost polygon.

A geometry-intense scene -- really complicated geometry.  Perhaps not all
overlapping, with lots of stuff behind what is visible, but just a massive RIB
file with very high-resolution geometry.  The target here would be something
like a one-gigabyte RIB file.

Does anybody else think that these four tests would provide a reasonably useful
test of rendering hardware (and software) -- as useful as any other benchmark,
I suppose?  Are there any other areas of a renderer that ought to be severely
stressed?

Oh, and does anybody want to actually volunteer to make the scenes?!  :)
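[As a starting point, the hider-bound case from the list above is easy to synthesize. A hedged sketch in C, printing raw RIB: it stacks full-frame quads in depth so every pixel has a controllable number of hidden layers behind the frontmost surface. The layer count, quad size, and depth spacing are all arbitrary choices here.]

```c
/* Sketch: emit a hider-bound RIB -- `layers` overlapping quads stacked
 * slightly apart in depth.  Returns the number of polygons written. */
#include <stdio.h>

int emit_depth_scene(FILE *out, int layers)
{
    fprintf(out, "Projection \"perspective\" \"fov\" [45]\n");
    fprintf(out, "WorldBegin\n");
    for (int i = 0; i < layers; i++) {
        double z = 2.0 + 0.01 * i;   /* each quad a little deeper */
        fprintf(out,
                "Polygon \"P\" [-2 -2 %g  2 -2 %g  2 2 %g  -2 2 %g]\n",
                z, z, z, z);
    }
    fprintf(out, "WorldEnd\n");
    return layers;
}
```

[With `emit_depth_scene(stdout, 100)` every pixel sees one visible quad with 99 layers hidden behind it, which isolates the hider from shading and texturing cost.]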
---------------------------
"I /want/ Moofeus!"
-The V E C T O R-

 
 
 

Rman Benchmarking

Post by Daniel McFarlan » Sat, 17 Feb 2001 12:45:03





> >The question is: does anyone have an MTOR and/or RIB file that is fairly
> >representative of a production-rendered scene and could be used as the
> >basis of the tests? Would the Cornell Box mentioned on this list be
> >suitable?

> Definitely not!  The Cornell Box scene -- which, let's face it, is
> basically only a couple dozen polygons -- is so simple as to be
> extremely misleading in any benchmarks.

> Creating renderer benchmarks is more art than science, and a rather
> tricky art at that.  It's very easy for any contrived test to be very
> uncharacteristic of real scenes.  I think that the only good
> benchmarks are suites of several actual scenes from real productions,
> and even then experience shows that a benchmark of scenes from one
> show may be misleading when extrapolating to performance of rendering
> frames from a different show.  Most unfortunately, nobody who makes real
> productions will let their scenes out of the house for people to
> benchmark with.  Even if they did, real scenes are quite big and
> unwieldy.

> I'd love to see a set of publicly available, fairly complex scenes
> that could be used as general benchmarks.  Constructing those in such
> a way as to make them truly representative is an extremely difficult
> task.

>         -- lg

> --
> Larry Gritz                                     Exluna


Larry,

Maybe my use of the term "benchmark" was inappropriate. I think the purpose of
this is a little looser than a true benchmark. I am beginning to look at
building a system for MTOR, and I found the listing of results very
informative.

http://www.highend3d.com/tests/maya/testcenter/

But I would like to see how PRMan performs on different machines, mainly due to
the huge differences between the Maya renderer and PRMan. Granted, the scene
used for Maya is very simple; something similar, following their example,
would be useful for PRMan.

What types of features would be useful to include -- shadows, procedurals,
texture maps, fog, glows, displacements, reflections, etc.? Which are more
computationally expensive? I would be glad to receive suggestions for the
creation of the scene if someone doesn't already have one.

Thanks,
Dan

 
 
 

Rman Benchmarking

Post by ogre » Mon, 19 Feb 2001 03:42:00


Quote:> A geometry intense scene -- Really complicated geometry.  Perhaps not all
> overlapping, with lots of stuff behind what is visible, but just a massive
> rib file, with very high resolution geometry.  Target here would be
> something like a one gig RIB file.

Is PRMan capable of rendering such a big file with less than 1 GB of RAM?
 
 
 

Rman Benchmarking

Post by Paul Thomas Raugust » Mon, 19 Feb 2001 16:25:54


Even though my opinion is ridiculously light in terms of actual experience in
any of the various "rendering trenches" out there, I felt compelled to offer a
few innocent musings on this benchmarking subject.  Not to step on anyone's
toes, but let's not forget a few things while we are proposing benchmarking
strategies.

Let's even assume that we can put together a variety of production-accurate
RIBs for the sake of this little test.  Heck, I wouldn't mind giving away the
RIBs from the short piece that I'm playing around with at the moment (that
said, I'm only just starting, so it would be months before I'm ready to output
them).  But the real problem with the benchmark proposal lies more with the
logistics than with the accuracy of the benchmarking files.  Let's be realistic
here.  First, imagine that someone was actually philanthropic enough to write
the RI program to generate these complex files (which, in and of itself, would
be inaccurate, given the highly artificial nature of an RI-synthesized glut of
spheres, etc.).  With the RI script, we could sneak around the whole problem of
pressing DVD-ROMs or mailing DLTs for the sake of benchmark distribution.  But
once we execute the API, we're talking about a multi-gigabyte data set that
needs local storage space.  Additionally (and correct me if I'm wrong), to be
production-accurate we're talking render times of hours per RIB, even on the
fastest boxes.  Sorry if I'm being a little too pessimistic, but I don't know
any statistically viable body of testers out there who have four idle days they
can use to run a benchmark.

Personally, I don't think it is reasonable to propose any sort of
production-accurate benchmark for any software.  In real practice, everyone
uses software [and hardware] in different ways, and what may be accurate for
one person may be ludicrously simplistic or absurdly complex for another.  I
seriously doubt that any web authors are really taxing the resolution limits of
Photoshop, but I know for a fact that there are more than a few artists and
designers out there who have banged their heads on Photoshop's ceiling.

All benchmarks are synthetic to a certain extent, and practicality typically
dictates that concessions be made.  Do I really find that SYSmark 2000 or
Content Creation Winstone 2000 accurately reflects the real-world usage
characteristics to which I subject my machines?  No, their usage patterns are
simply ridiculous, but they do provide me with a basis from which I might
reasonably extrapolate my performance threshold on a given system.  And that's
really the point of a benchmark in the first place.  The trick in creating a
benchmark is to include just enough complexity to be germane, but not so much
as to cater only to a statistically insignificant handful of users.  Besides,
anyone doing production-grade work will have a network of machines configured
expressly to handle the punishment dealt by production-grade frames.  It's not
as though anyone out there is rendering at film resolution on the 200 MHz
Pentium Pro Linux box with 64 MB of RAM sitting in his basement, right?

Anyway, I'll leave everyone to more pressing matters...

Paul Thomas Raugust

 
 
 

Rman Benchmarking

Post by Andrew Bromage » Mon, 19 Feb 2001 22:13:03


G'day all.


>>Target here would be something like a one gig RIB file.
>is prman capable of rendering such big file with less than 1 gig of ram?

Maybe.  Maybe not.  It depends on the scene.

If a lot of it is outside the viewing frustum (and not overlapping
with the evil "eyesplits" zone), then it can be culled at parse
time.  If some of it uses level-of-detail, then some of that can be
culled too.  And finally, a lot of "real" files have more than
one image to be rendered in them (e.g. shadow maps and reflection
maps), and the data for one pass can be safely thrown out before the
next pass.

A ray tracing implementation probably would not be able to handle the
scene without some swapping to disk, though if it's a clever
implementation and the RAM deficit isn't critically bad it could
probably avoid bad thrashing.

Cheers,
Andrew Bromage

 
 
 

Rman Benchmarking

Post by Stephen H. Westin » Wed, 21 Feb 2001 04:39:46



> > A geometry intense scene -- Really complicated geometry.  Perhaps not all
> > overlapping, with lots of stuff behind what is visible, but just a massive
> > rib file, with very high resolution geometry.  Target here would be
> > something like a one gig RIB file.

> is prman capable of rendering such big file with less than 1 gig of ram?

As I understand it, that's just what PRMan is designed to do, and what
it does every day at Pixar. Last I heard, a typical RIB was over a
gigabyte, while the render farm machines at Pixar had 512 MB of RAM.

--
-Stephen H. Westin
Any information or opinions in this message are mine: they do not
represent the position of Cornell University or any of its sponsors.

 
 
 
