Redefining Speed (A look at In-House render tests)
Added on: Fri May 17 2002

The Problems (cont.)

Through a variety of user suggestions and discussions, it was decided that either a weight system had to be implemented, or the benchmarks had to be changed in such a way as to reflect a more even distribution of single- vs. multithreaded systems.
One user argued that because most 3D applications are single-threaded, the benchmark data still stood. But this wasn't the final problem; a more pressing issue remained.
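The weight system under discussion could be sketched like this; the `weighted_score` helper, the 60/40 split, and the sample times are all assumptions for illustration, not anything actually proposed on the site:

```python
# A sketch of the kind of weight system discussed: combine single- and
# multithreaded render times into one score so that neither class of
# system unfairly dominates the rankings. Lower scores are better.

def weighted_score(single_s, multi_s, single_weight=0.6):
    """Weighted total; the weight reflects how often 3D apps run single-threaded."""
    return single_weight * single_s + (1 - single_weight) * multi_s

# Hypothetical render times in seconds (made up for illustration).
dual_xeon = weighted_score(single_s=120.0, multi_s=65.0)   # strong multithreaded
uniproc   = weighted_score(single_s=110.0, multi_s=110.0)  # strong single-threaded

print(f"dual Xeon score: {dual_xeon:.1f}, uniprocessor score: {uniproc:.1f}")
```

With these made-up numbers the dual-processor box still wins overall, but its multithreaded advantage no longer swamps the single-threaded results.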

The systems were getting too fast. The latest batch of dual Xeon and Athlon systems was tearing the benchmarks to shreds. One of the fundamental problems with using benchmarks to record performance is the amount of time they take to complete. Quicker renders spend proportionally more time in single-threaded operations than in multithreaded ones, so a gap of 17 vs. 21 seconds at one resolution could jump to as much as 40% at Film resolution.
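The resolution effect can be illustrated with a toy model (the `render_time` helper and every number here are hypothetical, not measured data): each render has a fixed single-threaded portion plus a parallel portion that grows with resolution, so the percentage gap between two systems widens on longer renders.

```python
# Toy model: total wall time is a fixed serial portion plus parallel work
# divided across cores. On short renders the shared serial portion masks
# the difference between a fast and a slow multiprocessor system.

def render_time(serial_s, parallel_work, cores, core_speed):
    """Wall-clock seconds: serial setup plus parallel work split across cores."""
    return serial_s + parallel_work / (cores * core_speed)

# Parallel workload grows with resolution (arbitrary units).
for label, work in [("NTSC", 100.0), ("Film", 1000.0)]:
    slow = render_time(serial_s=8.0, parallel_work=work, cores=2, core_speed=8.0)
    fast = render_time(serial_s=8.0, parallel_work=work, cores=2, core_speed=12.0)
    gap_pct = (slow - fast) / fast * 100
    print(f"{label}: slow={slow:.1f}s fast={fast:.1f}s gap={gap_pct:.1f}%")
```

With these assumed inputs the gap between the two machines roughly doubles in percentage terms going from the short render to the long one, which is the pattern the benchmarks were showing.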

It simply takes time to showcase the true differences in performance among these powerful systems. How much faster is system X than system Y if the benchmarks report only a few seconds' difference? This becomes even more apparent when we look at typical use of 3D applications, where renders can take hours, days, and even weeks to complete.

At the current pace of technological advancement, a machine capable of finishing all nine scenes in a minute or two isn't far off.
A new, more refined method needed to be designed to better showcase the differences between these systems.

A Solution?

In-house testing was the only solution. By testing the systems ourselves, we eliminated user-level "padding", and we'd be able to run tests that were substantially longer and less biased toward single- or multithreaded operation. We'd also be able to run the tests multiple times to get a mean and provide statistical data, further enhancing the validity of our results. To put it simply, we'd pimp slap technology and make it our bitch.
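Getting a mean and a measure of spread out of repeated runs is straightforward; a minimal sketch, with made-up render times standing in for real measurements:

```python
import statistics

# Hypothetical render times (seconds) from five repeated runs of one scene.
# Real in-house numbers would replace these.
runs = [412.3, 409.8, 415.1, 411.0, 410.6]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)      # sample standard deviation across runs
spread_pct = stdev / mean * 100     # spread relative to the mean

print(f"mean={mean:.1f}s stdev={stdev:.2f}s ({spread_pct:.2f}% of mean)")
```

A small relative spread across runs is what makes a few-second difference between two systems meaningful rather than noise.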

The New Scenes

The first decision was to keep the original nine renders: Islands, Waterfall, and Apollo will still be rendered for each system tested.
This data will be displayed alongside the user-submitted data, with a + sign after it to designate a verified benchmark.

Additionally, systems tested in-house will undergo an additional twenty tests across eight new scenes. Finally, a software Heidi test will be run to compare viewport performance across a variety of CPUs and chipsets. These tests will be run exactly as laid out in the previous benchmark article, The Creation of a Benchmark.

So without further ado, here's the set of new scenes. Each is run at NTSC, 1024x768 (Web), and Film resolutions.

(These are not available for download, so don't bother asking. They are in-house tests and will not be reproduced on user machines; that's what the Islands, Apollo, and Waterfall tests are for.)

© 1997-2021 3DLuVrTM (Three Dee Lover)