The Best Efficiency Scheme

Project proposal - version 1.90jp


1st April, 2000, 08:45 AM (CETDST)


Vlastimil Havran (havran AT fel.cvut.cz),
Jiri Bittner (bittner AT fel.cvut.cz),
Jan Prikryl (prikryl AT cg.tuwien.ac.at)

[Vlastimil and Jiri are currently at the Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Jan is a PhD student at the Vienna University of Technology]


In the previous RTNews12n1 and in several older issues, the problem of "the best efficiency scheme" was mentioned. The "best efficiency scheme" is a rather difficult issue---the term denotes an acceleration technique for ray-shooting that performs better than any other for an arbitrary scene. Finding such a technique is a problematic, if not impossible, task. Currently, there is no precise summary of how the numerous acceleration techniques perform in practice.

The principle of ray-shooting is actually simple: for a given set of objects, find the first object intersected by a given ray. In spite of this simple definition, implementing an efficient ray-shooting algorithm is a nontrivial task: a naive ray-shooting implementation has to test all scene objects in order to find the closest one, resulting in an unacceptable linear complexity of O(N), where N is the number of scene objects. In order to make ray-shooting practicable for complex scenes as well, some ray-shooting acceleration scheme (RAS) of lower complexity is required (Kalos and Marton [1] proved that the lower bound on the worst-case complexity of ray-shooting is O(log N)).
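To make the O(N) bound concrete, the naive approach can be sketched as follows (a minimal Python sketch with spheres as the only primitive; all names are ours and do not belong to any particular renderer):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Smallest positive ray parameter t where the ray hits the sphere, or None.
    The direction is assumed to be normalized, so the quadratic's a-term is 1."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-9:          # ignore hits behind (or at) the origin
            return t
    return None

def naive_ray_shoot(origin, direction, spheres):
    """O(N) brute force: test every object and keep the closest hit."""
    best_t, best_obj = None, None
    for obj in spheres:
        t = intersect_sphere(origin, direction, obj["center"], obj["radius"])
        if t is not None and (best_t is None or t < best_t):
            best_t, best_obj = t, obj
    return best_obj, best_t
```

Every RAS discussed below exists to avoid exactly this loop over all N objects.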

The problem of ray-shooting acceleration has attracted many research efforts in the past. There are many areas of computer graphics and computational geometry that can profit from a very fast ray-shooting algorithm. Both computational geometers and computer graphics researchers have tried to develop fast RASs with varying success. Computational geometers aimed their effort at improving the worst-case complexity. Unfortunately, the acceleration methods developed for purposes of computational geometry have unacceptable space and preprocessing time complexity (often O(N^2)), and are therefore not usable for the complex scenes used in rendering practice. In contrast to techniques known from computational geometry, acceleration techniques developed in the computer graphics community are often heuristics improving the average-case complexity. Even though the worst-case complexity of these methods is unfavourable (worst-case complexity O(N)), the good average-case complexity (up to O(1) for scenes with randomly distributed objects) is the reason why these heuristic RASs are commonly used in recent rendering packages.

The performance of RASs in practice is usually measured on a group of test scenes. Unfortunately, a fair comparison of the published work is very difficult, as people use different reference algorithms, different comparison characteristics, and different types of test scenes. As a result, there are many contradictory statements regarding different RASs floating around, many declaring different schemes to be better than the others.

In the search for the best efficiency scheme, no RAS has so far been found to be the generally best one. In order to make at least some practical evaluation possible, we therefore propose an alternative to the concept of the best efficiency scheme: we will try to find out whether there exists a statistically best scheme for a particular task.


What does this mean? Let us have S scenes and M acceleration methods. For example, let M be four, and let the methods be called A, B, C, and D. We can perform S×M experiments, shooting K random rays distributed according to some statistical distribution. After that, we can state for each tested scene which acceleration scheme was the most efficient one, which was the runner-up, which took third place, and which was the slowest. As a result we can report, for example:



In order to achieve some level of statistical significance, S should be large enough. On the other hand, to make the project manageable, S has to be of reasonable size, say S=100.
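The bookkeeping for such an experiment is simple; a sketch (Python; the method names and the timing values in the usage example are made up for illustration):

```python
def rank_methods(timings):
    """timings: {scene: {method: seconds}} -> {method: places}.

    For each acceleration method, counts how often it placed first,
    second, third, ... (by running time) across all tested scenes.
    places[k] is the number of scenes where the method came in (k+1)-th."""
    methods = sorted(next(iter(timings.values())))
    counts = {m: [0] * len(methods) for m in methods}
    for scene, per_method in timings.items():
        order = sorted(per_method, key=per_method.get)  # fastest first
        for place, m in enumerate(order):
            counts[m][place] += 1
    return counts

# Hypothetical timings for three scenes and the four methods A-D:
timings = {
    "scene1": {"A": 1.0, "B": 2.0, "C": 3.0, "D": 4.0},
    "scene2": {"A": 2.0, "B": 1.0, "C": 4.0, "D": 3.0},
    "scene3": {"A": 1.0, "B": 3.0, "C": 2.0, "D": 4.0},
}
```

Here `rank_methods(timings)["A"]` would be `[2, 1, 0, 0]`: method A was fastest on two scenes and runner-up on one.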

Suppose we have S scenes available, S=100, with the following numbers of objects when parsed.

Group   Scenes   Objects in scene
  0       15     2-10
  1       15     11-100
  2       15     101-1000
  3       15     1001-10000
  4       15     10001-100000
  5       15     100001-1000000
  6       10     > 1000000

We can use some tool to analyze the scenes and attempt to characterize them, for example: sparseness, sparseness variance and standard deviation, non-uniformity, average number of intersections, mutual information, and so on. Several methods evaluating scene characteristics have been introduced for this purpose. Klimaszewski [2, chapter 4] characterises scenes using statistics based on the presence of objects in voxels of a uniform grid. Cazals and Sbert [4] use integral geometry tools to reveal the spatial distribution of scene objects. Feixas et al. [5] use the concept of mutual information for the same purpose.
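To give a flavour of such characteristics, a crude sparseness measure in the spirit of Klimaszewski's voxel statistics can be sketched as follows (Python; representing each object by a single reference point inside the unit cube, instead of its full extent, is our simplification):

```python
def grid_sparseness(points, resolution=8):
    """Fraction of cells of a resolution^3 uniform grid over the unit cube
    that contain NO object reference point.  A sparse scene (objects
    clustered in few voxels) yields a value close to 1."""
    occupied = set()
    for x, y, z in points:
        # clamp so that coordinates exactly at 1.0 fall into the last cell
        ix = min(int(x * resolution), resolution - 1)
        iy = min(int(y * resolution), resolution - 1)
        iz = min(int(z * resolution), resolution - 1)
        occupied.add((ix, iy, iz))
    return 1.0 - len(occupied) / resolution ** 3
```

Real characterizations would also record per-voxel object counts and their variance, but even this single number already separates clustered scenes from uniformly filled ones.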


We define several ray-shooting tasks for all the M RASs and all the S scenes. A general ray-shooting scheme used in our experiments should be similar to schemes used in rendering algorithms. One way to approach the problem is to use a completely random generation scheme, for example, generating two random points on the sphere and shooting the ray between these two points. This scheme gives a uniform distribution of rays, but it suffers from the lack of coherency between two successive rays. Many schemes, such as the 5D acceleration technique, use ray coherence to speed up ray shooting, particularly for shadow rays. The scheme proposed below has a uniform distribution of rays and preserves the coherency at the same time:

RAY_GENERATION_SCHEME (M: number of rays, B: number of bands)
{
  Subdivide the sphere into B bands regularly along z,
  so that all bands have the same surface area

  Find N such that M = N(N-1)

  sOne  = 4.0*PI / N ;   /* surface area of the unit sphere per point */

  alpha = 0.0 ;
  bCurr = 0 ;

  for (i = 1 to N)
  {
    sCurr = 0.0 ;
    while (sCurr < sOne)
    {
       Numerically integrate the band area along alpha,
       switching to the next band bCurr if necessary
    }
    point[i].z = middle of bCurr ;
    radius = sqrt(1.0 - sqr(point[i].z)) ;
    point[i].x = radius * cos(alpha) ;
    point[i].y = radius * sin(alpha) ;
  }

  for (i = 1 to N)
    for (j = 1 to N)
    {
      if (i != j)
        Shoot a primary ray between point[i] and point[j]
    }
}
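A runnable sketch of this generation scheme (Python; we replace the numerical band integration by a golden-angle azimuth step, our simplification, which also yields an approximately uniform, z-banded point distribution):

```python
import math

def sphere_points(n):
    """n points roughly uniformly on the unit sphere: z is taken at the
    midpoints of n equal-area slabs (the z-bands of the pseudocode), and
    the azimuth advances by the golden angle instead of the numerical
    band integration."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n          # equal-area slab midpoints
        r = math.sqrt(max(0.0, 1.0 - z * z))   # radius of the z-circle
        a = i * golden
        pts.append((r * math.cos(a), r * math.sin(a), z))
    return pts

def generate_rays(n):
    """All ordered pairs of distinct points: n*(n-1) (origin, target) rays."""
    pts = sphere_points(n)
    return [(p, q) for p in pts for q in pts if p is not q]
```

Successive points share nearby z-bands, so consecutive rays stay coherent, while the pair enumeration reproduces the M = N(N-1) ray count of the pseudocode.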

Now we propose the following testing methods:
a) Compute the center of the scene bounding box and create the bounding sphere that circumscribes this bounding box. Use the ray generation scheme presented above to shoot primary rays into the scene.
b) Assign a tight rectangular bounding box to each object in the scene. Then compute for each such bounding box i its center point Qi, and compute one center point Q = sum Qi / N over all the bounding boxes. Then, using a binary search, find a bounding sphere that contains 90% of all the bounding boxes. For this bounding sphere continue as in a).
c) Compute random walks of fixed length k = 4. The random walks have a common origin as close to the center of the 90% bounding sphere as possible. Rays are reflected in a completely random manner. The origin point can be found using an algorithm similar to that used in [5] for the classification of global line segments.
d) Compute a specific rendering task for the scene, for example, ray tracing as defined in SPD. This requires defining a viewpoint in the scene that gives a nice image, and material definitions for the scene.
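For step b), the binary search for the 90% sphere might look as follows (Python sketch; we test the box center points rather than whole boxes, which is a simplification of the description above):

```python
import math

def percent_sphere(centers, fraction=0.9, iters=40):
    """Binary search for (approximately) the smallest radius around the
    mean center Q that still contains `fraction` of the given points.
    Returns (Q, radius)."""
    n = len(centers)
    q = tuple(sum(c[i] for c in centers) / n for i in range(3))
    dist = [math.dist(q, c) for c in centers]
    lo, hi = 0.0, max(dist)
    need = fraction * n
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(d <= mid for d in dist) >= need:
            hi = mid          # radius is sufficient, try shrinking
        else:
            lo = mid          # too small, grow
    return q, hi
```

The returned sphere then serves as the ray-generation sphere for test b), exactly as the circumscribed sphere does for test a).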


Test a) uses the whole scene; rays are shot from the outside into the scene. Tests b) and c) shoot rays in the space where most of the objects are present. Obviously, we can imagine the construction of artificial scenes where this method will not fulfill its purpose. The random sequences used for all three tests start with the same seed values, keeping the generated rays the same regardless of the acceleration scheme actually being tested.

Test d) reports results of ray-shooting for ray tracing to show how the results of a), b), and c) correlate with the use of ray-shooting in some practical global illumination task.

There are two situations in which the results cannot be reported: when there is not enough memory to build the acceleration data structure, and when the computation time for one task would exceed 10 hours.
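A time budget of this kind can be enforced, for example, with an alarm signal (a Unix-only Python sketch of the mechanism; the proposal's 10-hour limit would simply be passed as 36000 seconds):

```python
import signal

class TimeBudgetExceeded(Exception):
    """Raised by the alarm handler when a measurement runs too long."""
    pass

def run_with_timeout(fn, args, seconds):
    """Run fn(*args); return its result, or None if it exceeds the budget.
    Uses SIGALRM, so this works only on Unix and in the main thread."""
    def on_alarm(signum, frame):
        raise TimeBudgetExceeded
    old = signal.signal(signal.SIGALRM, on_alarm)
    signal.alarm(seconds)            # whole seconds; 36000 for 10 hours
    try:
        return fn(*args)
    except TimeBudgetExceeded:
        return None                  # reported as "no result" in the tables
    finally:
        signal.alarm(0)              # cancel the pending alarm
        signal.signal(signal.SIGALRM, old)
```

The out-of-memory case is harder to trap portably; in practice it is usually handled by an address-space limit on the measurement process.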

When testing the acceleration methods, we will record several common quantities that can be reported for every acceleration method:

After testing all the scenes and methods we can find out:

1) whether or not a prevalent method exists for all the scenes, or within the groups (speed).
2) which method is the most efficient one over the sum of all the scenes, and within each group (i.e., robustness of performance).
3) the best method for each scene (our idea is that if someone is going to render a scene, he or she can compute its characteristics as we did, find a scene in our scene database that exhibits roughly the same characteristics, and use the acceleration method and settings that are probably the best ones).

The project shall help us clarify the following points:



What has already been prepared for this project?

What is missing?

To conclude the proposal: the project will be important not only for settling the dispute about the "best efficiency scheme" from the theoretical point of view, but also for practitioners, since we plan to release the source code of the acceleration methods in some suitable form on the WWW after the project is finished.

Any comments and support are welcome at the e-mail addresses listed above.


Updates

2000/03/30
Measurements on the standard SPD scenes completed (except "shells", which makes GOLEM [3] choke when building the acceleration scheme due to densely overlapping objects).
2000/04/03
Revamp of the text, some bits added here and there. Vlastimil now has 215 scenes collected by his students, most of them with 5-50k primitives. Surprisingly, scenes containing fewer than 100 primitives are virtually unavailable (all those simple VRML models that you can download from the Internet usually have at least 200 patches). Unsurprisingly, scenes with more than 500k elements are not available for free. The small scenes have been modelled; the large ones are the main problem that we now have to solve somehow.


References:

[1] L. S. Kalos and G. Marton: Analysis and construction of worst-case optimal ray shooting algorithms, Computers & Graphics, Vol. 22, No. 2, pp. 167-174, 1998.
[2] K. S. Klimaszewski: Faster ray tracing using adaptive grids and area sampling, PhD Thesis, Dept. of Civil and Environmental Engineering, Brigham Young University, 1994.
[3] GOLEM Raytracing System, http://www.cgg.cvut.cz/GOLEM
[4] F. Cazals and M. Sbert: Some integral geometry tools to estimate the complexity of 3D scenes, Research Report IIiA 97-04-RR, Universidad de Girona, 1997. Also as INRIA Research Report RR-3204.
[5] M. Feixas, E. Acebo, P. Bekaert, and M. Sbert: An information theory framework for the analysis of scene complexity, Eurographics'99.


This page is maintained by Vlastimil Havran. It was last updated on April 1, 2000.
If you have any comments, please send a message to the addresses listed above.