The Best Efficiency Scheme

Project proposal - version 1.5vh


15th October, 1999, 18:25 (CETDST)


Vlastimil Havran (havran AT fel.cvut.cz),
Jiri Bittner (bittner AT fel.cvut.cz),
Jan Prikryl (prikryl AT cg.tuwien.ac.at)

[Vlastimil and Jiri are currently at the Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Jan is a PhD student at the Vienna University of Technology]


In the previous RTNews 12n1 and in several older issues, the problem of "the best efficiency scheme" was mentioned. The "best efficiency scheme" is a rather difficult issue -- the term should describe an acceleration technique for ray-shooting that performs better than any other for an arbitrary scene. Finding such a technique is a problematic, if not impossible, task. Currently, there is no precise summary of how the numerous acceleration techniques perform in practice.

The problem of ray-shooting acceleration, in spite of its simple definition, has attracted many research efforts in the past. Many areas of computer graphics and computational geometry can profit from a very fast ray-shooting algorithm, and researchers in both fields have tried to develop fast ray-shooting acceleration schemes, with varying success. Szirmay-Kalos and Marton [1] proved a lower bound of Omega(log N) on the worst-case complexity of ray-shooting, where N is the number of objects. Algorithms with worst-case complexity close to this lower bound can be tricky to implement and often exhibit memory complexity of O(N^2). They are therefore not used in practice, where scenes with more than a few thousand objects are the norm.

Acceleration techniques developed in the computer graphics community do not address the worst-case complexity problem. Instead, they deal with average-case complexity, although this complexity is often not evaluated theoretically.

There have been many contradictory statements regarding different ray-shooting acceleration schemes, each declaring one scheme to be better than the others. As no acceleration method has been shown to be the best, we propose an alternative to the concept of the best efficiency scheme: we will try to find out whether there exists a statistically best scheme for a particular task.


What does this mean? Let us have S scenes and M acceleration methods. For example, let M be four, and let the methods be called A, B, C, and D. We can perform S x M experiments, shooting K random rays distributed according to some statistical distribution. After that, we can state for each tested scene which acceleration scheme was the most efficient, which was the runner-up, which took third place, and which was the slowest. As the result we can report, for example, on how many scenes each method placed first, second, third, and last; a minimal tallying sketch follows.
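The bookkeeping is straightforward; the following C++ sketch tallies first places only (runner-up and lower ranks would be handled analogously). All names here are hypothetical, and run_experiment() is a dummy placeholder for shooting K rays into one scene with one method and timing it.

  #include <cstdio>

  const int S = 100;  // number of scenes
  const int M = 4;    // number of acceleration methods: A, B, C, D

  // Placeholder measurement: in the real project this would shoot K
  // rays into scene s with method m and return the elapsed user time.
  double run_experiment(int m, int s) {
      return (double)((m * 31 + s * 7) % 100 + 1);  // dummy timing
  }

  int main() {
      int first_places[M] = {0};
      for (int s = 0; s < S; ++s) {
          int best = 0;
          double best_time = run_experiment(0, s);
          for (int m = 1; m < M; ++m) {
              double t = run_experiment(m, s);
              if (t < best_time) { best_time = t; best = m; }
          }
          ++first_places[best];  // tally which method won this scene
      }
      for (int m = 0; m < M; ++m)
          printf("method %c was the fastest on %d of %d scenes\n",
                 'A' + m, first_places[m], S);
      return 0;
  }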


In order to achieve some level of statistical significance, S should be large enough. On the other hand, to keep the project manageable, S has to be of reasonable size, say S = 100.

Suppose we have S = 100 scenes available, with the following numbers of objects after parsing:

Group 0: 15 scenes, 2-10 objects
Group 1: 15 scenes, 11-100 objects
Group 2: 15 scenes, 101-1000 objects
Group 3: 15 scenes, 1001-10000 objects
Group 4: 15 scenes, 10001-100000 objects
Group 5: 15 scenes, 100001-1000000 objects
Group 6: 10 scenes, more than 1000000 objects

We can use some tool to analyze the scenes and attempt to characterize them, for example by sparseness, sparseness variance and standard deviation, non-uniformity, and so on. Several methods for evaluating scene characteristics have been introduced for this purpose; here we refer to the PhD thesis of Klimaszewski [2, chapter 4], whose method of scene characterization is based on the presence of objects in voxels of a uniform grid. We also consider testing the scene complexity characteristics discussed by Feixas et al. [5] and Cazals and Sbert [4]. A sketch of a grid-occupancy statistic of this kind follows.
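As an illustration of the kind of statistic we have in mind, the following C++ sketch computes voxel-occupancy figures over object bounding boxes. It is in the spirit of [2] but is not Klimaszewski's exact formulation; the names and the chosen statistics are ours.

  #include <algorithm>
  #include <cmath>
  #include <cstdio>
  #include <vector>

  struct Box { double min[3], max[3]; };

  // Prints occupancy statistics of a uniform grid with res^3 voxels
  // laid over the scene bounding box.
  void grid_stats(const std::vector<Box>& objects, const Box& scene, int res)
  {
      std::vector<int> count((size_t)res * res * res, 0);
      double cell[3];
      for (int a = 0; a < 3; ++a)
          cell[a] = (scene.max[a] - scene.min[a]) / res;

      // Count, per voxel, how many object boxes overlap it.
      for (const Box& b : objects) {
          int lo[3], hi[3];
          for (int a = 0; a < 3; ++a) {
              lo[a] = std::max(0, (int)((b.min[a] - scene.min[a]) / cell[a]));
              hi[a] = std::min(res - 1, (int)((b.max[a] - scene.min[a]) / cell[a]));
          }
          for (int x = lo[0]; x <= hi[0]; ++x)
              for (int y = lo[1]; y <= hi[1]; ++y)
                  for (int z = lo[2]; z <= hi[2]; ++z)
                      ++count[((size_t)x * res + y) * res + z];
      }

      // Sparseness: fraction of empty voxels. Non-uniformity: standard
      // deviation of the per-voxel object counts.
      double voxels = (double)res * res * res;
      long empty = 0;
      double sum = 0, sum2 = 0;
      for (int c : count) {
          if (c == 0) ++empty;
          sum += c;
          sum2 += (double)c * c;
      }
      double mean = sum / voxels;
      double sdev = std::sqrt(sum2 / voxels - mean * mean);
      printf("sparseness %.3f, mean objects/voxel %.3f, sdev %.3f\n",
             empty / voxels, mean, sdev);
  }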

Then we define several ray-shooting tasks for all the M acceleration methods and all the S scenes. Let us now consider the ray generation scheme, which should be similar to the schemes used in global illumination algorithms. One way to attack the problem is to use a completely random generation scheme: for example, generate two random points on a sphere and generate the ray between these two points. This scheme gives a uniform distribution of rays, but it suffers from a lack of coherence between successive rays. Many schemes, such as the 5D ray classification technique, use ray coherence to speed up ray shooting, particularly for shadow rays. The scheme proposed below has a uniform distribution of rays and preserves coherence at the same time:

RAY_GENERATION_SCHEME (acceleration_scheme, scene):

  for (alpha = -PI to PI in N steps)
    for (beta = -PI/2 to PI/2 in N/2 steps) {

      generate point P on a unit sphere using
        spherical coordinates (alpha, beta)

      for (gamma = -PI to PI in N steps)
        for (delta = -PI/2 to PI/2 in N/2 steps) {

          generate point Q on a unit sphere using
            spherical coordinates (gamma, delta)

          if (P == Q) continue  // skip the degenerate zero-length ray

          construct ray R with origin P and direction Q - P

          shoot the ray "R" into the "scene"
            using "acceleration_scheme"
        }
    }
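A possible C++ transcription of this pseudocode follows. Vec3, Ray, and the final shooting call are placeholders for whatever the tested ray tracer provides; only the loop structure is taken from the scheme above.

  #include <cmath>

  struct Vec3 { double x, y, z; };
  struct Ray  { Vec3 origin, dir; };

  // Point on the unit sphere given azimuth and elevation angles.
  static Vec3 sphere_point(double azimuth, double elevation)
  {
      return { std::cos(elevation) * std::cos(azimuth),
               std::cos(elevation) * std::sin(azimuth),
               std::sin(elevation) };
  }

  // N angular steps over the full circle; the total number of rays is
  // then roughly (N * N/2)^2 = N^4 / 4, so K = 4e8 corresponds to
  // N of about 200. For test a) below, the unit sphere would be scaled
  // and translated to the scene's bounding sphere.
  void generate_rays(int N)
  {
      const double PI = 3.14159265358979;
      const double step = 2 * PI / N;
      for (double alpha = -PI; alpha < PI; alpha += step)
          for (double beta = -PI/2; beta < PI/2; beta += step) {
              Vec3 P = sphere_point(alpha, beta);
              for (double gamma = -PI; gamma < PI; gamma += step)
                  for (double delta = -PI/2; delta < PI/2; delta += step) {
                      if (alpha == gamma && beta == delta)
                          continue;  // skip the degenerate zero-length ray
                      Vec3 Q = sphere_point(gamma, delta);
                      Ray R = { P, { Q.x - P.x, Q.y - P.y, Q.z - P.z } };
                      (void)R;  // here: shoot R using the tested scheme
                  }
          }
  }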

Now we propose the following testing methods:

a) Compute the center of the scene bounding box and create the bounding sphere that encloses this bounding box. Use the ray generation scheme above; the number of generated rays, say K = 4e8, is statistically sufficient.
b) Assign a tight rectangular bounding box to each object in the scene. Then compute for each such bounding box i its center point Q_i, and compute one center point Q = (1/N) * sum_i Q_i over all the bounding boxes, where N is the number of objects/bounding boxes in the scene. Then, using a binary search, find a bounding sphere centered at Q that contains 90% of all the bounding boxes (a sketch of this search appears after this list). For this bounding sphere, continue as in a).
c) Compute L random walks of fixed length k, say L = 250000 and k = 4. The random walks have a common origin, placed as close to the center of the 90% bounding sphere as possible. Rays are reflected using diffuse BRDF sampling (as in radiosity; see the second sketch after this list) or in a completely random manner. The origin point can be found using an algorithm similar to that used in [5] for the classification of global line segments.
d) Compute a specific rendering task for the scene, for example, ray tracing as defined in the SPD (Standard Procedural Databases) package. This requires defining a viewpoint in the scene to get a nice image, and the material definitions for the scene.
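For test b), the following C++ sketch finds the 90% sphere by binary search on the radius, under the assumption that "contains 90% of all bounding boxes" is decided on the box center points; ninety_percent_radius() and Vec3 are names introduced only for this illustration.

  #include <algorithm>
  #include <cmath>
  #include <vector>

  struct Vec3 { double x, y, z; };

  static double dist(const Vec3& a, const Vec3& b)
  {
      double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
      return std::sqrt(dx * dx + dy * dy + dz * dz);
  }

  // Radius of (approximately) the smallest sphere centered at the
  // average center point Q that contains at least 90% of the centers.
  double ninety_percent_radius(const std::vector<Vec3>& centers)
  {
      Vec3 Q = { 0, 0, 0 };
      for (const Vec3& c : centers) { Q.x += c.x; Q.y += c.y; Q.z += c.z; }
      double n = (double)centers.size();
      Q.x /= n; Q.y /= n; Q.z /= n;

      double lo = 0, hi = 0;
      for (const Vec3& c : centers) hi = std::max(hi, dist(Q, c));

      // Binary search on the radius; each probe counts contained centers.
      for (int it = 0; it < 50; ++it) {
          double mid = (lo + hi) / 2;
          long inside = 0;
          for (const Vec3& c : centers)
              if (dist(Q, c) <= mid) ++inside;
          if (inside >= 0.9 * n) hi = mid; else lo = mid;
      }
      return hi;
  }

One could instead sort the N center distances and pick the 0.9N-th directly; the binary search is shown here only because b) asks for one.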
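For the diffuse reflection in test c), one standard choice is cosine-weighted hemisphere sampling; here is a sketch in local coordinates (z axis along the surface normal), with uniform() standing in for the common seeded generator mentioned below. The proposal does not fix the sampling method, so this is one possible reading of "diffuse BRDF sampling".

  #include <cmath>
  #include <cstdlib>

  struct Dir3 { double x, y, z; };

  // Placeholder for the project's seeded uniform generator on [0,1).
  static double uniform() { return (double)std::rand() / RAND_MAX; }

  // Cosine-weighted (diffuse) direction in local coordinates, with the
  // z axis aligned to the surface normal (Malley's method: sample the
  // unit disk uniformly, then project up onto the hemisphere).
  Dir3 diffuse_sample()
  {
      const double PI = 3.14159265358979;
      double u1 = uniform(), u2 = uniform();
      double r = std::sqrt(u1);      // disk radius
      double phi = 2 * PI * u2;      // disk angle
      return { r * std::cos(phi),
               r * std::sin(phi),
               std::sqrt(1.0 - u1) };  // cos(theta) of the direction
  }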

Test a) uses the whole scene; rays are shot from the outside into the scene. Tests b) and c) shoot rays in the space where most of the objects are present. Obviously, we can imagine constructing artificial scenes for which this method will not fulfill its purpose. The random sequences used for all three tests start with the same seed values, so the generated rays are the same regardless of which acceleration scheme is actually being tested.

Test d) reports the results of ray-shooting for ray tracing, to show how the results of a), b), and c) correlate with the use of ray-shooting in a practical global illumination task.

There are two situations in which the results cannot be reported: when memory is not sufficient to build the acceleration data structure, and when the computation of one task takes more than 10 hours.

When testing the acceleration methods, we will measure several common quantities that can be reported for every acceleration method (a minimal counter structure is sketched after this list):

a) the number of traversal steps per ray.
b) the number of object intersection tests per ray.
c) the ratio of all intersection tests to successful intersection tests (those where the ray really hits the object).
d) the user time needed for a particular ray-shooting task.
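A minimal C++ record for quantities a)-c) might look as follows; d) would come from the system timer (e.g., getrusage() on UNIX) and is left out here. The structure and its names are ours, not part of any existing system.

  #include <cstdio>

  struct RayShootingStats {
      long rays = 0;               // rays shot so far
      long traversal_steps = 0;    // a) traversal steps, summed over rays
      long intersection_tests = 0; // b) object intersection tests
      long successful_tests = 0;   // c) tests where the ray really hits

      void report() const {
          printf("traversal steps per ray:    %.2f\n",
                 (double)traversal_steps / rays);
          printf("intersection tests per ray: %.2f\n",
                 (double)intersection_tests / rays);
          printf("all tests / successful:     %.2f\n",
                 (double)intersection_tests / successful_tests);
      }
  };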

For hierarchies and spatial subdivisions, several other quantities can be reported.

After testing all the scenes and methods we can find out:

1) whether or not a prevalent method exists, over all the scenes or within the groups (speed).
2) which method is the most efficient over the sum of all the scenes and within each group (i.e., robustness of performance).
3) the best method for each scene (our idea is that someone who is going to render a scene can compute its characteristics as we did, find a scene in our database that exhibits roughly the same characteristics, and use the acceleration method and settings that are probably the best ones).

What has already been prepared for this project?

What is missing?

To conclude the proposal: the project will be important not only for settling the dispute about the "best efficiency scheme" from the theoretical point of view, but also for practitioners, since we plan to release the source code of the acceleration methods in some suitable form on the WWW after the project is finished.

Any comments and support are welcome at the e-mail addresses listed above.


References:

[1] L. Szirmay-Kalos and G. Marton: Analysis and construction of worst-case optimal ray shooting algorithms, Computers & Graphics, Vol. 22, No. 2, pp. 167-174, 1998.
[2] K.S. Klimaszewski: Faster ray tracing using adaptive grids and area sampling, PhD Thesis, Dept. of Civil and Environmental Engineering, Brigham Young University, 1994.
[3] GOLEM Raytracing System, http://www.cgg.cvut.cz/GOLEM
[4] F. Cazals and M. Sbert: Some integral geometry tools to estimate the complexity of 3D scenes, Research Report IIiA 97-04-RR, Universidad de Girona, 1997. Also available as INRIA Research Report RR-3204.
[5] M. Feixas, E. del Acebo, P. Bekaert, and M. Sbert: An information theory framework for the analysis of scene complexity, Eurographics '99, 1999.
