3D Painting on Scanned Surfaces

Maneesh Agrawala <[email protected]>
Andrew C. Beers <[email protected]>
Marc Levoy <[email protected]>

Abstract

1. Introduction

2. System Configuration

3. Data Representation

Previous work in 3D painting has allowed painting only on parameterized meshes, or on meshes with texture coordinates pre-assigned at each vertex. Paint or surface properties applied to these meshes can be stored in a texture map, in the former case using the parameter values at points on the mesh as texture coordinates. While Maillot, Yahia, and Verroust have developed a method for parameterizing smooth surface representations [9], there are no general techniques for parameterizing arbitrary surface meshes.

Although a single Cyberware scan results in a parameterized triangle mesh, suitable for use by other 3D painting systems, such a mesh is generally not a complete description of the object: self-occlusions hide some points on the object from a rotational scan. By combining data from multiple scans, Turk and Levoy's zippering algorithm [11] produces a more complete mesh for the object. However, the resulting mesh is irregular and unparameterized, so we lose the ability to store surface characteristics in texture maps.

To paint on unparameterized meshes, we store surface characteristics (e.g., color and lighting-model coefficients) at each mesh vertex. When painting on the object, these surface characteristics are changed only at the mesh vertices. We render the mesh using SGI hardware Gouraud shading to interpolate color across each triangle from the values at its vertices. Because we do not require regular or parameterized meshes, our algorithm works with meshes acquired from many different scanning technologies, including hand digitizers, CT scanners, and MRI scanners. CT and MRI scanners produce volume data rather than a surface mesh, so an algorithm like marching cubes [8] would be required to convert the volume data set into a suitable mesh representation.
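
The per-vertex storage might look like the following C++ sketch. The exact fields are our assumption (the text above does not spell out a layout); the essential point is that all paintable attributes live on the vertices, so no parameterization or texture coordinates are needed.

    #include <vector>

    // A minimal sketch of per-vertex surface storage, assuming a
    // Phong-style lighting model; the field choices are an assumption.
    struct SurfaceVertex {
        float position[3];
        float normal[3];
        float color[3];    // painted diffuse color; Gouraud shading
                           // interpolates it across each triangle
        float ambient;     // lighting-model coefficients, also stored
        float specular;    // (and paintable) per vertex
        float shininess;
    };

    struct Triangle { int v[3]; };   // indices into the vertex array

    struct Mesh {
        std::vector<SurfaceVertex> vertices;   // no texture coordinates
        std::vector<Triangle> triangles;       // or regular structure
    };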

Since we only have color information at the vertices of the mesh polygons, the polygons should be small enough to avoid sampling artifacts when displaying the mesh. As Cook, Carpenter, and Catmull point out in their description of the REYES rendering architecture [3], this is possible when polygons are on the order of half a pixel in size. Due to memory constraints, we typically paint on meshes whose triangles are about the size of a pixel when the mesh is displayed at a "reasonable" size (e.g., a quarter of the size of the monitor). We have implemented controls for scaling the display of the mesh, so it is always possible to reduce its display size to achieve subpixel color accuracy.
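
For example, the display scale needed for subpixel triangles can be estimated from the mesh's average edge length. The sketch below assumes a simplified uniform mapping from world units to screen pixels; the function and its parameters are hypothetical, not part of the system's actual interface.

    #include <algorithm>

    // Largest display scale at which an average triangle edge projects
    // to at most one pixel. pixelsPerUnit is the screen-space
    // magnification of the mesh at scale 1 (an assumed camera model).
    double subpixelScale(double avgEdgeLength, double pixelsPerUnit) {
        double projected = avgEdgeLength * pixelsPerUnit;  // pixels/edge
        return std::min(1.0, 1.0 / projected);  // shrink only if needed
    }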

Since we would like to use a mesh with small triangles, the number of triangles in a typical mesh may be quite large. We therefore augment the triangle mesh with a spatial data structure that allows us to find mesh vertices quickly: we uniformly voxelize space and associate with each voxel a list of the mesh vertices it contains. Storing these voxels in a hash table gives us nearly constant-time access to any vertex on the mesh, given a point close to it. Alternatively, we could have used a hierarchical representation, such as an octree.
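
A sketch of such a voxel hash appears below. The voxel size, class names, and hash-mixing constants are our assumptions; only the spatial-hashing idea comes from the text above.

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct VoxelKey {
        int x, y, z;
        bool operator==(const VoxelKey& o) const {
            return x == o.x && y == o.y && z == o.z;
        }
    };

    struct VoxelKeyHash {
        size_t operator()(const VoxelKey& k) const {
            // Mix the three integer coordinates into one hash value.
            return std::hash<int64_t>()((int64_t)k.x * 73856093 ^
                                        (int64_t)k.y * 19349663 ^
                                        (int64_t)k.z * 83492791);
        }
    };

    class VoxelGrid {
    public:
        explicit VoxelGrid(double voxelSize) : size_(voxelSize) {}

        // Quantize a point to the voxel containing it.
        VoxelKey keyFor(const double p[3]) const {
            return { (int)std::floor(p[0] / size_),
                     (int)std::floor(p[1] / size_),
                     (int)std::floor(p[2] / size_) };
        }

        // Record that vertex `index` lies in the voxel containing `p`.
        void insert(const double p[3], int index) {
            table_[keyFor(p)].push_back(index);
        }

        // Vertices in the voxel containing `p`, or null if it is empty.
        const std::vector<int>* verticesNear(const double p[3]) const {
            auto it = table_.find(keyFor(p));
            return it == table_.end() ? nullptr : &it->second;
        }

    private:
        double size_;
        std::unordered_map<VoxelKey, std::vector<int>, VoxelKeyHash> table_;
    };

During painting, the brush position would be quantized to a voxel key, and that voxel (plus its neighbors, for a brush wider than one voxel) searched for vertices within the brush radius.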

We do not use a simple 3D array indexed by voxel location because most meshes will contain large empty regions in voxel space. By using a hash table, we do not explicitly store the empty regions of voxel space, which results in a tremendous reduction in memory usage.

4. Methods

4.1 Object–mesh registration

4.2 Painting

4.3 Brush effects

4.4 Combating registration errors

5. Results

6. Future Directions

7. Conclusions

8. Acknowledgments