3D Painting on Scanned Surfaces

Maneesh Agrawala <[email protected]>
Andrew C. Beers <[email protected]>
Marc Levoy <[email protected]>

Abstract

We present a system for painting color directly onto the surface of a scanned 3D mesh. Given a physical object, we scan its surface geometry to build the mesh. The user then paints by moving the sensor of a 6D Polhemus space tracker over the surface of the physical object, and color is applied to the corresponding locations on the mesh. Because the physical object itself guides the brush, the interface is direct and intuitive.

1. Introduction

Paint systems are a common tool in computer graphics, and painting on 2D surfaces has been well studied. While many two-dimensional techniques carry over to painting on 3D surfaces, some issues are unique to 3D object painting. The most important consideration in developing a 3D painting system is maintaining an intuitive, precise, and responsive interface: it is crucial that the user be able to place color on the surface mesh easily and accurately.

Many computer graphics studios (including Pixar and Industrial Light and Magic) have developed their own 3D paint programs that use a mouse as the input device. These painting systems are typically used to paint textures onto 3D models that will later be animated. The user paints on a two-dimensional image representing the three-dimensional surface, and the program applies an appropriate transformation to convert the 2D screen-space mouse movements into movements of a virtual paintbrush over the 3D mesh. Hanrahan and Haeberli describe such a system for painting on three-dimensional parameterized meshes using a two-dimensional input device [5]. The main feature of this system, and one we retain in ours, is that painting is done directly on the mesh in a WYSIWYG (What You See Is What You Get) fashion. Its drawback is that the mapping from 2D screen space to the 3D mesh is not always immediately clear to the user.
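
To make this screen-to-surface mapping concrete, the sketch below shows one common way to resolve a 2D mouse position into a 3D brush position: cast a ray from the eye through the pixel and find the closest mesh triangle it hits. This is an illustrative C++ fragment, not the method of [5]; computing the pixel's world-space ray direction from the camera parameters is assumed to happen elsewhere.

    // Illustrative sketch: map a 2D mouse position to a 3D brush position by
    // casting a ray from the eye through the pixel and intersecting the mesh.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    struct Triangle { Vec3 v0, v1, v2; };

    // Moller-Trumbore ray/triangle intersection; returns hit distance t, or -1 on miss.
    double intersect(Vec3 orig, Vec3 dir, const Triangle& tri) {
        const double eps = 1e-9;
        Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
        Vec3 p = cross(dir, e2);
        double det = dot(e1, p);
        if (std::fabs(det) < eps) return -1.0;   // ray parallel to triangle
        Vec3 s = sub(orig, tri.v0);
        double u = dot(s, p) / det;
        if (u < 0.0 || u > 1.0) return -1.0;
        Vec3 q = cross(s, e1);
        double v = dot(dir, q) / det;
        if (v < 0.0 || u + v > 1.0) return -1.0;
        double t = dot(e2, q) / det;
        return t > eps ? t : -1.0;
    }

    // Return the index of the closest triangle hit by the pixel's ray, or -1.
    long pickTriangle(Vec3 eye, Vec3 pixelDir, const std::vector<Triangle>& mesh) {
        long best = -1;
        double bestT = 1e30;
        for (std::size_t i = 0; i < mesh.size(); ++i) {
            double t = intersect(eye, pixelDir, mesh[i]);
            if (t > 0.0 && t < bestT) { bestT = t; best = (long)i; }
        }
        return best;
    }

Once the hit triangle and hit point are known, color can be deposited into the surface's texture or vertex colors at that location.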

This type of system could be extended to use a 3D input device. Movements of a sensor through space would map directly to movements of the virtual paintbrush. Such a system might be difficult to use, however, because there would be no way to "feel" when the paintbrush is touching the mesh surface. This problem could be solved by providing the user with force feedback, the importance of which is well recognized (see [2], [4], [10]).

In our system, 3D computer models are built from physical objects, so these objects are available to serve as a guide for painting. As 3D computer graphics applications have become widespread, the demand for 3D models has led to the development of 3D scanners that can capture the surface geometry of a physical object. Turk and Levoy have recently developed a technique for taking several scans of an object and "zippering" them together to create a complete surface mesh [11]. If a surface mesh has been derived from a physical object in this way, the quickest and most intuitive way to specify where to paint the mesh is to point at the corresponding location on the surface of the physical object.

Our approach is based on this idea. Given a physical object, we scan its surface geometry and then use a 6D Polhemus space tracker as the input device to the painting system. As we move the sensor of the tracker over the surface of the physical object, we paint the corresponding locations on the surface of the scanned mesh. The sensor of the space tracker can thus be thought of as a paintbrush, providing a familiar metaphor for understanding how to use our system.
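
As a rough illustration of this metaphor, the sketch below shows a minimal brush loop under two assumptions: the sensor position has already been transformed into mesh coordinates (registration is discussed in Section 4.1), and color is stored per vertex. The brute-force search and linear falloff are hypothetical simplifications for illustration, not our system's actual data structures or brush model.

    // Minimal sketch of a tracker-driven brush loop: each sensor report,
    // already registered into mesh coordinates, deposits color on every
    // vertex within the brush radius, with flow falling off toward the edge.
    #include <cmath>
    #include <vector>

    struct Vec3  { double x, y, z; };
    struct Color { double r, g, b; };

    struct Vertex { Vec3 pos; Color color; };

    static double dist2(Vec3 a, Vec3 b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx*dx + dy*dy + dz*dz;
    }

    // Blend the brush color onto every vertex within `radius` of the sensor.
    // `flow` in [0, 1] controls how much paint a single report deposits.
    void applyBrush(std::vector<Vertex>& mesh, Vec3 sensorPos,
                    Color brush, double radius, double flow) {
        double r2 = radius * radius;
        for (Vertex& v : mesh) {
            double d2 = dist2(v.pos, sensorPos);
            if (d2 > r2) continue;
            // Full flow at the brush center, tapering to zero at the edge.
            double w = flow * (1.0 - std::sqrt(d2) / radius);
            v.color.r += w * (brush.r - v.color.r);
            v.color.g += w * (brush.g - v.color.g);
            v.color.b += w * (brush.b - v.color.b);
        }
    }

In practice a spatial data structure would replace the linear scan over vertices, since the brush touches only a small neighborhood of the mesh on each report.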

The remainder of this paper is organized as follows. Section 2 describes the organization of our painting system. Section 3 details how our system represents meshes internally. Section 4 discusses the algorithms and methods we use for painting, registration, and combating registration errors. Section 5 presents our results, Section 6 discusses possible future directions of this work, and Section 7 summarizes our conclusions about our system.

2. System Configuration

3. Data Representation

4. Methods

4.1 Object-mesh registration

4.2 Painting

4.3 Brush effects

4.4 Combating registration errors

5. Results

6. Future Directions

7. Conclusions

8. Acknowledgments