Texture Synthesis by Fixed Neighborhood Searching

Li-Yi Wei, Ph.D. dissertation, Stanford University, November 2001

Abstract:

Textures can describe a wide variety of natural phenomena with random variations over repeating patterns. Examples of textures include images, motions, and surface geometry. Since reproducing the realism of the physical world is a major goal for computer graphics, textures are important for rendering synthetic images and animations. However, because textures are so diverse, it is difficult to describe and reproduce them under a common framework.

In this thesis, we present new methods for synthesizing textures. The first part of the thesis is concerned with a basic algorithm for reproducing image textures. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. The algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. Because it relies on this deterministic search rather than probability sampling, our algorithm avoids the computational demand of sampling and can be directly accelerated by a point-searching algorithm such as tree-structured vector quantization.
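
To illustrate this kind of neighborhood-search synthesis, the following minimal single-resolution sketch copies, for each output pixel, the sample pixel whose neighborhood best matches the current output neighborhood. The function names, the noise initialization, the toroidal square neighborhoods, and the brute-force search are illustrative simplifications only; the thesis algorithm additionally uses multiresolution pyramids, causal neighborhoods, and TSVQ acceleration.

```python
import numpy as np

def synthesize(sample, out_shape, n=5, seed=0):
    """Minimal single-resolution neighborhood-search synthesis (sketch)."""
    rng = np.random.default_rng(seed)
    h, w = out_shape
    sh, sw = sample.shape
    half = n // 2
    # Start from noise drawn from the sample so early neighborhoods are defined.
    out = sample[rng.integers(0, sh, (h, w)), rng.integers(0, sw, (h, w))]

    # Collect every fully contained n x n neighborhood of the sample once.
    coords, patches = [], []
    for y in range(half, sh - half):
        for x in range(half, sw - half):
            coords.append((y, x))
            patches.append(sample[y - half:y + half + 1,
                                  x - half:x + half + 1].ravel())
    patches = np.asarray(patches, dtype=np.float64)

    offs = np.arange(-half, half + 1)
    for y in range(h):
        for x in range(w):
            ny = (y + offs[:, None]) % h          # toroidal output neighborhood
            nx = (x + offs[None, :]) % w
            query = out[ny, nx].ravel().astype(np.float64)
            # Exhaustive nearest-neighbor search; the thesis accelerates this
            # step with tree-structured vector quantization (TSVQ).
            best = int(np.argmin(((patches - query) ** 2).sum(axis=1)))
            out[y, x] = sample[coords[best]]
    return out

# Usage: synthesize a 64 x 64 texture from a 32 x 32 grayscale sample.
sample = (np.indices((32, 32)).sum(axis=0) % 8).astype(np.float64)
result = synthesize(sample, (64, 64), n=5)
```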

The second part of the thesis concerns various extensions and applications of our texture synthesis algorithm. Texture synthesis can be used to remove undesirable artifacts in photographs and films, such as scratches, wires, pops, or scrambled regions. We extend our algorithm for this purpose by replacing artifacts with textured backgrounds via constrained synthesis. In addition to 2D images, textures can also model other physical phenomena, including 3D temporal textures such as fire, smoke, and ocean waves, as well as 1D articulated motion signals such as walking and running. Despite their differing dimensionalities and generation processes, our algorithm can model and generate all of these textures under a common framework.
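
To make the constrained-synthesis idea concrete, here is a minimal hole-filling sketch in the same spirit: pixels inside a user-supplied mask are re-synthesized by matching neighborhoods against candidates drawn from the undamaged part of the image, so the surrounding texture constrains the result. The function name, the boolean hole_mask input, the per-pixel weights, and the two-pass schedule are illustrative assumptions, not the exact procedure from the thesis.

```python
import numpy as np

def fill_region(image, hole_mask, n=5, passes=2):
    """Constrained synthesis sketch: re-texture pixels where hole_mask is True
    by neighborhood matching against the undamaged part of the image."""
    half = n // 2
    h, w = image.shape
    out = image.copy()
    valid = ~hole_mask

    # Candidate neighborhoods come from fully valid windows of the image.
    coords, patches = [], []
    for y in range(half, h - half):
        for x in range(half, w - half):
            if valid[y - half:y + half + 1, x - half:x + half + 1].all():
                coords.append((y, x))
                patches.append(out[y - half:y + half + 1,
                                   x - half:x + half + 1].ravel())
    patches = np.asarray(patches, dtype=np.float64)

    offs = np.arange(-half, half + 1)
    holes = list(zip(*np.nonzero(hole_mask)))
    weights = valid.astype(np.float64)       # trust only pixels outside the hole at first
    for _ in range(passes):
        for y, x in holes:
            ny = (y + offs[:, None]) % h
            nx = (x + offs[None, :]) % w
            q = out[ny, nx].ravel().astype(np.float64)
            wv = weights[ny, nx].ravel()
            # Distances ignore pixels that have no trusted value yet.
            d = (((patches - q) ** 2) * wv).sum(axis=1)
            out[y, x] = image[coords[int(np.argmin(d))]]
        weights = np.ones_like(weights)      # later passes may use the filled-in values
    return out

# Usage: scratch a band out of a synthetic texture and re-synthesize it.
img = (np.indices((64, 64)).sum(axis=0) % 8).astype(np.float64)
mask = np.zeros(img.shape, dtype=bool)
mask[30:34, 10:50] = True                    # the "scratch" to remove
repaired = fill_region(img, mask, n=5)
```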

Most existing texture synthesis algorithms take a single texture as input and generate an output texture with similar visual appearance. Although the output texture can be of arbitrary size and duration, those techniques can at best replicate the characteristics of the input texture. In the third part of this thesis, we present a new method that can create textures in interesting ways in addition to mimicking existing ones. The algorithm takes multiple textures with possibly different characteristics and synthesizes new textures with the combined visual appearance of all the inputs. We present two important applications of multiple-source synthesis: generating texture mixtures and generating solid textures from multiple 2D views.
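
One simple way to realize a multiple-source search within the same neighborhood-matching framework is to pool candidate neighborhoods from all inputs into one search set, recording which exemplar each candidate came from, so the per-pixel search can copy values from any source. The sketch below only builds such a pool and is an illustration under assumed names and toy exemplars, not the thesis's specific mixture or solid-texture procedure.

```python
import numpy as np

def pooled_candidates(samples, n=5):
    """Build one candidate set from several exemplars (illustrative sketch).

    Each candidate records its source exemplar, its location, and its n x n
    neighborhood, so a per-pixel neighborhood search over this pool can copy
    values from any input and the output blends their appearance."""
    half = n // 2
    sources, coords, patches = [], [], []
    for s, sample in enumerate(samples):
        sh, sw = sample.shape
        for y in range(half, sh - half):
            for x in range(half, sw - half):
                sources.append(s)
                coords.append((y, x))
                patches.append(sample[y - half:y + half + 1,
                                      x - half:x + half + 1].ravel())
    return (np.asarray(sources),
            coords,
            np.asarray(patches, dtype=np.float64))

# Usage: two toy exemplars contribute candidates to a single pool; the
# per-pixel search then proceeds exactly as in the single-source sketch.
a = (np.indices((24, 24)).sum(axis=0) % 6).astype(np.float64)
b = (np.indices((24, 24))[0] % 4).astype(np.float64)
sources, coords, patches = pooled_candidates([a, b], n=5)
```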

In the fourth part of the thesis, we provide designs and extensions that target our algorithm at real-time graphics hardware and applications. Unlike certain procedural texture synthesis algorithms, which can evaluate each texel independently on the fly, our algorithm requires texels to be computed sequentially in order to maintain the consistency of the synthesis results. This limits its suitability for real-time applications. We address this issue by presenting a new method that allows texels to be computed in any order while guaranteeing identical results, making the algorithm practical for real-time use. We also present possible hardware designs for a real-time texture generator that could replace traditional texture mapping hardware.
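
A small sketch of the order-independence idea, under assumed parameters (the toy exemplar, the 3 x 3 neighborhood, the hash-based initialization, and the function names are illustrative rather than the thesis's actual design): each texel at pass p is a pure function of a fixed neighborhood of texels at pass p-1, and pass 0 is a deterministic hash of the coordinates, so texels can be evaluated lazily, on demand, and in any order with identical results.

```python
import numpy as np
from functools import lru_cache

SAMPLE = (np.indices((16, 16)).sum(axis=0) % 8).astype(np.float64)  # toy exemplar
SIZE = 64      # toroidal output resolution
HALF = 1       # 3 x 3 neighborhoods

# Precompute every 3 x 3 neighborhood of the exemplar once.
_coords, _patches = [], []
for sy in range(HALF, SAMPLE.shape[0] - HALF):
    for sx in range(HALF, SAMPLE.shape[1] - HALF):
        _coords.append((sy, sx))
        _patches.append(SAMPLE[sy - HALF:sy + HALF + 1,
                               sx - HALF:sx + HALF + 1].ravel())
_patches = np.asarray(_patches)

@lru_cache(maxsize=None)
def texel(x, y, p):
    """Value of output texel (x, y) after p correction passes."""
    if p == 0:
        # Deterministic pseudo-random initialization from the coordinates alone.
        h = (x * 73856093 ^ y * 19349663) & 0xFFFFFFFF
        sy, sx = _coords[h % len(_coords)]
        return SAMPLE[sy, sx]
    # Gather the previous pass's neighborhood and copy the best sample match;
    # the value depends only on pass p-1, never on evaluation order.
    q = np.array([texel((x + dx) % SIZE, (y + dy) % SIZE, p - 1)
                  for dy in range(-HALF, HALF + 1)
                  for dx in range(-HALF, HALF + 1)])
    best = int(np.argmin(((_patches - q) ** 2).sum(axis=1)))
    sy, sx = _coords[best]
    return SAMPLE[sy, sx]

# Any request order gives the same value for a given texel and pass count,
# so texels can be generated on demand, e.g. only where they are visible.
v = texel(3, 7, 2)
assert v == texel(3, 7, 2)
```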

In the last part of the thesis, we analyze the behavior of our algorithm and discuss potential future work.
