Uniformly textured surfaces in 3D scenes provide important cues for image understanding. Texture can be used both for segmentation and for 3D shape inference. Unfortunately, virtually all current algorithms are based on assumptions that make it impossible to do texture segmentation and shape-from-texture in the same image. Texture segmentation algorithms rely on an absence of the 3D effects that tend to distort the texture. Shape-from-texture algorithms depend on these effects, but rely on the texture being already segmented. To truly understand texture in images, texture segmentation and shape-from-texture must be viewed as a combined problem to be solved simultaneously. We present a solution to this problem with a region-growing algorithm that explicitly accounts for perspective distortions of otherwise uniform texture. We use the image spectrogram to compute local surface normals, which are in turn used to “frontalize” the texture. These frontalized texture patches are then subjected to a region-growing algorithm based on similarity in the local frequency domain and a minimum description length criterion. We show results of our algorithm on real texture images taken in the lab and outdoors.
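The frontalization step mentioned above can be sketched as an affine unwarping of a texture patch given its estimated surface normal. The sketch below assumes an orthographic camera looking down the z-axis, under which a slanted plane compresses the texture by cos(slant) along the tilt direction; `frontalize` is an illustrative helper name, not the paper's implementation, and the (row, col) axis convention is an assumption.

```python
import numpy as np
from scipy.ndimage import affine_transform

def frontalize(patch, normal):
    """Undo the foreshortening of `patch` implied by a unit surface normal.

    Assumes an orthographic camera along -z: a plane with normal
    (nx, ny, nz) appears compressed by cos(slant) = |nz| along the tilt
    direction (nx, ny). Illustrative sketch only; the paper derives the
    normal from the local image spectrogram.
    """
    nx, ny, nz = np.asarray(normal, dtype=float) / np.linalg.norm(normal)
    cos_slant = abs(nz)
    tilt = np.arctan2(ny, nx)          # direction of steepest slant in the image
    c, s = np.cos(tilt), np.sin(tilt)
    R = np.array([[c, -s], [s, c]])    # rotate tilt axis onto the first axis
    # affine_transform maps output coords -> input coords, so to stretch the
    # patch by 1/cos(slant) we compress output coordinates by cos(slant):
    M = R @ np.diag([cos_slant, 1.0]) @ R.T
    center = (np.array(patch.shape) - 1) / 2.0
    offset = center - M @ center       # keep the patch centered
    return affine_transform(patch, M, offset=offset, order=1, mode="nearest")
```

For a fronto-parallel normal (0, 0, 1) the map reduces to the identity, so the patch is returned unchanged; a slanted normal stretches the texture back toward its undistorted, uniform appearance before the frequency-domain similarity comparison.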