When face patterns are subject to changes in view, illumination, and facial shape, their distribution is highly nonlinear and complex in any space that is linear to the original image space. In this paper, we investigate a nonlinear mapping by which multi-view face patterns in the input space are mapped to invariant points in a low-dimensional (10-D) feature space. Invariance to both illumination and view is achieved in two stages. First, a nonlinear mapping from the input space to a low-dimensional (10-D) feature space is learned from multi-view face examples to achieve illumination invariance. The illumination-invariant feature points of face patterns across views lie on a curve parameterized by the view parameter, and the view parameter of a face pattern can be estimated from the location of its feature point on the curve using a least-squares fit. Second, a nonlinear mapping from the illumination-invariant feature space to another feature space of the same dimension is performed to achieve invariance to both illumination and view; this amounts to a normalization based on the view estimate. Through this two-stage nonlinear mapping, multi-view face patterns are mapped to a zero-mean Gaussian distribution in the final feature space. Properties of the nonlinear mappings and of the Gaussian face distribution are explored and supported by experiments.
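The view-estimation and normalization step can be illustrated with a minimal sketch. Everything here is assumed for illustration: the learned illumination-invariant mapping is replaced by a toy curve in a 10-D space, and the function names (`make_view_curve`, `estimate_view`, `view_normalize`) are hypothetical, not the paper's. The sketch only shows the geometry: fit the view parameter by least squares against the curve, then normalize a feature point by the curve point for that view.

```python
import numpy as np

D = 10  # dimension of the feature space, as in the abstract

def make_view_curve(thetas):
    """Toy stand-in for the learned view curve c(theta) in the 10-D space.

    The real curve is learned from multi-view face examples; here we use a
    smooth synthetic curve spanned by two random basis vectors.
    """
    rng = np.random.default_rng(0)
    basis = rng.standard_normal((2, D))
    return np.cos(thetas)[:, None] * basis[0] + np.sin(thetas)[:, None] * basis[1]

def estimate_view(x, thetas, curve):
    """Least-squares view estimate: the theta minimizing ||x - c(theta)||^2
    over the sampled curve points."""
    idx = np.argmin(np.sum((curve - x) ** 2, axis=1))
    return thetas[idx], curve[idx]

def view_normalize(x, thetas, curve):
    """Second-stage mapping: normalize x by the curve point at the estimated
    view, so patterns from all views collapse toward a zero-mean cluster."""
    _, c = estimate_view(x, thetas, curve)
    return x - c

# Sample the view parameter, e.g. over -90..+90 degrees (in radians)
thetas = np.linspace(-np.pi / 2, np.pi / 2, 181)
curve = make_view_curve(thetas)

# A feature point near the curve at some true view, plus small noise
true_theta = 0.3
noise = 0.01 * np.random.default_rng(1).standard_normal(D)
x = curve[np.argmin(np.abs(thetas - true_theta))] + noise

theta_hat, _ = estimate_view(x, thetas, curve)
z = view_normalize(x, thetas, curve)
print(theta_hat, np.linalg.norm(z))
```

The least-squares fit is realized here as a nearest-point search over a densely sampled curve; with the curve available in closed form, the same estimate could instead be obtained by a continuous 1-D minimization over the view parameter.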