Unified Dimensionality Reduction: Formulation, Solution and Beyond

  • Shuicheng Yan | University of Illinois

In this talk, I will address the feature dimensionality reduction problem within a unified framework, from three aspects:

  1. Graph Embedding and Extensions: A unified framework for general dimensionality reduction

Over the past decades, a large family of algorithms, supervised or unsupervised, stemming from statistics or from geometry theory, has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, I present a general formulation known as graph embedding that unifies them in a common framework. Under graph embedding, each algorithm can be considered as the direct graph embedding, or its linear/kernel/tensor extension, of some specific intrinsic graph characterizing a desired statistical or geometric property of a data set. Furthermore, the graph embedding framework can serve as a general platform for developing new dimensionality reduction algorithms, which I validate with an example algorithm, Marginal Fisher Analysis (MFA).
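To make the direct graph embedding concrete, the sketch below solves the standard formulation: minimize the weighted sum of squared embedding distances, sum_ij w_ij ||y_i - y_j||^2, subject to a constraint matrix B, which reduces to the smallest generalized eigenvectors of L y = λ B y with L the intrinsic-graph Laplacian. The function and variable names are my own illustration, not the talk's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def direct_graph_embedding(W_intrinsic, B, d):
    """Direct graph embedding (sketch).

    W_intrinsic : (n, n) symmetric nonnegative similarity matrix
                  of the intrinsic graph.
    B           : (n, n) positive-definite constraint matrix
                  (e.g. the degree matrix, or a penalty-graph Laplacian).
    d           : target dimensionality.

    Returns the (n, d) embedding given by the generalized eigenvectors
    of L y = lam * B y with the smallest nonzero eigenvalues.
    """
    D = np.diag(W_intrinsic.sum(axis=1))
    L = D - W_intrinsic          # Laplacian of the intrinsic graph
    vals, vecs = eigh(L, B)      # generalized eigenproblem, ascending order
    return vecs[:, 1:d + 1]      # skip the trivial constant eigenvector

# Example: a 4-node path graph; with B = D this is Laplacian eigenmaps,
# and the 1-D embedding orders the nodes along the path.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
Y = direct_graph_embedding(A, np.diag(A.sum(axis=1)), 1)
```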

  2. Trace Ratio: A unified solution for general dimensionality reduction

A large family of algorithms for dimensionality reduction ends with solving a Trace Ratio problem of the form arg max_W Tr(W^T Sp W) / Tr(W^T Sl W), which is generally transformed into the corresponding Ratio Trace form arg max_W Tr[(W^T Sl W)^-1 (W^T Sp W)] to obtain a closed-form but inexact solution. I propose an efficient iterative procedure to solve the Trace Ratio problem directly. In each step, a Trace Difference problem arg max_W Tr[W^T (Sp - λ Sl) W] is solved, with λ being the trace ratio value computed from the previous step. Convergence of the projection matrix W, as well as the global optimality of the trace ratio value λ, is proven based on point-to-set map theories.
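The iteration described above can be sketched in a few lines: each Trace Difference subproblem is solved by taking the top-d eigenvectors of Sp - λ Sl, then λ is updated to the new trace ratio. This is a minimal illustration under my own assumptions (orthonormal W, symmetric Sp and Sl), not the talk's exact algorithm.

```python
import numpy as np

def trace_ratio(Sp, Sl, d, n_iter=100, tol=1e-10):
    """Iteratively maximize Tr(W^T Sp W) / Tr(W^T Sl W) over
    orthonormal (n, d) matrices W. Sketch of the iterative procedure;
    parameter names and defaults are assumptions."""
    n = Sp.shape[0]
    W = np.eye(n)[:, :d]
    lam = 0.0
    for _ in range(n_iter):
        # Trace Difference step: arg max_W Tr[W^T (Sp - lam*Sl) W]
        # is spanned by the top-d eigenvectors of (Sp - lam*Sl).
        vals, vecs = np.linalg.eigh(Sp - lam * Sl)
        W = vecs[:, np.argsort(vals)[::-1][:d]]
        # Update lam to the trace ratio achieved by the new W.
        new_lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Sl @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam

# Example: with Sl = I and d = 1, the optimum is the largest
# eigenvalue of Sp, here 3.
W, lam = trace_ratio(np.diag([3., 2., 1.]), np.eye(3), 1)
```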

  3. Element Rearrangement for Promoting Tensor Subspace Learning

I will introduce an algorithm for promoting tensor-based subspace learning by rearranging the positions of elements within a tensor. Monotonic convergence of the algorithm is proven using an auxiliary function analogous to the one used to prove convergence of the Expectation-Maximization algorithm.

Speaker Details

Dr. Shuicheng Yan (http://www.ifp.uiuc.edu/~scyan/) is currently a Postdoctoral Research Fellow with Prof. Thomas Huang at the Image Formation and Processing Lab, University of Illinois at Urbana-Champaign. His research interests include Event Analysis in Surveillance, Face Analysis (identity, age, pose, emotion, 2D/3D shape), Dimensionality Reduction Techniques (subspace learning, manifold learning, and tensor-based learning), Data Mining on the Web, and Industrial Applications. Before joining Prof. Huang's group, he helped lead the Face Recognition Project sponsored by the Innovation and Technology Fund (ITC/ITF) of Hong Kong at the Multimedia Lab of the Chinese University of Hong Kong, which was headed by Prof. Xiaoou Tang. Mentored by Dr. Hong-Jiang Zhang, he was a long-term intern at MSRA for three years, during which he also received his Ph.D. degree in June 2004 from the Applied Mathematics Department (advisor: Prof. Qiansheng Cheng), Peking University, China.
