This thesis provides a fully automatic framework for analyzing facial actions and head gestures in real time. The framework can be used in scenarios where a machine needs the perceptual ability to recognize, model, and analyze facial actions and head gestures without any manual intervention. Rather than trying to recognize specific prototypical emotional expressions such as joy, anger, surprise, and fear, the system aims to recognize head gestures and upper facial action units such as eyebrow raises, frowns, and squints. These facial action units (AUs) are enumerated in Paul Ekman's Facial Action Coding System (FACS) and are essentially building blocks that can be assembled to form facial expressions. The system first robustly tracks the pupils using an infrared-sensitive camera equipped with infrared LEDs. For each frame, the pupil positions are used to localize the eye and eyebrow regions, which are analyzed using statistical techniques to recover parameters that relate to the shape of the facial features. These parameters are used as input to classifiers based on Support Vector Machines to recognize upper facial action units and all of their possible combinations. The system detects head gestures using Hidden Markov Models that take pupil positions in consecutive frames as observations. The system is evaluated on a completely natural dataset containing substantial head movement, pose changes, and occlusions. The system successfully detects head gestures 78.46% of the time. A recognition accuracy of 67.83% for individual AUs is reported, and the system correctly identifies all possible AU combinations with an accuracy of 61.25%.
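The HMM-based gesture detection described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes frame-to-frame pupil displacement is quantized into discrete symbols (up, down, left, right, still) and compares the forward-algorithm likelihood of a sequence under two hand-set two-state models, one for nods and one for head shakes. In practice the model parameters would be learned from labeled gesture sequences (e.g. with Baum-Welch) rather than set by hand.

```python
import numpy as np

# Observation symbols: quantized frame-to-frame pupil displacement.
UP, DOWN, LEFT, RIGHT, STILL = range(5)

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space for stability."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # alpha[j] = logsumexp_i(alpha[i] + log A[i, j]) + log B[j, o]
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

# Illustrative hand-set parameters (hypothetical, for the sketch only).
# Each model has two hidden states that alternate, emitting mostly one
# displacement direction each.
nod = dict(
    pi=np.array([0.5, 0.5]),
    A=np.array([[0.3, 0.7],
                [0.7, 0.3]]),
    B=np.array([[0.80, 0.05, 0.05, 0.05, 0.05],   # state 0: mostly UP
                [0.05, 0.80, 0.05, 0.05, 0.05]]), # state 1: mostly DOWN
)
shake = dict(
    pi=np.array([0.5, 0.5]),
    A=np.array([[0.3, 0.7],
                [0.7, 0.3]]),
    B=np.array([[0.05, 0.05, 0.80, 0.05, 0.05],   # state 0: mostly LEFT
                [0.05, 0.05, 0.05, 0.80, 0.05]]), # state 1: mostly RIGHT
)

def classify(obs):
    """Label a pupil-motion sequence with the higher-likelihood model."""
    ll_nod = forward_log_likelihood(obs, **nod)
    ll_shake = forward_log_likelihood(obs, **shake)
    return "nod" if ll_nod > ll_shake else "shake"

print(classify([UP, DOWN, UP, DOWN, UP]))    # nod
print(classify([LEFT, RIGHT, LEFT, RIGHT]))  # shake
```

The same likelihood-comparison scheme extends to any number of gesture models; an unrecognized-gesture outcome can be added by thresholding the winning log-likelihood.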