Human Centered Interfaces

Computers nowadays try to interpret certain human characteristics in order to react to their users more appropriately. These characteristics include facial expressions, eye gaze, body gait, speech, etc. Many applications, such as virtual reality, videoconferencing, user profiling, customer satisfaction studies for broadcast and web services, and interfaces for people with special needs, require efficient facial expression recognition in order to achieve the desired results.

The six basic facial expressions are defined as anger, disgust, fear, happiness, sadness and surprise. A set of muscle movements (Facial Action Units, FAUs) was defined to produce these facial expressions, forming the Facial Action Coding System (FACS).
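As an illustration of how FAUs encode expressions, the sketch below lists FAU combinations commonly associated with the six basic expressions in the FACS literature. These particular prototypes are not taken from this page and vary between sources; treat them as examples only.

```python
# Commonly cited FAU combinations for the six basic expressions (illustrative only;
# exact prototypes vary between FACS sources).
EXPRESSION_TO_FAUS = {
    "anger":     [4, 5, 7, 23],        # brow lowerer, upper lid raiser, lid tightener, lip tightener
    "disgust":   [9, 15, 16],          # nose wrinkler, lip corner depressor, lower lip depressor
    "fear":      [1, 2, 4, 5, 20, 26],
    "happiness": [6, 12],              # cheek raiser, lip corner puller
    "sadness":   [1, 4, 15],           # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  [1, 2, 5, 26],        # brow raisers, upper lid raiser, jaw drop
}

print(EXPRESSION_TO_FAUS["happiness"])  # [6, 12]
```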


An example of each facial expression for a poser from the Cohn-Kanade database

Facial expressions are generally hard to recognize as:

  • Every person expresses emotions in a different way; no universal patterns are available.
  • The conditions must be ideal, meaning that a full frontal pose of the subject has to be available.
  • The neutral state has to be located in each video in order to define the fully expressive video frame and thus perform facial expression recognition.
  • Few proper databases are available, and creating a new one is difficult, as supervision by psychologists is required.


Our Method

A novel method that performs facial expression recognition using fusion of texture and shape information has been developed:

  • Extracts the texture information from the difference images of a video (calculated using the first, neutral, and the last, fully expressive, frame of the video) using the Discriminant Non-Negative Matrix Factorization (DNMF) algorithm (see the sketch after this list).
  • Extracts the shape information from the geometrical displacement of facial features between those frames and classifies it using Support Vector Machines (SVMs).
  • Fuses the above two kinds of information using either SVMs or Radial Basis Function (RBF) Neural Networks (NNs).
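The texture branch can be sketched roughly as follows. DNMF itself (NMF with additional class-discriminant constraints) is not available in standard libraries, so this minimal sketch substitutes scikit-learn's plain NMF; the frame sizes, number of basis images and random data are all hypothetical, not taken from the papers.

```python
import numpy as np
from sklearn.decomposition import NMF

def difference_image(first_frame: np.ndarray, last_frame: np.ndarray) -> np.ndarray:
    """Difference image between the fully expressive (last) and neutral (first) frame."""
    return np.abs(last_frame.astype(np.float64) - first_frame.astype(np.float64))

# Hypothetical training set: each row is one video's flattened difference image.
rng = np.random.default_rng(0)
X = rng.random((60, 40 * 40))        # 60 videos, 40x40-pixel difference images

# Plain NMF as a stand-in for DNMF: X is approximated by W @ H with non-negative factors.
nmf = NMF(n_components=25, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)             # per-video texture features (coefficients)
H = nmf.components_                  # learned non-negative basis images
print(W.shape, H.shape)              # (60, 25) (25, 1600)
```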

Introducing fusion resolves certain confusions between expressions. For example, in cases where the gaze has changed, the geometrical information carries little evidence of the change, while the texture information is able to capture it properly.
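One plausible way to realize the fusion step is feature-level concatenation followed by a single classifier, as sketched below with an RBF-kernel SVM from scikit-learn (an RBF neural network would simply replace the final classifier). The feature dimensions, labels and data here are hypothetical, not taken from the papers.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-video feature vectors produced by the two branches.
rng = np.random.default_rng(1)
texture_feats = rng.random((60, 25))   # e.g. DNMF coefficients
shape_feats = rng.random((60, 10))     # e.g. geometrical/shape descriptors
labels = rng.integers(0, 6, size=60)   # the six basic expressions

# Feature-level fusion: concatenate both representations, train one classifier.
fused = np.hstack([texture_feats, shape_feats])
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(fused, labels)
print(clf.predict(fused[:5]))          # predicted expression labels
```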

The accuracy achieved is 94.5% for facial expression recognition and 92.1% for FAU detection.


System architecture for the fusion system


Downloads

-


Relevant Publications

I. Kotsia and I. Pitas, "Facial expression recognition using shape and texture information", in Proc. of Int. Federation for Information Processing Conf. on Artificial Intelligence (IFIPAI 2006), Santiago, Chile, 21-24 August, 2006.

I. Kotsia, N. Nikolaidis and I. Pitas, "Fusion of Geometrical and Texture information for facial expression recognition", in Proc. of Int. Conf. on Image Processing (ICIP 2006), Atlanta, GA, USA, 8-11 October, 2006.


Research Projects

SIMILAR - The European research taskforce creating human-machine interfaces SIMILAR to human-human communication, IST, FP6


© 2006