Subtle Facial Expression Recognition Becomes Practicable (2009.4.5)
In the Steven Spielberg movie A.I., robots not only resemble humans in looks but read their facial expressions, reacting accordingly. With the new technology developed by Professor Daijin Kim (Department of Computer Science and Engineering) and his group, the movie may soon become reality.
Automatic expression analysis is difficult because people’s faces vary so much. Existing software could effectively recognize only extreme expressions, which is not particularly helpful in real-life situations. But the system developed by Professor Kim’s team recognizes even slight facial movements.
The technology, which automatically detects the four primary facial expressions (joy, anger, sadness, and surprise), uses motion magnification to convert subtle expressions into exaggerated ones.
Professor Kim’s group programmed software to identify 27 facial feature points, such as the eight points around the mouth and three around the nostrils, and to track how each point moved in a sequence of images for each facial expression. The system then exaggerates the movements in each sequence, generating extreme facial expressions which existing software can identify.
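The core idea can be sketched in a few lines: amplify each tracked point's displacement from its neutral position so a faint expression becomes an extreme one. The following Python sketch is illustrative only; the point coordinates and the amplification factor are assumptions, not details from the published system.

```python
# Illustrative sketch of motion magnification on tracked facial
# feature points. Feature detection and tracking are assumed to have
# already produced per-frame point coordinates; alpha is a
# hypothetical amplification factor.

def magnify_motion(neutral, tracked, alpha=3.0):
    """Exaggerate each point's displacement from the neutral frame by
    alpha, turning a subtle expression into an extreme one that a
    conventional expression classifier can recognize."""
    magnified = []
    for (x0, y0), (x, y) in zip(neutral, tracked):
        dx, dy = x - x0, y - y0              # subtle displacement
        magnified.append((x0 + alpha * dx, y0 + alpha * dy))
    return magnified

# Example: one mouth-corner point shifting slightly upward (a faint smile)
neutral = [(100.0, 200.0)]
subtle = [(101.0, 197.0)]
print(magnify_motion(neutral, subtle))  # [(103.0, 191.0)]
```

With alpha = 3.0 the one-pixel horizontal and three-pixel vertical shift becomes a clearly visible movement, which is the exaggerated expression fed to the recognizer.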
In tests in which 80 subtle expressions of neutral, happy, or angry feelings were exaggerated in this way, the system judged them correctly 88% of the time.
The team has used the technique to allow a robot with a human-like face to read and mimic a person’s subtle expressions. They also hope to install the software on digital cameras and camera phones, says Professor Kim, so the devices can do things like automatically take a picture when the subject smiles.
When applied to human sensing, the technique could also be used in areas including biometric information systems, smart home control, rehabilitation and health services, human-machine interaction, and secretarial services. The primary goal is to apply the technology to improve the quality of life of the elderly and disabled.
The research is jointly conducted with Professor Takeo Kanade’s group of the Robotics Institute at Carnegie Mellon University, with the financial support of the Korean Ministry of Education, Science and Technology.
Professor Daijin Kim
Department of Computer Science and Engineering