My experience at ACII 2011

I attended several sessions of the Affective Computing and Intelligent Interaction (ACII) 2011 conference and an associated workshop named Machine Learning and Affective Computing (MLAC), held from the 9th to the 12th of October. It was a great experience to attend a conference directly related to my area of work. Several important points about current research trends came up in the sessions that are worth some thoughtful discussion.

The Use of Active Appearance Models

With Professor Jeffrey Cohn

Renowned psychology researcher Professor Jeffrey Cohn, in his presentation “Machine Learning for Affective Computing: Findings and Issue”, showed how Active Appearance Models (AAMs) have paved the way for experiments that were not possible to conduct earlier. For example, the ability of AAMs to accurately model and track a person’s head movement and facial expressions is used to generate almost real-time (~0.5 s delay) avatars. These avatars were used to mediate interaction between two people in several psychological experiments. It was shown, for example, that people can interact as easily with an avatar as with a normal human being. However, these interactions can be manipulated by computationally attenuating the expressions shown by the avatar, by immobilizing its head movement, or by changing the avatar’s gender (showing a male participant as a female avatar and vice versa). In those cases, people do not perceive such interactions as spontaneous human-human interactions. Prof. Cohn also showed the effect of smile dynamics, i.e. how the perceived “spontaneity” of a smile is related to its temporal evolution. In a later presentation, “FAST-FACS: A Computer-Assisted System to Increase Speed and Reliability of Manual FACS Coding”, he also showed how manual annotation effort can be reduced by about 40% through a semi-automatic annotation process.
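For readers unfamiliar with the technique, the toy sketch below (my own illustration, not Prof. Cohn’s system) shows the statistical shape model at the core of an AAM: landmark configurations are expressed as a mean shape plus a few PCA modes, so a face shape can be summarized, and tracked, through a handful of mode coefficients. All data here is synthetic.

```python
# Toy illustration (not Prof. Cohn's system): the point-distribution/shape model
# at the core of an AAM. Shapes are a mean plus a few PCA modes; a new shape is
# summarized by the coefficients of those modes.
import numpy as np

rng = np.random.default_rng(0)

# Fake training data: 50 "faces", each with 68 (x, y) landmarks, flattened.
base = rng.normal(size=68 * 2)
shapes = base + 0.05 * rng.normal(size=(50, 68 * 2))

mean_shape = shapes.mean(axis=0)
# PCA via SVD of the centered shapes.
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
modes = Vt[:5]                      # keep the 5 strongest modes of variation

def encode(shape):
    """Project a landmark shape onto the model's modes."""
    return modes @ (shape - mean_shape)

def decode(params):
    """Reconstruct a shape from mode coefficients."""
    return mean_shape + modes.T @ params

new_shape = base + 0.05 * rng.normal(size=68 * 2)
params = encode(new_shape)
reconstruction = decode(params)
print("reconstruction error:", np.linalg.norm(reconstruction - new_shape))
```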

Human-centric Machine Learning

Exciting results become possible when humans are incorporated into different aspects of machine learning. This point was underscored by the invited speaker from Microsoft Research, Ashish Kapoor, in his presentation “Human-Centric Machine Learning”. He argued that human interaction can be used for much more than data labeling, and showed several of his works in which human feedback is used to rectify the decision boundaries of classifiers, to incorporate prior experience, and in many other exciting ways.
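To make this concrete, the sketch below is my own illustration (not from the talk) of one simple form of human-in-the-loop learning: an uncertainty-driven loop in scikit-learn where a person is repeatedly asked to label the point the classifier is least sure about, which directly reshapes its decision boundary. The dataset, the seed labels, and the simulated “human” oracle are all assumptions made for the example.

```python
# Minimal human-in-the-loop sketch (illustrative, not Kapoor's method):
# the classifier repeatedly asks a "human" to label its most uncertain point,
# and that feedback reshapes the decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Start with a small seed of labeled points (5 from each class).
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression()
for _ in range(20):
    clf.fit(X[labeled], y[labeled])
    # Query the unlabeled point the model is least certain about.
    proba = clf.predict_proba(X[unlabeled])
    query = unlabeled[int(np.argmin(proba.max(axis=1)))]
    # A real system would ask a person here; we reuse the true label instead.
    labeled.append(query)
    unlabeled.remove(query)

clf.fit(X[labeled], y[labeled])
print("accuracy after 20 rounds of human feedback:", clf.score(X, y))
```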

More on Smile Dynamics

More research on types of smiles was presented in the paper “Are You Friendly or Just Polite? – Analysis of Smiles in Spontaneous Face-to-Face Interactions” by Mohammed Hoque, Louis-Philippe Morency and Rosalind Picard. It was shown that most of the spontaneity information lies in how long a smile takes to rise from onset to peak and to decay from peak to offset. Mohammed Hoque mentioned that the MIT Media Lab will release the dataset used in this work. I believe this will give us a rare opportunity to access a spontaneous dataset for free.
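As a concrete illustration of these timing features, here is a small sketch (my own, not the paper’s code) that measures the onset-to-peak and peak-to-offset durations from a per-frame smile-intensity signal; the synthetic signal, the frame rate, and the onset/offset threshold are all assumptions.

```python
# Illustrative sketch (not from the paper): measure how long a smile takes to
# rise (onset -> peak) and to decay (peak -> offset) from a per-frame
# smile-intensity signal. The signal, frame rate, and threshold are made up.
import numpy as np

fps = 30.0                                   # assumed video frame rate
t = np.arange(0, 4, 1 / fps)
# Synthetic intensity in [0, 1]: a fast rise and a slower decay around t = 1.2 s.
intensity = np.where(t < 1.2,
                     np.exp(-((t - 1.2) ** 2) / 0.2),
                     np.exp(-((t - 1.2) ** 2) / 1.0))

threshold = 0.1                              # above this counts as "smiling"
active = np.where(intensity > threshold)[0]
onset, offset = active[0], active[-1]
peak = onset + int(np.argmax(intensity[onset:offset + 1]))

rise_time = (peak - onset) / fps             # seconds from onset to peak
decay_time = (offset - peak) / fps           # seconds from peak back to offset
print(f"rise: {rise_time:.2f} s, decay: {decay_time:.2f} s")
```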

Interactions and Socialization

With Professor Rosalind Picard, the founder of affective computing

This conference gave me a wonderful opportunity to meet and talk with the leading researchers in affective computing. I spoke personally with Jeffrey Cohn, Rosalind Picard, Peter Robinson and Louis-Philippe Morency, and asked a few questions about current research trends in affective computing. In a conversation about the channels of emotion, Prof. Cohn mentioned that he is somewhat skeptical about how far it is possible to infer emotion reliably from physiological channels, for two reasons. First, physiological responses are relatively slower than the responses shown in facial expressions. Second, the sensors required to measure physiological signals can themselves alter the emotions to some extent.

Professor Rosalind Picard greatly appreciated the project concept of conveying emotion to blind people. She mentioned that it is possible to help blind people to a great extent in perceiving others’ emotions, and that the MIT Media Lab has done some preliminary investigations on these issues. She specifically pointed out that it would not be a good idea to convey raw facial expressions (like smiles, frowns, etc.) to people who are congenitally blind; rather, it is better to convey inferred information that makes more sense to them, for example whether other people are interested in what the blind person is saying. Prof. Picard also pointed out a difficulty her lab faced: it becomes very difficult to track and analyze another person’s face because of the inherent head movement of the wearer. She also welcomed any requests for help and collaboration from our side.

With Professor Peter Robinson

Dr. Louis-Philippe Morency showed extraordinary enthusiasm for our efforts in analyzing facial expressions and working with Active Appearance Models. He mentioned that he is going to release a more robust version of his Constrained Local Model (CLM) based face tracker, and that it will be released in the public domain. He also shared his views on how research data and tools should be disseminated for public use, and promised to accommodate our requests regarding this face tracker as much as possible.