Emotion Recognition

Facial coding: a natural, frictionless way to measure attention and emotional response

Proprietary technology built on the Facial Action Coding System

The Facial Action Coding System (FACS) was developed by Paul Ekman and Wallace Friesen to categorise human facial movements.

It works as an automated computer system that categorises human emotions according to movements of the face. Facial expressions are broken down over time into the individual Action Units that make up a specific expression. The collected audience emotions are then aggregated and displayed in the dashboard as seven emotion metrics and three proprietary metrics for measuring the emotional experience.

Happy
Happiness is synonymous with a smile. It consists of the following action units: AU 6+12, which indicate the cheeks raising and the corners of the mouth pulling up respectively.
Surprised
Synonymous with a 'shocked' expression, surprise consists of the following action units: AU 1+2+5B+26, which is a combination of raised eyebrows, eyes wide open (raised eyelids) and the jaw dropping to reveal an open mouth.
Confused
Confusion is synonymous with a lowering of the brow. It consists of the following action units: AU 4+5+7+23, which is a combination of lowering the brow, raising and narrowing of the eyelids, and a tightening of the lips.
Sadness
Sadness or empathy is synonymous with the classic downturned mouth. It consists of action units: AU 1+4+15, which represent brows that are lowered but raised at the inner ends, and depressed corners of the mouth.
Disgust
An expression of distaste, disgust consists of action units: AU 9+15+16, which represent nose wrinkling, downturned corners of the mouth and a depressed lower lip.
Scared
Synonymous with a fearful expression, scared is represented by the action units: AU 1+2+4+5+20+26, which indicate the eyebrows raising and drawing together, the upper eyelids raising, the lips stretching and the jaw dropping.
Contempt
Contempt is synonymous with a tightened and raised lip corner on one side of the face. It is a feeling of dislike and superiority over another.
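
To make the mapping concrete, the AU combinations above can be encoded as a simple lookup. This is a minimal sketch, not our production classifier: intensity codes such as the 'B' in 5B are dropped, and Contempt is omitted because its action is unilateral.

```python
# AU combinations from the list above, encoded as a lookup table.
EMOTION_AUS = {
    "happy":     {6, 12},
    "surprised": {1, 2, 5, 26},
    "confused":  {4, 5, 7, 23},
    "sad":       {1, 4, 15},
    "disgust":   {9, 15, 16},
    "scared":    {1, 2, 4, 5, 20, 26},
}

def classify(active_aus: set[int]) -> str | None:
    """Return the most specific emotion whose full AU pattern is active."""
    # Check the largest patterns first so e.g. Scared (whose AUs are a
    # superset of Surprised's) wins over Surprised.
    for emotion, aus in sorted(EMOTION_AUS.items(), key=lambda kv: -len(kv[1])):
        if aus <= active_aus:  # all required AUs are present
            return emotion
    return None

print(classify({6, 12}))  # -> "happy"
```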

Going Beyond the Basic Emotions

Our 3 Proprietary Metrics

In addition to our emotion metrics, we also track a variety of proprietary metrics. These have been derived through our own research and can be used in conjunction with the basic emotion classifiers to gain deeper insights into the content tested.

  1. Engagement: When a participant has an expressive reaction to a stimulus, they are said to be ‘emotionally engaged’. It represents the percentage of the entire audience who showed any reaction – either at a particular second (second-by-second, as seen on the dashboard charts) or at any moment during the test (topline figures). This includes, but is not limited to, displaying any of the seven basic emotions.
  2. Negativity: The percentage of participants showing an emotion classified as negative, either at a particular second (second-by-second, as seen on the dashboard charts) or at any moment during the test (topline figures). This incorporates the most frequently occurring negative emotions, such as Disgust, Confusion or Contempt.
  3. Valence: A proprietary metric that shows how positive or negative the overall reactions of the audience are. It is, essentially, positive emotions minus negative emotions, and helps elucidate the emotional “tenor” of the viewing experience, showing how emotionally divisive the content is (see the sketch below).
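
To illustrate how the three metrics relate, here is a minimal sketch that computes topline figures from per-second detections. It is a simplification: the positive/negative grouping shown is an assumption, and the production pipeline is more involved.

```python
import numpy as np

# Assumed grouping for illustration; the text names Disgust, Confusion
# and Contempt as frequently occurring negative emotions.
POSITIVE = {"happy", "surprised"}
NEGATIVE = {"disgust", "confused", "contempt", "sad", "scared"}

def topline_metrics(detections: dict[str, np.ndarray]) -> dict[str, float]:
    """detections maps each emotion to a boolean array of shape
    (participants, seconds) - True where that emotion was detected.
    Assumes all seven emotions are present as keys."""
    reacted = np.any(list(detections.values()), axis=0)  # any emotion, any kind
    negative = np.any([detections[e] for e in NEGATIVE], axis=0)
    positive = np.any([detections[e] for e in POSITIVE], axis=0)

    # Topline figures: a participant counts if they reacted at ANY moment.
    negativity = negative.any(axis=1).mean() * 100
    positivity = positive.any(axis=1).mean() * 100
    return {
        "engagement": reacted.any(axis=1).mean() * 100,
        "negativity": negativity,
        "valence": positivity - negativity,  # positive minus negative
    }
```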

Triple Refinement

Both people and machines recognise emotions in the same way: separating the background from the foreground, focusing on the face, and detecting the shape of its expression.

Face Detection

We’ve developed our own face detector that is more accurate and reliable than the Viola-Jones detector, generally considered the industry standard.
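
For context, the Viola-Jones detector we benchmark against ships with OpenCV, so a baseline is easy to reproduce. A minimal sketch (the image path is a placeholder):

```python
import cv2

# OpenCV's stock Haar cascade is an implementation of the
# Viola-Jones detector mentioned above.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("respondent_frame.jpg")      # placeholder input
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detector expects grayscale

# Returns (x, y, w, h) boxes for candidate face regions.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    print(f"face at ({x}, {y}), size {w}x{h}")
```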


Feature Detection

Once the region of the face has been detected, we need to be able to identify the landmark points of the face: the nose, the eyes, the mouth, the eyebrows.
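
As an open-source illustration of the same step (not our proprietary detector), dlib's widely used 68-point shape predictor locates exactly these regions; the model file is the standard pre-trained file dlib distributes, and the image path is a placeholder:

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("respondent_frame.jpg")  # placeholder input
for face_rect in detector(img):
    shape = predictor(img, face_rect)
    # In the 68-point scheme: points 17-26 cover the eyebrows, 27-35 the
    # nose, 36-47 the eyes, 48-67 the mouth - the regions named above.
    points = [(p.x, p.y) for p in shape.parts()]
    print(f"{len(points)} landmark points detected")
```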


Expression Level

Using all the frames in a respondent’s video, we create a person-dependent baseline, or mean face shape. Measuring their expressions as deviations from this ‘neutral’ face accounts for people who naturally look more positive or negative, correcting any such bias.
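
The idea can be sketched in a few lines, assuming the landmarks have already been aligned for scale and head pose:

```python
import numpy as np

def expression_deviation(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: array of shape (frames, points, 2) holding aligned
    landmark coordinates for one respondent across their whole video.
    Returns each frame's mean displacement from that person's baseline."""
    baseline = landmarks.mean(axis=0)  # person-dependent 'neutral' shape
    # Per-frame deviation: average landmark distance from the baseline,
    # so a naturally downturned mouth contributes zero at rest.
    return np.linalg.norm(landmarks - baseline, axis=2).mean(axis=1)
```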

How We Compare to Humans

To review the accuracy of our metrics, we plot their aggregate output against ground-truth data – the majority vote of five human annotators who label, frame by frame, the emotion they see.
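
A minimal sketch of the majority-vote rule for a single frame:

```python
from collections import Counter

def majority_vote(labels: list[str]) -> str | None:
    """Ground truth for one frame: the emotion that more than half of
    the annotators agree on, or None when there is no majority."""
    emotion, count = Counter(labels).most_common(1)[0]
    return emotion if count > len(labels) / 2 else None

# Three of five annotators see 'happy' in this frame.
print(majority_vote(["happy", "happy", "neutral", "happy", "surprised"]))
```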

Tech That’s at Home in the Wild

Bad lighting, heavy shadows, thick facial hair and glasses typically make emotion detection more challenging. However, we train on ‘in the wild’ datasets, using machine learning to teach our algorithms to cope with such obstacles.
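
One common way to build this kind of robustness – shown here as a generic sketch with torchvision, not our actual training code – is to augment training images so the model sees poor lighting and partial occlusion during training:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.5),  # bad/uneven lighting
    transforms.ToTensor(),
    # Randomly blank out patches to mimic occlusions such as
    # glasses, hands or heavy shadows.
    transforms.RandomErasing(p=0.3, scale=(0.02, 0.15)),
])
```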

Get more information on how we collect respondent data from our tech white paper.

We took it up a notch


Complete Tracking

Face detection • Facial landmarks • 3D headpose • Basic emotions • Expressivity • Valence • Attention


Fully Optimised

Non-exaggerated natural reactions • Complete in-the-wild environment • Video watching in mobile and desktop web browsers • Precision over speed


Value Added

Emotional events • Emotion duration & volume • Emotions by video segment • Emotions by audience segment • Interest score • Attention volume & quality • Memory & emotional hook

Reporting metrics that matter

Realeyes Score

The Realeyes Score combines viewers’ emotional engagement and attention levels to help maximise the ROI of media campaigns.


EmotionAll® Score

A snapshot of your performance per asset, enabling you to predict social media activity and inform media distribution decisions.


Attention Score

See how well your content grabs and maintains audience attention to know whether they’re gripped or easily distracted.



Here’s the science bit

Our white paper covers the science behind our facial coding technology and how we’ve taught machines to recognise emotions, just like humans.