We live in the age of distraction, and attention is the gatekeeper to an ad’s success. Without audience attention, everything else is secondary.
Realeyes’ AI-powered platform is the most sophisticated on the market, using webcams to interpret an audience’s attention the same way a human would.
Audiences opt-in via their PC or mobile devices, enabling the use of their cameras during the test to classify their behavioural cues such as eye movements, blinking, yawning and distracted head movements. Using our ground-breaking technology, machines have learned the same things that the human brain instinctively processes to decide whether someone is paying attention or not.
Realeyes’ attention metric tracks more behavioural cues than any other solution on the market.
Our attention solution can test multiple creatives at once on a global scale at machine speed. This scale and speed simply isn’t possible with other solutions that rely on monitoring participants’ brain activity or biometrics.
Our attention metric is just 4% shy of human performance at telling whether someone is paying attention. Our machines are continuously learning to close this gap and, eventually, to exceed it.
Unlike a lot of solutions on the market, which rely heavily on eye tracking, our solution measures participants’ attention the same way a human would: using a set of subtle behavioural cues that the human brain instinctively processes to make up its mind about whether someone is paying attention.
Get two measures of attention to gauge the quality and volume of attention your content will receive:
Attention Volume quantifies the extent to which the audience paid attention to the video content; it shows the average volume of attention respondents paid throughout the viewing experience.
The more attention a video managed to grab from its audience, the higher this score will be.
Attention Quality, on the other hand, indicates how long the audience was able to maintain continuous attention, expressed as a proportion of the entire viewing length.
This differs from Attention Volume, since the Attention Quality score depends on how the audience’s attention was distributed across the viewing (quality versus quantity). Attention Quality decreases when respondents have short attention spans and get distracted regularly, and increases when the audience stays continuously attentive.
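To illustrate how the two scores can diverge, here is a minimal sketch under assumed definitions (not Realeyes’ actual formulas): Attention Volume as the mean of per-frame attentive/inattentive labels, and Attention Quality as the longest uninterrupted attentive stretch relative to the full viewing.

```python
def attention_volume(frames):
    """Share of frames in which the viewer was attentive (1) vs not (0)."""
    return sum(frames) / len(frames)

def attention_quality(frames):
    """Longest unbroken run of attentive frames, as a proportion of the viewing."""
    longest = current = 0
    for f in frames:
        current = current + 1 if f else 0
        longest = max(longest, current)
    return longest / len(frames)

# Two hypothetical viewers with the same volume but different quality:
steady = [1, 1, 1, 1, 0, 0, 0, 0]   # one sustained block of attention
choppy = [1, 0, 1, 0, 1, 0, 1, 0]   # regularly distracted

print(attention_volume(steady), attention_volume(choppy))    # 0.5 0.5
print(attention_quality(steady), attention_quality(choppy))  # 0.5 0.125
```

Both viewers watched attentively for half the frames, so their Attention Volume matches; only the steadily attentive viewer scores well on Attention Quality.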
Currently, our human annotators agree with our attention classifier 86% of the time – only 4% away from the average human level of precision (humans agree with our ground-truth data 90% of the time).
We’re very close to being able to teach computers to measure attention as well as, if not better than, humans.
The harmonic mean of the model’s sensitivity (how often frames with attention are correctly picked by the classifier) and its precision (how often the frames picked as ‘attentive’ by our classifier are considered ‘attentive’ by annotators too). This score is very useful in understanding how well our classifier identifies attention levels.
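This harmonic mean of sensitivity (recall) and precision is the standard F1 score. A minimal sketch from confusion-matrix counts (the numbers below are made up for illustration, not Realeyes results):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of sensitivity (recall) and precision."""
    sensitivity = tp / (tp + fn)  # attentive frames the classifier caught
    precision = tp / (tp + fp)    # 'attentive' calls that annotators agreed with
    return 2 * sensitivity * precision / (sensitivity + precision)

# Hypothetical counts: 80 true positives, 10 false positives, 20 false negatives
print(round(f1_score(80, 10, 20), 3))  # 0.842
```

Because it is a harmonic mean, the F1 score is dragged down by whichever of the two components is weaker, so a classifier can’t score well by excelling at only one of them.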
We favour the Matthews Correlation Coefficient (MCC) to assess the performance of our classifiers. This is because this metric accounts for the relative proportion of all possible outcomes in predictions (true positives, true negatives, false positives and false negatives). If a classifier is great at predicting an outcome that happens often, but poor at predicting the rare event, the MCC will reflect that much more than other measures of accuracy.
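A minimal sketch of the MCC computed from the four confusion-matrix counts (the counts are illustrative, not Realeyes data), showing how it penalises failure on the rare class even when plain accuracy looks high:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient: +1 perfect, 0 chance level, -1 inverse."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical classifier: strong on the common 'attentive' class,
# weak on the rare 'inattentive' class.
tp, tn, fp, fn = 90, 2, 8, 0
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy, round(mcc(tp, tn, fp, fn), 3))  # 0.92 0.429
```

Accuracy alone (0.92) hides that the classifier caught only 2 of the 10 rare cases; the much lower MCC exposes it.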
This metric computes how often the frames picked as ‘attentive’ by our classifier are considered ‘attentive’ by annotators too. The higher the precision, the more certain we are that frames seen as ‘attentive’ by the classifier are in agreement with human annotators’ judgement.
Our white paper covers the science behind our Attention Metric and how we’ve taught machines to recognise attention, just as humans do.