As a media analyst, one of the problems I often see is that the industry seems to be fixated on simply doing what everyone else is doing and saying, even when the data tells them otherwise.
The latest example of this is Schibsted. In an article over at Nieman Lab (which contains many good ideas), they explain how static ads outperform rich media ads in terms of effectiveness:
The study revealed that the most effective of these formats was the static image. Static banner ads had the greatest effect on a reader's preference for a brand and intent to purchase the item advertised - and those are among the categories that matter most to advertisers who, at the end of the day, want to sell something.
Readers rarely engaged with rich media or the videos, which are fairly disruptive, and only 0.3 percent ever played the video ad to begin with. 'Video is good at building awareness, but this is a very, very, very low number if you want people to actually get your message,' said Engström. Rich media ads were a turnoff: The survey revealed a negative correlation to effectiveness of the campaign after just two exposures to the ad. Static image campaigns won out again here, their effectiveness increasing pretty significantly with exposure.
Okay, great! Now we've got some real data, based on what seems like a fairly comprehensive study. So the strategy for the future seems clear, right?
But then they go on to say this:
Advertisers and the agencies that help create the ads are listening to Schibsted's findings, said Engström, whose team works across all the company's brands. For mobile, he recommends specifically short, six- to eight-second videos - never 30-second videos, which resemble television spots.
What? What kind of silliness is this?
They have found that those ads correlate negatively with effectiveness, so why on earth would they recommend short videos as the format to design?
This makes no sense. And it's not just Schibsted doing this. Everyone is doing it, because someone has found that video ads produce the biggest numbers when you measure the wrong things (aka single engagement metrics).
Mind you, measuring the effectiveness of advertising is always tricky because you have to take several things into account. You have to look at awareness, how well the message is communicated, the effect on likeability, retention rates, and the outcome in terms of purchase intent. And you have to measure this not just for the initial view, but across exposures, and even across campaigns with more than one ad.
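To make that concrete, here is a toy sketch of scoring a campaign across several dimensions and exposures, rather than on a single engagement metric such as views. The metric names, weights, and every number below are my own hypothetical assumptions, purely for illustration; they are not data from the Schibsted study or any other source.

```python
# Toy model: score ad formats on multiple dimensions across exposures,
# instead of on a single engagement metric such as views or clicks.
# All weights and scores below are hypothetical, purely for illustration.

WEIGHTS = {
    "awareness": 0.15,
    "message": 0.25,
    "likeability": 0.15,
    "retention": 0.15,
    "purchase_intent": 0.30,
}

# Per-exposure scores (0-1) for two made-up formats, exposures 1 and 2.
FORMATS = {
    "static": [
        {"awareness": 0.5, "message": 0.7, "likeability": 0.6,
         "retention": 0.6, "purchase_intent": 0.6},
        {"awareness": 0.6, "message": 0.8, "likeability": 0.6,
         "retention": 0.7, "purchase_intent": 0.7},
    ],
    "video": [
        {"awareness": 0.8, "message": 0.4, "likeability": 0.6,
         "retention": 0.5, "purchase_intent": 0.4},
        {"awareness": 0.8, "message": 0.3, "likeability": 0.4,
         "retention": 0.4, "purchase_intent": 0.3},
    ],
}

def composite_score(exposures):
    """Average the weighted score over all exposures of a campaign."""
    per_exposure = [
        sum(WEIGHTS[metric] * scores[metric] for metric in WEIGHTS)
        for scores in exposures
    ]
    return sum(per_exposure) / len(per_exposure)

for name, exposures in FORMATS.items():
    print(name, round(composite_score(exposures), 3))
```

In this made-up example, the video format has the highest single number (awareness), yet the static format scores higher overall, which is the kind of difference a single engagement metric hides.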
Of course, the problem is mobile. How do you make a big ad on a mobile screen that doesn't feel like a take-over ad? The answer is to design mobile ads the same way as you design the rest of the content. Which is exactly what all the digital natives are doing.
Here is an example from Instagram (via Ana Milicevic):
Both of these are ads, one sponsored via Instagram itself, the other sponsored via an influencer. And, as you can see, both fit perfectly with the other content of the site. They are ads, but they don't feel like ads.
But the main point here is that publishers need to take a step back and look at the larger trends, and evaluate whether those trends prompt positive results, or whether they just add to the noise and are optimized for views to the detriment of the value they create.
The only place where I have seen video ads make a real difference is on platforms where video is a natural part of the content.
A video ad via an influencer on Vine, for instance, is great (if done right). But a video ad on a newspaper site, even a mobile newspaper site? Not really.
Sure, it may create more views or even clickthroughs, but it doesn't look promising in terms of the far more important metrics such as purchase intent, real brand awareness, likeability, and the all-important metric of real economic output.
Back in 2012, for instance, Google did a study looking at how DoubleClick performed using different ad formats, where they compared the effectiveness of each for both one and two views.
What they found was this:
Note: Brand awareness is whether the ad helped people recognize the brand, while online awareness is whether people remembered seeing the ad before.
As you can see, static image ads (GIF/JPEG) win when it comes to communicating the message and purchase intent, and are tied with video ads for likeability (on the first view). But after two or more views, it gets complicated.
Notice also that video formats help people remember the ad itself, whereas static formats help people remember the actual brand the ad is from. That's a big difference in terms of what result you are getting.
The problem here is that we have very little control over whether people actually see an ad more than twice, and under what circumstances. So, in practice, it may be better to design five different static ads that you then expose people to, rather than just design one video ad which you then hope people will see five times.
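One way to picture that trade-off: with the same number of impressions per user, a rotation of several static creatives gives each user fresh material on every impression, while a single video depends entirely on repeat views you cannot guarantee. A minimal sketch, assuming a hypothetical round-robin ad server (the creative names are made up):

```python
from itertools import cycle

def serve(creatives, impressions):
    """Round-robin rotation: each impression gets the next creative in line."""
    rotation = cycle(creatives)
    return [next(rotation) for _ in range(impressions)]

# Hypothetical campaign: five static creatives vs. one video,
# with the same five impressions per user.
static_plan = serve(["static_a", "static_b", "static_c",
                     "static_d", "static_e"], impressions=5)
video_plan = serve(["video_a"], impressions=5)

print(static_plan)  # five distinct creatives, one exposure each
print(video_plan)   # the same video shown five times
```

The static plan never shows the same creative twice within the cap, so it sidesteps the wear-out effect that the study found after just two exposures; the video plan hits that wall on its second impression.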
Also, as you can see in the full details of the Google study, there are variations between industries, which means that different publishers will see different outcomes depending on what their editorial focus is. A gardening site might see different outcomes than a tech site, which again will be different from a news site.
What Google didn't study was the impact of behavior. We know that people have different moods online. We have micro moments where we are just snacking on content because we are having a break or waiting for the bus to arrive. And we have macro moments where we are more determined in what we consume and why.
How do different ad formats impact that? Are people more willing to see a video ad if they are just browsing around for random things passing the time, or does the quickness of those moments lead to people favoring the far more efficient static ads?
If Schibsted's study is any guide, the answer seems to favor static ads for those moments. But I haven't yet seen any good studies that segment by people's behavior, mostly because publishers rarely know why people are coming to their sites in the first place.
The point being that we all need to be better at looking at the actual data, and not just the simple metrics or whatever tactics the industry is excited about at the moment (and usually for the wrong reasons). If you do a study that finds that static advertising works best for your publication, why on Earth would you recommend that people focus on video ads?
Are you doing that because it's what everyone else is saying?
Schibsted's Engström is right about the need for experiments though. As WAN-IFRA reports:
If the survey results increased his skepticism about rich media (the real loser), he believes the weaker impact of video campaigns is related to the fact that they are not appropriate to the device. 'Let's try more vertical videos, 8-second formats. And maybe that will change.'
Maybe it will. And this is true for all studies. The data that we see in studies will tell us how things are today. We need to experiment in order to figure out what will happen tomorrow.
But you don't recommend that brands do short video ads in general if the data tells you otherwise. You need to run the test first, see the result, and then see whether the assumption holds true.
Founder, media analyst, author, and publisher.
"Thomas Baekdal is one of Scandinavia's most sought-after experts in the digitization of media companies. He has made himself known for his analysis of how digitization has changed the way we consume media."
Swedish business magazine, Resumé