Welcome back to the Baekdal/Basic newsletter. In today's newsletter, we are going to talk about misinformation. I have three stories that relate to this.
In 2015, our world changed. We came from an internet where misinformation appeared only occasionally, mostly on obscure sites that few people ever used. But then, as the 2016 US presidential election approached, misinformation suddenly went mainstream, and, as the press, we started to pay much more attention to it.
This was both good and bad. It was good because there was indeed a problem, and explaining that to the public potentially helped with public awareness. But it was also bad because all our press focus had the nasty side effect of propelling misinformation to far more people. So, something which originally existed only for a very narrow and inconsequential audience suddenly became a mass-market thing. In other words, most people did not hear about the misinformation via social sites or the internet as a whole. They heard about it from us.
With this also came the problem that many people in journalism have a very distorted idea of 'impact'. We would see examples of journalists who would find some misinformation on YouTube, Twitter, or Facebook and report about it as if it had single-handedly skewed the entire presidential election.
As a media analyst, this was incredibly frustrating to see, because when we looked at the data, it was painfully obvious that those videos, tweets, or posts did nothing of the sort. Most of the examples in the press were so pointless that if a brand had done the same for the sake of selling more products, it would have been a monumental failure that would not have led to any extra sales.
But what is really frustrating is that, even though we have pointed out this problem for almost a decade now, it's still happening.
I want to give you a practical example:
One of the largest news sites in Denmark recently ran a front-page investigative feature revealing that Russian Intelligence was secretly running Twitter accounts with the aim of dividing the public.
Here is a screenshot which I have run through Google Translate, for your convenience:
In the article, the journalist included several of the offending tweets, and it all sounds very scary.
Okay... so, as a media analyst, I looked into this. I looked up this specific Twitter account, I found the specific tweets mentioned in the article ... and ... hmmm.... 106 views since February 10, 2023:
I'm sorry... what? 106 views in three months, knowing also that a view on Twitter isn't even necessarily a person. What kind of nonsense is this?
To put this into perspective, here is a tweet that I posted two hours before I wrote this newsletter. It has 408 views.
So, are you telling me that a tweet with only a quarter of the views of my random garden tweet is apparently able to divide the Nordics? Well, if that is true, then my tweet must have convinced all of Denmark to also replant their gardens.
Here is another example. Here is a tweet from the Associated Press, which in just the first 10 minutes after it was posted, gained 70,400 views ... that's 664 times more views than the one posted by Russian Intelligence.
So, if a tweet with just 106 views is able to divide the Nordics, imagine how much a tweet with 664 times as many views would do. That one Associated Press tweet must have put everyone into a complete state of depression.
And here is another example:
Marques Brownlee, a popular tech YouTuber, posted this meme, which in four hours gained 420,700 views ... almost four thousand times the amount of views.
You see the problem here? 106 views is nothing... as in literally nothing. That article about how Russian Intelligence is trying to divide the Nordics found nothing. That Twitter account they discovered is so tiny and so ineffective that it has zero impact on anyone. What we have actually found is a misinformation campaign that is such a gigantic failure that it hasn't even managed to get as many views as some random dude planting some hostas (that's a plant) in his garden.
But what I find astonishing is that, in the press, this is still happening. It's been almost a decade, and we see this problem again and again, and yet... It continues. The example I used here was featured on one of the largest Danish news sites at the very top of the front page as the most prominent story of that day ... and nothing in it is accurate. We are making it sound like Russia is somehow able to do all kinds of things, when the data clearly shows that they are utterly failing to use social media to influence the public.
There is no story here.
The fundamental problem, though, is that there is a lack of understanding of how conversions work. On the basic level, a misinformation campaign is no different from a brand trying to get people to buy their products. When you post a tweet, people go through a funnel.
So what do we know from brands? Well, there is no exact science here because it depends on each individual situation, but overall, we know that about 0.7% go from step 1 to step 3 ... and, if the tweet is relevant, about 2% go from step 3 to step 4.
Again, we don't have precise numbers for any of this, but if you were a brand, that would be the conversion that you might get.
In other words, if we start out with 10,000 views, 70 people reach step 3, and 1.4 people reach step 4. Right?!?!
This is why brands buy programmatic advertising views in the millions. You need a shitload of views to get any useful amount of conversions, and very often even that is not actually worth the money spent.
So imagine how little an impact a tweet with 106 views has. It's literally zero people. I mean, let's do the math: 106 * 0.007 * 0.02 = 0.015 people.
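The funnel math above can be written out as a small sketch. The 0.7% and 2% rates are the ballpark brand figures from the text, not precise measurements, and the step labels are only the step numbers the text uses:

```python
def funnel_reach(views, step1_to_3=0.007, step3_to_4=0.02):
    """Rough conversion funnel: views (step 1) -> step 3 -> step 4.

    The default rates are the ballpark figures from the text:
    ~0.7% make it from step 1 to step 3, and ~2% of those make
    it from step 3 to step 4.
    """
    step3 = views * step1_to_3
    step4 = step3 * step3_to_4
    return step3, step4

# The brand example: 10,000 views -> 70 people -> 1.4 people.
s3, s4 = funnel_reach(10_000)
print(round(s3, 1), round(s4, 1))  # 70.0 1.4

# The tweet from the article: 106 views -> effectively nobody.
s3, s4 = funnel_reach(106)
print(round(s3, 3), round(s4, 3))  # 0.742 0.015
```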
Please, newspapers. Stop doing this. Your reporting is more misleading than the Twitter account that you found.
Of course, I'm not saying that misinformation isn't a problem. We all know it's a problem. We can see it's a problem every day, and we can even measure it. I have talked about it too many times before.
One simple example is to look at crime. When we compare what people think is happening (their perception) with what is actually happening, we see that the majority of the public thinks crime has gotten worse, and that this perception has persisted for decades.
In reality, there has been a rather significant drop in crime over the same period.
So, we know that the public is misinformed about many things. How much, and how many people are misinformed, depends on the topic. Climate change is an example where a lot of people used to be misinformed, but today most people are actually starting to be well informed. So that's good.
But how does misinformation actually happen? Well, there are three fundamental ways.
The first way is from influential individuals, celebrities, or government leaders. If you look at many of the studies and the work that researchers and media analysts have done, you often come across the phrase: "Misinformation comes from the top".
In other words, if someone really popular starts saying some nonsense, many of that person's followers will believe in it. I mean, look at Fox News as one of the worst institutionalized examples of this over the past decade. Same thing if it is a government leader who tells you a lie, especially if that leader exists in a country that is heavily divided politically. Or worse, if, say, the national health authorities go out and tell you that you don't need to wear a mask in the middle of a pandemic.
All of these, and many other similar examples, represent the first of the three primary sources of misinformation. It's not some obscure Twitter account that nobody has ever heard of, with tweets getting only 106 views. Instead, it comes from the top.
The second way misinformation spreads is by flooding people with a negative view of a topic. Why is it, for instance, that over the past two decades, people have persistently believed that crime is getting worse, when reality shows the opposite?
Well, we know the answer to that... because of things like this.
This is a screenshot from the US section of CNN, and I highlighted every story that is about crime. And it's like this every single day, across every single newspaper. So why shouldn't the public think that crime is getting worse when we are constantly bombarding them with it?
It's the same with immigration, and so many other topics.
And the final way misinformation forms is through confusion. This is how propaganda works. If you want people to lose trust in something, you don't need to tell them a lie; instead, you flood them with so many opposing points of view that they lose the ability to tell what is true or false.
We saw this exact effect very clearly during COVID. The more we confused the public about what was true, and the more the experts seemed to disagree, the more misinformation we saw all around us.
And once people get to this point, things go bad really quickly. Because if they are so confused that they can't trust the authorities, the experts, or the press ... that's when we see people coming up with their own ideas. Suddenly, misinformation starts to grow from within, and then people become very easy to manipulate.
These three elements are, by far, the primary reasons why so many people are misinformed. It's not some random tweet with 106 views. That's irrelevant. This is where we need to focus!
The final thing I want to mention comes from Katie Mack. She is a cosmologist, and she ran a very interesting poll the other day.
She asked "If something you're sharing online turns out to be fake/false, do you appreciate people who let you know that?" ... and the result, as you can see below, was very clear.
Now, normally, I wouldn't pay too much attention to polls done by individual people on Twitter. It's not exactly a balanced audience, and the margin of error is likely massive.
But two things made this one stand out. First, the number of participants: 9,232 people, far more than most political polls reach. And second, how clear-cut the outcome was. Even with a potential bias, you would not expect an outcome this lopsided.
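For scale, a quick sketch of the pure sampling margin of error at this sample size. Note this only captures random sampling noise, not the selection bias of a Twitter audience, which is the real caveat here:

```python
import math

# 95% margin of error for a simple random sample of n respondents,
# using the worst-case proportion p = 0.5.
n = 9_232
z = 1.96  # z-score for 95% confidence
p = 0.5
moe = z * math.sqrt(p * (1 - p) / n)

print(f"{moe:.1%}")  # roughly 1.0%
```

In other words, with 9,232 respondents, random noise alone cannot explain a lopsided result; only the non-representative audience could, and even that is unlikely to flip an outcome this clear.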
And the result is really important, because people are very clearly saying that if you share something online that is not true, you need to correct it.
But this is not just about ordinary people posting things on Twitter; it also applies to the rest of us. If you are a newspaper and you report something which later turns out not to be true, this poll very clearly illustrates that the public wants you to correct it. Not just move on to the next story, and not just interview someone with an opposing view.
Tell people that it was not true, and correct it in as prominent a way as the original story.
I point this out because this is a problem I see all the time in the press. We are very good at pointing out when others make a mistake, but we are also notoriously bad at correcting our own reporting.
Here is a screenshot from the same large news site that I mentioned in the beginning. At the very bottom of the page, they have a section called "errors and facts", where they tell people that they made a mistake. This is not "correcting your mistakes", this is hiding them.
But look at the poll result. The public wants us to correct mistakes, to make it clear what is true and what is not. Listen to them. If we want to regain trust in the news, and minimize the problem of misinformation, we have to tell people, in a clear, visible, and prominent way, when something we reported turns out not to be true.