

By Thomas Baekdal - December 2021

How newsrooms should think about trust. And let's talk about social media studies

This is an archived version of a Baekdal Plus newsletter (it's free). It is sent out about once per week and features the latest articles as well as unique insights written specifically for the newsletter. If you want to get the next one, don't hesitate to add your email to the list.

Welcome back to the newsletter. Today is the second to last newsletter of 2021 (one more coming before Christmas), which is just crazy. Anyway, today, I have a number of exciting things to share with you.

A look at how newsrooms need to define trust in journalism

There is a new report out from the Reuters Institute about trust: how newsrooms are talking about it, and what they are planning to do about it.

It's a really great report (the Reuters Institute always does a great job), but there is a problem with how publishers talk about trust. Trust is not a thing. It's not a feature, and it's not something you do as part of your subscription campaign. Instead, trust is about something much deeper.

In my latest Plus article, I have put together a model for trust, showing how publishers should think about it, and how you can lose trust at every stage of that model.

Read it here: Let's fix the problem around trust in the media

How not to do social media studies

As a media analyst, I obviously welcome as many studies as we can possibly create about how people consume and engage with media, including, of course, how people use social media.

However, over the past several years, and increasingly in 2021, I have seen a number of studies that were done in a highly misleading way, and which have often been covered by the news.

The people who conducted these studies weren't trying to be misleading, but there is a fundamental flaw in the way they did them, and in the conclusions they came to because of it. This is bad because it not only misleads the public, it also creates problems that impact the social media channels and the press as a whole.

The studies I'm talking about are those focusing on 'filter-bubbles' or other forms of online harm.

Let me explain:

Almost everyone who has done a study about filter-bubbles makes the same mistake.

What they do is this:

Blumenthal says he and his staff created an account pretending to be a teen interested in eating disorders and within an hour "all of our recommendations promoted pro anorexia and eating disorders" notes it's the same experience they had 2 months ago when Antigone Davis testified.

Or this:

TTP researchers registered new Google accounts and used a clean browser to watch a collection of YouTube top news videos from either Fox News or MSNBC. They then watched the first 100 videos that YouTube recommended via different starting points on the platform.

They then found that Fox News was recommended more than MSNBC.

Or this:

TTP also found that YouTube's feedback loop can encourage extremism. After searching for info about US militia groups, researchers were recommended videos that teach tactical skills like operation security, how to use military equipment & homemade weapon building.

All of these sound really bad, and because of this, they have been covered extensively by the press.

So why is this a problem? Well, because you cannot do a study like this, and they didn't find what they think they found.

The mistake they all make is that they are specifically trying to create a filter-bubble. In other words, they are pushing the social channels into creating a filter-bubble around them.

Take the first example. Here, some politicians created a social media account pretending to be a teen, and then specifically started looking for posts about eating disorders. And, as they say, within an hour, the social channel started showing them other posts like it.

The key phrase here is "within an hour".

This is a curious way to phrase that. It implies that before this hour, Instagram was actually showing other things. But, instead of looking at those, they kept pushing Instagram into the filter-bubble that they wanted to create, until eventually (about an hour later), Instagram started to show them posts like that.

In other words, it's not Instagram that is creating the filter-bubble. They created that all by themselves. Instagram, on the other hand, spent the first part of that hour trying to get them to see other things, which they ignored.

You can't conduct a study like that. This is not a valid way to measure how people interact with things.

The best analogy I can give is to think about a supermarket. Imagine if you walk into a gigantic supermarket, but you close your eyes to everything there, walk straight over to the candy section, open your eyes, and proclaim:

Look, this store only has candy, and even if I look slightly to the left or right, all the other products I see are full of sugar too. This supermarket is bad because it doesn't promote healthy living. We need to regulate them.

You can't do a study like this.

It's the same with the other two examples. Let's look at the one about Fox News and MSNBC. Again, they started out putting themselves into a filter-bubble. YouTube didn't do that. They did. And then they found that they were recommended more videos from Fox News than from MSNBC.

Two problems here:

First of all, neither MSNBC nor Fox News is a big channel on YouTube (at least not compared to how big they are outside of YouTube). Sure, they have a few million subscribers overall, but when you look at their engagement metrics, you see that very few people actually watch their videos.

Look at this, both of these massive old media channels have ... only a few thousand views per video. That's nothing. In fact, on YouTube, this is embarrassing.

To give you a simple example, here is a screenshot from Rachel K Collier, a Welsh electronic music producer and performer. She is not a big media company, or even a big celebrity. In fact, her channel only has 125,000 subscribers ... but look at her video views.

She is outperforming both Fox News and MSNBC.

Think about how insane that is.
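To make the subscriber-vs-views point concrete, here is a quick sketch of an "engagement ratio" (typical views per video divided by subscribers). Note that apart from Rachel K Collier's subscriber count, every number below is a hypothetical placeholder, chosen only to match the rough magnitudes described above ("a few million subscribers", "a few thousand views per video"):

```python
# Rough engagement comparison. Only Rachel K Collier's subscriber count
# (125,000) comes from the article; every other number is an assumed
# placeholder in the ballpark the article describes.

channels = {
    # name: (subscribers, typical views per video)
    "Fox News": (7_000_000, 5_000),          # both numbers assumed
    "MSNBC": (4_000_000, 4_000),             # both numbers assumed
    "Rachel K Collier": (125_000, 150_000),  # views assumed
}

for name, (subs, views) in channels.items():
    ratio = views / subs
    print(f"{name}: {ratio:.4f} views per video, per subscriber")
```

Even with placeholder numbers, the shape of the result is the point: a channel with millions of subscribers but only thousands of views per video has an engagement ratio near zero, while a small channel whose videos outdraw its subscriber count is off the chart in comparison.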

But what this tells you is that people DO NOT go to YouTube to watch MSNBC or Fox News. That's not what people use YouTube for.

What people actually do is go to YouTube to watch Rachel, or Sailing Uma, Morph, Aquaholic, Beau Miles, CarWow, Foil Arms and Hog, JANGBRiCKS, or thousands of other channels.

In comparison to YouTube's total size, Fox News videos are not widely seen.


And this makes the whole study irrelevant. They decided to measure a part of YouTube that isn't representative at all of how people use YouTube. It's an invalid focus.

Secondly, there is a problem with their findings. They discovered that Fox News is being recommended more than MSNBC, and this narrative was even featured on CNN.

The problem with this narrative is that it's not YouTube that is doing this. This is how it is everywhere. For instance, if we look at primetime cable TV viewing, we see the same pattern.

So the fact that Fox News content is seen more than MSNBC is not because YouTube is skewing towards right wing content. Rather, it's just a representation of what the public is spending time on everywhere.

We can still argue that this is bad (and personally I think it is), but that's a completely different discussion. Because now you are saying YouTube should be skewed to prevent this, whereas before (in the study) YouTube was blamed for causing the skew.

That's two very different discussions!

And then we have the third example, where they found this:

TTP also found that YouTube's feedback loop can encourage extremism. After searching for info about US militia groups, researchers were recommended videos that teach tactical skills like operation security, how to use military equipment & homemade weapon building.

First of all, we have already talked about the problem of putting yourself into a filter-bubble, which again they are doing here. They were searching for this ... and then YouTube started showing them things related to that.

This is not YouTube causing them to be in this filter-bubble. They put themselves into it.

Of course, you can look at this and say: OMG why is this even on YouTube? They should take that down!! And, as a European, I agree. I think these videos are awful. But, what you have to remember is that, in the US, this is legal publishing.

Right? It's not just on YouTube you can find content like this. There are US magazines which publish the same things.

So, should the US also legislate to end these magazines? Again, personally, as a European, I do think magazines like these should be banned from society. But that's a very different discussion than "OMG, look what we found on YouTube".

But the algorithm...!!

Of course, this then links to the algorithm. In the study, they claim that it's the YouTube algorithm that is to blame for showing them this type of video, and for creating the filter-bubble. But that's not actually what they found.

Take a look at this screenshot again:

What the researchers claim is that YouTube's algorithm is promoting this type of content, causing people to watch it. But ... look at the view numbers. These videos have almost no views. A few of the review-style videos have slightly more views than the others (the same type of reviews you see in magazines), but the 'bad' content has very low numbers.

Look at the video at the top. It has 111 views, and was posted more than a year ago. That means it got less than one view per day. Literally nobody is watching this. In the US, only about 0.00003% of the population has watched it.
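The arithmetic behind those two claims is easy to check. A minimal back-of-envelope sketch, assuming the video is roughly a year old and a US population of about 331 million (the population figure is my assumption, not from the study):

```python
# Back-of-envelope check of the "less than one view per day" and
# "0.00003% of the population" claims. The US population figure
# (~331 million) is an assumption.

views = 111
days_since_posted = 365          # "posted more than a year ago"
us_population = 331_000_000

views_per_day = views / days_since_posted
share_of_population = views / us_population * 100  # as a percentage

print(f"{views_per_day:.2f} views per day")            # ~0.30
print(f"{share_of_population:.5f}% of the population") # ~0.00003%
```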

So, you can't claim YouTube's algorithm is 'pointing people towards this', because it clearly isn't. If YouTube's algorithm were truly doing this, these videos would have millions of views instead.

What happened instead was that these researchers searched for this, and kept narrowing that search until they got to a corner of YouTube that was so infrequently used that almost nobody had watched the videos they found.

That's not the algorithm.

But, then also look at the last video in that screenshot. Here you find a video from ... Vogue? Wait... what?

Why is there a video from Vogue here? They were not searching for this. What's going on?

Well, here you see the actual algorithm at play. YouTube's algorithm realized that this person was spending time on YouTube with content that had almost no engagement, so the algorithm found a video that was far more popular, and is now trying to get them to see that instead.

YouTube is saying: "Hey, we can see you have spent the past hour only looking for weird videos that very few people ever watch, would you like to see this far more popular video instead?"

So this study is useless. They didn't find a filter-bubble; they created one. And it wasn't the algorithm that put them there: their own data actually shows that YouTube's algorithm was trying to get them to watch something else instead. And the bad videos they found had so few views that it's clearly not something a large group of people are watching.


And it's the same with the other studies. None of them are finding the problems they claim to exist. On Instagram, if it takes you an hour of meticulously putting yourself into a filter-bubble, ignoring everything the algorithm shows you along the way ... then you can't blame the algorithm for that.

It's just ridiculous. These are invalid studies.

We do need more studies

As I said in the beginning, I do welcome more studies, and I would love to see a far more in-depth analysis of how the algorithm influences our media consumption. But you have to do it with a realistic baseline.

Take the example of US Senator Blumenthal above. They created an Instagram account 'pretending to be a teen' ... well, okay. That's good. But then you also have to 'act like a teen'.

In other words, you need to start following accounts that are popular with teens. Like, you follow popular fashion Instagrammers, Teen Vogue, a few of your friends, maybe some family members (or maybe not). You might also follow people like @vanessa_vash, @TheAmandaGorman and others that inspire you.

And then you start using that account. Not just 'within an hour', but over several weeks to a month. You engage with things, you share things, you do all the normal things teens do.

... and then, once you have done all that, you look at the algorithm. What is it actually doing now? What is it recommending? Which channels is it promoting more than others, and what is it deemphasizing?

And if you then find that the algorithm is doing something bad, then we should talk about that. We should hold the social channels to account, and, as journalists, explore the consequences.

But we should also explore the other side of this. Over the past year, I have seen many politicians claiming that Instagram should be completely closed down for teens. I don't like this narrative, because being social online is really important for the development of teens.

We live in a connected world.

Don't let the algorithms control you

While it may sound like I just defended the algorithms, I don't really like the social algorithms. I don't see the problem that the studies above think they found, but the problem with almost all of these algorithms is that they are optimized for the wrong things. Social channels want wide engagement, which leads to people actually feeling less connected and more shallow.

We see this all the time. If you create a page on Facebook, the algorithm actively tries to get people to watch other things instead. And within a short time, only a tiny fraction of the people who chose to follow you even get to see your posts.

That's not good, and it's one of the reasons I have stopped using Facebook. I got tired of following things on Facebook and then never seeing them.

I recently came across a podcast featuring David Hewlett (the actor, famous from Stargate Atlantis), where they were talking about algorithms. He shared the advice he gave to his son, which was this (I'm heavily summarizing and paraphrasing, since the actual discussion was five minutes long):

Don't let the flicking of channels (or the algorithms) tell you what to watch. Decide what you are interested in and want to know more about, and seek that out directly.

You can watch the full thing here (the algorithm part starts at 13:30), but I think this is excellent advice.

This is, to me, the most important thing we need to teach the public. The problem with social channels is that, if you are not specific about how you use them, they turn into low-intent, micro-moment sinkholes.

You end up 'flicking the channels', endlessly watching things that really don't have any value to you. You are just wasting time.

(We use the term 'flicking channels' because it's the same thing older generations do when they spend 4.5 hours in front of the TV every night.)

So, instead, pick your channels, and be specific about how you watch them. And when the algorithm tries to show you something 'stupid but fun' ... ignore it.

If you do this, things start to work for you. I use YouTube more than any other channel, but I use it in a very specific way. And because of this, 80% of the recommendations are useful and valuable to me. But you have to work for it.

I don't have kids, but if I did, this would be what I would teach them. Don't let the algorithm define what you see. Be specific, and define what you want from the algorithm instead.

And BTW: This doesn't just apply to YouTube. It also applies to every other form of media ... including newspapers. Don't just let the endless stream of negative political news flow over you 24/7. Take charge and create a more specific form of news consumption for yourself.

Want to know more?

Don't forget to check out the paywall report in my new 'known to work' series, where we explore strategies that we have clear evidence for.

And also, remember the article about trust I mentioned above: Let's fix the problem around trust in the media

Support this focus

Also, remember that while this newsletter is free for anyone to read, it's paid for by my subscribers to Baekdal Plus. So if you want to support this type of analysis and advice, subscribe to Baekdal Plus, which will also give you access to all my Plus reports (more than 300), and all the new ones (about 25 reports per year).




Thomas Baekdal

Founder, media analyst, author, and publisher. Follow on Twitter

"Thomas Baekdal is one of Scandinavia's most sought-after experts in the digitization of media companies. He has made ​​himself known for his analysis of how digitization has changed the way we consume media."
Swedish business magazine, Resumé

