By Thomas Baekdal - May 2019

The problem with news-avoidance, what about the Guardian model, and more

This is an archived version of a Baekdal Plus newsletter (it's free). It is sent out about once per week and features the latest articles as well as unique insights written specifically for the newsletter. If you want to get the next one, don't hesitate to add your email to the list.

In this edition:

- Podcast: The trends around news fatigue and avoidance
- The Guardian breaks even
- Can't we just add humans?

Podcast: The trends around news fatigue and avoidance

Over the past several years, we have all seen the growing trend around news fatigue and avoidance, and on a more personal level, I have several friends who have become complete news avoiders.

As a media analyst, this worries me, because it represents an existential threat to our role in society. And more than that, it's a trend that we can't just ignore until it really becomes a problem.

To learn more about this, I decided to do something crazy: I decided to try it myself and not read any news. Initially it was only supposed to last a week, but it ended up being a full month.

In my latest podcast (which you can also read as a normal article), I talk about this experience.


The Guardian breaks even

As you have probably heard already, the Guardian has finally been able to turn things around, which is just wonderful. As they said:

Guardian News & Media recorded an £800,000 operating profit for the 2018-19 financial year - compared with a £57m loss three years previously - ensuring the business is operating on a sustainable basis following the culmination of a turnaround programme put in place following years of substantial losses.

The company said it had 655,000 regular monthly supporters across both print and digital, with a further 300,000 people making one-off contributions in the last year alone.

And several people have asked me to comment on this as a media analyst.

I always find this to be difficult, and it's the same when people ask me to comment on how the New York Times is succeeding.

There are three stories here.

The first story is the success itself: how the Guardian (and the NYT) have been able to turn things around, and how remarkable that is. As a media analyst, I'm absolutely thrilled to hear this, and I think they have done an amazing job.

I only have admiration for what they have been able to do. It's just great!

However, the second story begins when you ask me whether other publishers could copy this success for their own publications ... and that is far less certain.

The Guardian created a model around membership, where only about 2.5% of their audience is paying on a regular basis, with another 1.1% giving them a one-off contribution.

That's still a very low conversion rate. It's the same story we hear with the New York Times. It's amazing how many subscribers they now have, but compared to the whole, it's still a very low conversion rate.
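As a quick sanity check, an implied audience size follows directly from those percentages (a back-of-the-envelope sketch; the total audience figure here is derived, not reported by the Guardian):

```python
# Rough conversion-rate arithmetic from the figures quoted above.
# The implied total audience is derived from the 2.5% rate, not stated anywhere.
regular_supporters = 655_000   # regular monthly supporters (print + digital)
one_off = 300_000              # one-off contributions in the last year

regular_rate = 0.025           # ~2.5% of the audience paying regularly
implied_audience = regular_supporters / regular_rate

print(f"Implied audience: ~{implied_audience / 1e6:.0f} million")   # ~26 million
print(f"One-off share: {one_off / implied_audience:.1%}")           # ~1.1%
```

The two derived numbers are consistent with the percentages quoted: roughly 26 million people in the audience, of whom only a small sliver ever pays anything.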

So, this model has worked for these newspapers because of how big they are, but if you were a smaller newspaper, would you still be able to do this?

Maybe you could, but the financial situation would not be the same because of the difference in scale. For instance, some of the Guardian's biggest successes are some of their big investigative stories. Stories that required a substantial amount of resources over a long period of time. But if you are a local newspaper and only 3% of your audience pays, you might be able to make it work financially, but you wouldn't be able to dedicate the resources in the same way.

So, I love what the Guardian has done, but I'm not sure it's reproducible in the same way.

The third story is about the trends. One of the things that troubles me is that the news industry still hasn't really changed. True, many newspapers have changed their financial reality. They have cut costs to become more nimble and optimized, and they have started asking people to pay for news, which is about time.

But I see no actual change in the model of news overall. Generally, all the traditional newspapers are still defining who they are and what they do exactly the same way they have for the past 150 years.

This worries me, because the trends are still changing. We see a massive difference in how people consume content online, and no, I'm not talking social media here. Social media generally is a distraction. I'm talking about how we (in a connected world) don't need news the same way we used to (in the disconnected world). But the way newspapers report things is still fundamentally based on that old world.

And the problem I see is that now that the New York Times and the Guardian are succeeding, other newspapers look at this and think: "Hey, their model is working, and they are doing journalism like we have always done, so we don't have to change anything either."

But you do. If you are a smaller newspaper, or even more importantly, a local newspaper, your problem isn't that you don't have a membership model like the Guardian, your problem is that the random package of news you create isn't relevant for the time that we now live in.

So, again, I love what the Guardian has been able to do, but I shudder when I hear other newspapers talk about them as the solution to their problems as well.

My advice hasn't changed. I think it's great that this model works for the Guardian, but I don't think it can be directly copied by other newspapers. You still need to define your own model that works for you.


Can't we just add humans?

I sometimes tweet about how difficult it really is for tech companies to filter out bad content or bad people from their services, and one of the most frequent comments I get is "yes, this is why we need human moderation".

The answer to that is no. We can't solve the problem with more humans. Humans today make far more mistakes than the algorithms do. If you were to replace the algorithms with humans, things would not get better; they would get worse. And that's without even discussing the obvious mental health problems that come from subjecting humans to an endless stream of bad content (which you don't want to do).

But let me explain why:

First of all, I see a lot of journalists writing about technology as if it were magic, and as if the only reason it isn't perfect is that Google or Facebook are holding it back.

Just look at how we reported on the terror videos on Facebook after Christchurch.

The reality, however, is quite the opposite. The tech companies have made a lot of advances in recent years, largely due to the use of machine learning, and today our level of technology is able to identify bad videos in 70-90% of the cases.

But that is only when the algorithms know specifically what to look for. Like when someone reports a video to Facebook, and they then 'fingerprint' it for the algorithms to look for similar instances.
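A minimal sketch of that fingerprinting flow, under one loud assumption: real systems use robust perceptual hashes that survive re-encoding, cropping, and other edits, whereas the exact cryptographic hash used here is a toy stand-in for illustration only.

```python
import hashlib

# Toy fingerprint-matching sketch. Assumption: an exact SHA-256 hash stands
# in for the robust perceptual hashes that real moderation systems use.

known_bad: set[str] = set()  # fingerprints of content already reported and reviewed

def fingerprint(video_bytes: bytes) -> str:
    """Derive a fingerprint from the raw content."""
    return hashlib.sha256(video_bytes).hexdigest()

def report(video_bytes: bytes) -> None:
    """A human report adds the video's fingerprint to the blocklist."""
    known_bad.add(fingerprint(video_bytes))

def check_upload(video_bytes: bytes) -> bool:
    """True if the upload matches a known-bad fingerprint."""
    return fingerprint(video_bytes) in known_bad

report(b"reported terror video")
print(check_upload(b"reported terror video"))  # True: exact copy matches
print(check_upload(b"re-encoded variation"))   # False: any change breaks an exact hash
```

The second check shows the weakness: any modified copy produces a different exact hash, which is precisely why detection only works well when the system knows specifically what to look for.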

If the algorithm doesn't know what to look for, its detection rate is much, much lower than that. And this is where we are today.

It's not like Facebook has an algorithm that works 100% of the time, but Zuckerberg has decided to dial it down to 70%.

So it makes no sense for journalists to report it this way, or to quote a politician saying that "Facebook should just make it work". We don't have the technology to do this, and the closer you get to 100% accuracy, the harder it gets.

In all likelihood, we will never reach 100% accuracy. We might get it to 95%, but that still means that if 1 million people upload a bad video, 50,000 videos will not be instantly detected.
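A quick sketch of the arithmetic behind that example, across the accuracy levels mentioned in this article:

```python
# Even a very accurate detector leaves a large absolute number of misses at scale.
uploads = 1_000_000  # the hypothetical 1 million uploads from the example above

for accuracy in (0.70, 0.90, 0.95):
    missed = round(uploads * (1 - accuracy))
    print(f"{accuracy:.0%} accuracy -> {missed:,} videos missed")
```

At 95% accuracy, 50,000 of a million uploads still slip through undetected.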

This is the reality of the world, and no level of wishful thinking or political pressure can change that.

It's like saying that car manufacturers should prevent all cars from crashing. We don't have the technological sophistication to do this, so reporting that makes no sense. And if a politician tries to win votes by demanding it, they are misleading the public.

So, at this point, people start to say: "Oh, but wait a minute. They should just hire human moderators!"

But no... that doesn't work either.

The first problem is that humans don't scale. If you have a terror attack where a large number of people and bots start to upload a million variations of a video, the number of human moderators you would need to respond to that in real time would be in the 100,000s.

It's just not possible. Humans don't work at scale.

The bigger issue, however, is that we humans are terrible at content moderation in general. We make far more mistakes than computers, and one easy place to see this is with copyright.

Imagine that YouTube hired a human to identify whether someone uploaded a video they didn't own. The human would easily be able to do this in all the most obvious cases.

For instance, if the human saw this, they would instantly recognize that this Fujitora person probably doesn't hold the copyright for Die Hard.

It's the same if the human saw someone upload something from Disney. The human would instantly recognize this, because they know what Disney looks like.

But the human would fail in every other case. For instance, imagine that a human moderator saw this video. Is that a stolen video? Did Suibhne make this?

You see the problem? As a human, you have no idea. You could probably spend an hour figuring it out, but that's not good enough. So humans are terrible moderators, because they can only detect the most obvious cases, whereas the computers are already much better, and can detect 70-90% of all the videos.

The problem is that the overlap isn't equal. The things humans are good at are often the opposite of what computers are good at. So, sometimes we see something completely obvious that a computer has missed, and we get confused as to why the computer couldn't detect such an obvious thing.

But the computer is actually saying the same thing. There are far more things that it can detect but that we humans completely miss.

I'm not saying that the internet is perfect. It obviously isn't, and there are several problems that we need to get better at. But the narrative I see us creating in the press is misleading. We only look at the small part of the world that the computers missed, and we write stories like: "OMG, look what happened when I searched for this specific thing on Google! Why didn't Google fix this?"

The answer is that they already did ... in 90% of the other cases.

What we really need is a different type of narrative. We humans can't do this any better, nor can the computer be 100% accurate all the time. So, focusing on either doesn't get us anywhere.

Instead, we need to talk about why some of the problems we have today exist in the first place.

Thomas Baekdal

Founder, media analyst, author, and publisher. Follow on Twitter

"Thomas Baekdal is one of Scandinavia's most sought-after experts in the digitization of media companies. He has made ​​himself known for his analysis of how digitization has changed the way we consume media."
Swedish business magazine, Resumé

 
