This week's Plus report is all about relevancy. We are going to talk about how the digital world has changed the way we define relevance, looking first at the 'circles of relevancy' as a whole and then, more importantly, at the details of how we measure and improve our relevancy as publishers.
This is an extremely important topic, because a lack of relevance is often the root cause of why so many traditional publishers struggle to make a real impact.
Relevance, however, is also a slightly odd thing to focus on, because many publishers genuinely think they are already as relevant as they can be. They will point out that their subscription numbers, their traffic, and even (in some cases) their digital ad revenue are all growing, so clearly they are doing alright. Right?
And yes, all those things are great. But, even with this growth, most publishers are missing out on a much higher level of potential.
So, in this Plus report, I want to really challenge the way you think about relevancy.
I also want to talk a bit about the future of machine learning and how it will impact journalism.
This is a really fascinating (but for some, scary) trend that is happening in our world. We are entering a new era where we, as humans, are incapable of seeing what the computers can see.
In the past, we were always in control. We were the ones who told the computers what to do.
Take a simple example. Excel has a useful feature called 'conditional formatting', where you can define what Excel should do, such as giving a cell a different color, whenever a condition is met.
In this case, I have told it to color all cells yellow that have a number higher than 150.
It's a very simple form of code, and more to the point, as a human, I am able to verify that it has been done correctly. I'm also able to tell that, in this case, there is an error: it has also colored the number 23, which it shouldn't have.
Of course, this is just an example that I set up, because Excel doesn't actually make this error, but my point is that, as a human, I'm always the smart one. I can define the code and I can verify that it works.
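For those who think in code, the rule I defined above is really just a one-line filter. Here is a minimal Python sketch of the same idea (the threshold and the cell values are my own illustration, not anything Excel-specific):

```python
# The same 'conditional formatting' rule expressed as code:
# flag every value higher than 150 (the ones Excel would color yellow).

THRESHOLD = 150

def highlight(values):
    """Return the values that the rule would color yellow."""
    return [v for v in values if v > THRESHOLD]

cells = [23, 151, 98, 200, 150]
print(highlight(cells))  # -> [151, 200]
```

The point is that this logic is simple enough that any human can read it, run it, and verify the outcome by hand.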
This is how we used to think about computers.
But what is happening now is that we are moving into a new era, where it's the computers who are the smart ones, and we humans wouldn't be able to verify that the computer is doing its job correctly even if we looked.
Let me give you two examples:
One example is the recent advances in machine learning algorithms that can detect cancer at a far earlier stage than we humans have ever been able to.
This is a massively important advancement in medicine that has the potential to save millions of people, but think about what is actually happening here.
If the computers are able to detect cancer at a point where we humans can't see it, how do we know that it's really there? Even if the computer tells us that it has found an early stage of cancer, we would have no way to verify it, because our 'ancient' human tools and eyes can't see it yet. To us, it still just looks like healthy body tissue.
Another example is from the world of astrophysics. Earlier this year, several newspapers reported on a phenomenon called 'Fast Radio Bursts', which many said "may be coming from aliens" (seriously, journalists... get a grip on yourselves).
Anyway, as far as we know, Fast Radio Bursts are extremely brief and very strong signals that are coming from outer space. And the way we used to detect these was to analyze the incoming data to look for something like this:
This is what a Fast Radio Burst looks like. It's a very visible line in the frequency band, with an equally easily detectable spike. And you can see why some people might look at this and think it isn't a natural phenomenon (OMG... aliens!!).
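To make that 'old way' concrete: in code, this kind of detection is essentially a threshold test against the background noise. Here is a toy Python sketch (the data, the threshold, and the method are my own simplification for illustration, not how radio astronomers actually process their data):

```python
# A toy version of the 'old way': scan a signal for a spike that
# stands far above the typical background level. The median and the
# median absolute deviation are used so the spike itself doesn't
# distort the estimate of the background.
import statistics

def find_spikes(signal, n_mad=5.0):
    """Return the indices of samples far above the background."""
    med = statistics.median(signal)
    mad = statistics.median(abs(v - med) for v in signal)
    return [i for i, v in enumerate(signal) if v > med + n_mad * mad]

# Flat background noise with one obvious burst at index 6:
data = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 40.0, 1.1, 1.0, 0.9]
print(find_spikes(data))  # -> [6]
```

A rule like this only finds the bursts that look the way we told it bursts should look, which is exactly the limitation the machine learning approach removed.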
But this is the old way of looking at things. What has happened since is that astrophysicists have created a machine learning algorithm to speed up and automate the detection of these things. So they trained the computer to spot them using a neural network type of algorithm, and then they set it to work ... and within just five hours it had discovered 93 more, a massive increase compared to how often we thought these things occurred.
But more to the point, what the machine learning algorithms detected was this:
BTW: The picture above of the Fast Radio Bursts is from this video by the immensely talented Dr. Becky Smethurst, which explains the science behind all of this.
All of these are verified Fast Radio Bursts, but just look at them. While a few of them have that distinctive line you saw before, most of them do not ... and many of them just look like noise to our human eyes.
In other words, this is another example of things that we as humans cannot detect.
But more to the point, once all this new data came in, the astrophysicists realized that these Fast Radio Bursts had no discernible pattern. We used to think they were a very well-defined phenomenon, but now we know that they appear random. There is no pattern in where they come from or how strong they are.
We still don't know what is causing them, but think about how much this story changed.
This is the new era that we are moving into, and it's going to have a big impact on journalism as a whole.
Specifically, there are three things that will change.
First, journalists will need to change their perception of how problems are solved. Today, journalists frequently demand that companies 'add humans' whenever something bad happens. For instance, when a journalist comes across a bad video on YouTube, many say that Google needs to hire more human moderators.
But this doesn't work anymore, because while we humans might be able to detect some of the more obvious things, there are thousands of others that we can't.
But also think in terms of detecting things like cancer. It is highly likely that the algorithms will occasionally get this wrong and misdiagnose someone who is perfectly healthy. But the answer to that isn't to 'add more doctors', because remember, the computers are detecting these things long before we humans can even see them.
So, if we just add more doctors, we might avoid misdiagnosing this one person, but we would also miss thousands of other people who would now be at a much greater risk of dying because we discovered their cancer too late.
The answer is not to 'add more humans'. That's the old way of thinking.
Secondly, we are going to have a really big problem with how to verify things in the future. If the computers can detect patterns far earlier than any human can see them, then, as journalists, we won't be able to verify them until much later.
This is a real problem in today's world of media where everyone is trying to 'be first', and where so much of our news is reported as a kind of live stream of ongoing events. The risk here is that, as journalists, we end up becoming conspiracy theorists, because we keep reporting things when we don't know what they are, hoping that we might turn out to be right.
This is not a good thing.
Finally, there are also a lot of good things that will come from this, specifically in helping journalists and editors plan what to cover, how to cover it, and how to fact-check it.
In the future, machine-learning newsroom assistants will play a bigger and bigger role in what stories we tell. Just think about the example of Fast Radio Bursts above. Everything in the Guardian article is basically misleading and out of date, because it only looks at what we humans can see.
This is the future we are heading into.
Before I wrap up, I want to give a shout-out to an article I read recently over at BBC Future about the 'perils of short-termism'.
It's about how today's media focus pushes people into constant outrage, and into worrying so much about the present that we lose the ability to see the future.
The article covers a lot of ground, and it's something that I absolutely agree with. As a media analyst, I'm seeing a very strong growth in news fatigue and news avoidance, which is linked to this phenomenon that the BBC calls temporal exhaustion (we are exhausted by time).
Modern society is suffering from 'temporal exhaustion', the sociologist Elise Boulding once said. 'If one is mentally out of breath all the time from dealing with the present, there is no energy left for imagining the future,' she wrote in 1978. We can only guess her reaction to the relentless, Twitter-fuelled politics of 2019. No wonder wicked problems like climate change or inequality feel so hard to tackle right now.
It's a very interesting article, and I do believe that this is exactly the problem we see in so many places today. How can people have hope for the future if they are constantly blasted with never-ending outrage about the present?
And more to the point, what happens to a society where the present day outrage isn't even an accurate representation of what the world is really like?
This is something we need to change, and as publishers, we are on the front line. This is our job to solve.
Of course, this also links back to this week's Plus report about relevancy. So take a look at that, and... let's change the world! ;)