I’ve seen a lot of people crying out against Facebook lately. It is hard to deny that they had a huge impact on the past US elections. They are the world’s biggest social media company. And yes, social media is still media, so they are a media company; their media just happens to be crowdsourced rather than produced in-house.
Fake news is not created by them, but it spreads like wildfire on their platform. After all, sometimes fake news is what people want to believe. Everyone is happy to see their beliefs publicly supported.
Facebook’s incentive is to retain users and keep them engaged as much as possible, so they show people the content they want to see. That means Facebook is effectively incentivized to prioritize fake news in users’ news feeds.
The thought process for the majority of people so far would be something like:
- Facebook serves fake news to their users
- The massive sharing of fake news influenced the US elections
- The outcome of the US elections was not favorable
- So Facebook must drop fake news from their ranking algorithm
Up to this point, the reasoning seems sane. The problem is the next step.
Some say the solution is for Facebook to run a fake news detector and either ban the offending stories or alert the user.
This could be achieved today with common machine learning techniques. This seems pretty sane too. The problem is that this filtering algorithm is a double-edged sword.
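To make the “common machine learning techniques” claim concrete, here is a minimal sketch of the kind of classifier that could power such a detector: a bag-of-words Naive Bayes model. The training headlines and labels are invented for illustration; a real detector would train on a large labeled corpus.

```python
# Minimal Naive Bayes text classifier sketch. All training data below is
# made up for illustration; it is not a real fake-news dataset.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document totals."""
    counts = {}
    totals = Counter()
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the highest log posterior probability."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, word_counts in counts.items():
        total_words = sum(word_counts.values())
        # Log prior + log likelihood with add-one (Laplace) smoothing.
        score = math.log(totals[label] / sum(totals.values()))
        for word in tokenize(text):
            score += math.log((word_counts[word] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

training = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you won't believe this one weird trick", "fake"),
    ("senate passes budget bill after long debate", "real"),
    ("central bank raises interest rates by a quarter point", "real"),
]
counts, totals = train(training)
print(classify("miracle trick doctors hate", counts, totals))  # prints "fake"
```

The point is not the model itself but who runs it: whoever controls the training data and the threshold controls what gets labeled “fake”.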
Since it would be run by a central entity, you cannot influence it. They determine what’s fake and what isn’t. That is an extremely powerful propaganda tool: a filter that can simply remove information, or convince users it is plain false.
Even if they let users customize it (which almost nobody would do), there is no way to prove they are obeying the custom filtering settings. This is true of any centralized service, since you cannot see the code that is running on their servers.
This inevitably creates a dictatorship: a company that manages all your social connections can now also determine what is true and what is not.
Mainstream media is not always right, but they have the brand. People believe them without fact checking; even some editors don’t fact check. So you could end up with a totally incorrect piece of news that this filter marks as correct. Or the other way around: a fine piece of news can be marked as fake, undermining its credibility forever.
Citizens have a vote because they are supposed to be informed about the options.
If they are not, that is a problem with democracy, not with some private company.
Actually, by handing the power to decide what people should read to an opaque algorithm, we are making the problem worse. It lets this company make decisions for the voters, thus influencing their votes.
We have gone through why people insisting that Facebook should remove fake news are plain wrong. They are not technical enough to understand the implications of that supposed solution.
Nor do they understand how governance works. Facebook is a private company. Some people seem to believe Facebook is a public service of some kind. It is not. They can build the product they want to build, and as a user you don’t have any decision power.
And that’s fine! We all agreed to that when we signed up. So, if you don’t like Facebook, why not stop complaining and just leave?
New, better social networks are possible. Networks that can solve the fake news problem without falling into a dictatorship.
So an algorithm run by a private company on their opaque servers can be useful, but it leads to a dictatorship and gives that company unprecedented power.
If we believe the solution is an algorithm, we would need it to be:
- Open source
- Fully customizable by the users
- Its defaults agreed upon by consensus among all parties using the network
- The filtering and flagging are done locally, and thus verifiable and transparent
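The “fully customizable, run locally” requirements above can be sketched very simply: the flagging rules live in a plain settings structure the user can inspect and override, and the filter runs on the user’s own machine, so every rule is auditable. The rule names, thresholds, and sample post below are all invented for illustration.

```python
# Sketch of a locally-run, user-customizable post filter. The settings
# fields and the post format are assumptions made up for this example.

DEFAULT_SETTINGS = {
    # Network-agreed defaults; every field can be overridden locally.
    "min_source_reputation": 0.3,
    "flag_unverified_claims": True,
    "blocked_sources": set(),
}

def flag_post(post, settings=DEFAULT_SETTINGS):
    """Return a list of human-readable flags; an empty list means the post
    passes. Because this runs client-side, the user can audit every rule."""
    flags = []
    if post["source"] in settings["blocked_sources"]:
        flags.append("source blocked by user")
    if post["source_reputation"] < settings["min_source_reputation"]:
        flags.append("low-reputation source")
    if settings["flag_unverified_claims"] and not post["verified"]:
        flags.append("claims not independently verified")
    return flags

post = {"source": "example.news", "source_reputation": 0.1, "verified": False}
print(flag_post(post))  # flags the post instead of silently hiding it
```

Note the design choice: the filter flags rather than deletes, so information is never silently removed, which is exactly what the centralized version cannot guarantee.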
And this brings us to something like a blockchain-based social network.
This network would be peer-to-peer: there is no central company with opaque servers.
It would be open source, so you can review the filtering algorithm you are running.
It would have a consensus mechanism so users can decide what should be the sensible defaults for the algorithm.
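As a toy illustration of that consensus step, here is a sketch of users voting on a proposed change to the default settings, with a simple majority threshold. The threshold value and the proposal format are assumptions, not a real blockchain voting protocol.

```python
# Toy majority vote over a single proposed change to the network defaults.
# The 50% threshold is an assumption for illustration.

def tally(votes, threshold=0.5):
    """votes: mapping of user id -> True/False on a single proposal.
    Returns True if the share of yes-votes exceeds the threshold."""
    if not votes:
        return False
    return sum(votes.values()) / len(votes) > threshold

votes = {"alice": True, "bob": True, "carol": False}
print(tally(votes))  # 2 of 3 in favor, so the proposal passes
```

A real network would need sybil resistance and on-chain tallying, but the principle is the same: the defaults are set by the users, not by a company.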
The incentives here would be:
- The network creators want all news to spread, since more content means more usage. Their incentive is for the filtering list to be as minimal as possible, which disincentivizes censorship
- The publishers would want to convince users that their content is accurate, so they would put in place easy ways for people to verify the facts
- The users, for the most part, don’t care and will use the defaults, rarely voting to change them. Yet the users who do care will fight for accuracy, much like Wikipedia contributors do
So stop crying out to a private company, and start helping alternatives that actually solve the problem instead of making it worse.