Facebook & WhatsApp! Stop paying lip service. Here's how you should 'lynch' fake news
By its very nature, fake news can't be controlled with the top-down approach that Facebook and WhatsApp are trying to adopt. Instead, it needs a bottom-up approach. It needs to be crowd-tackled.
Under fire in their largest markets, across India and the rest of Asia and Africa, for inadvertently propagating fake news, WhatsApp and its owner Facebook have made conciliatory noises about their desire to stop the menace. So far, though, they seem to be paying lip service rather than taking the threat seriously.
Facebook founder Mark Zuckerberg has said: "There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down". Facebook plans to review misleading or inaccurate content that can cause physical harm. This will be done in association with local law enforcement and intelligence agencies.
Meanwhile, WhatsApp, under attack after mass misinformation on its platform was linked to 39 lynching deaths in India in the past two months, has come up with its own measures. It is testing a limit on forwards on the platform, with the lowest cap, just five chats at once, reserved for Indian users, since Indians are the heaviest users of the forwarding feature, sending videos, photos and messages.
But both are futile exercises and will not deliver the results they are supposed to. Consider the five-simultaneous-chats limit: each chat group on WhatsApp can have a maximum of 256 members, and hate-mongers can create multiple such groups. One message sent to five full groups of 256 reaches 1,280 people in one shot. If each of those 1,280 receivers also forwards to five groups of 256, then hypothetically, in just two rounds of forwarding, a hate message could reach over 1.6 million people, and it would then continue to multiply exponentially.
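The arithmetic above can be checked in a few lines. This is only a back-of-the-envelope sketch, assuming every recipient forwards the message to five full groups, using the group size and forward cap cited in this piece:

```python
GROUP_SIZE = 256      # maximum members per WhatsApp group
FORWARD_LIMIT = 5     # proposed cap on simultaneous forwards

# One sender forwards to 5 full groups of 256 members.
hop1 = FORWARD_LIMIT * GROUP_SIZE          # 1,280 people after one forward

# Every one of those recipients does the same.
hop2 = hop1 * FORWARD_LIMIT * GROUP_SIZE   # 1,638,400 people after two

print(hop1, hop2)  # prints: 1280 1638400
```

The cap bounds the branching factor per user, not the total audience, which is why reach still grows exponentially with each round of forwarding.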
That makes you wonder about the intent behind the trial run. Why propose something that is bound to fail from the start? WhatsApp issued a blog post saying: "We built WhatsApp as a private messaging app - a simple, secure and reliable way to communicate with friends and family. And as we've added new features, we've been careful to try and keep that feeling of intimacy which people say they love. We believe that these changes - which we'll continue to evaluate - will help keep WhatsApp the way it was designed to be: a private messaging app." Essentially, that implies the changes will be minimal.
Zuckerberg's claim that Facebook can identify messages capable of causing physical harm is also hard to believe. By the time such a message is identified, if it is identified at all, the harm may already have been done. So are these statements only meant to fend off charges? It appears so, especially since the government on Thursday sent WhatsApp another notice asking it to find more effective ways to deal with the problem.
But there is a lot that both WhatsApp and Facebook can do, and are not doing. Instead of creating artificial barriers to the spread of fake news, which would inconvenience millions, it is better to deploy technology smartly.
These platforms need to give users the option to identify and 'flag' a message as potentially dangerous.
The platform's AI engine should watch for such flags, grading each message by the share of its recipients who flag it (5 per cent, 10 per cent and so on) and alerting fact-checkers.
Since such messages spread like wildfire, the AI engine can slow down or defer their spread (by anywhere between 15 minutes and two hours) the moment flagging by users crosses a threshold (say, 10 per cent of the users involved), until humans can verify the content.
But what is most critical is to hotlist the participants in such messages. Observing these participants over a period of time will let the platforms grade them from GREEN (least dangerous) to RED (most dangerous), depending on the number of such messages a person has been part of.
Their grading should be visible not just to that individual but to all users on the platform.
Facebook and WhatsApp can then decide when to warn such users, when to red-card such users and when to block such users.
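The GREEN-to-RED scale above could be as simple as counting flagged messages per user. In this sketch the band boundaries and the intermediate AMBER band are illustrative assumptions; the column itself specifies only the two endpoints:

```python
def grade_user(flagged_message_count: int) -> str:
    """Grade a user by how many crowd-flagged messages they have spread.

    Thresholds are hypothetical: the article names only GREEN (least
    dangerous) and RED (most dangerous); AMBER is an assumed middle band.
    """
    if flagged_message_count < 3:
        return "GREEN"   # least dangerous: occasional or no flagged forwards
    if flagged_message_count < 10:
        return "AMBER"   # warn the user
    return "RED"         # red-card or block the user

print(grade_user(0), grade_user(5), grade_user(12))  # prints: GREEN AMBER RED
```

Making this grade visible to everyone, as the column proposes, turns it into a reputational cost for serial forwarders rather than a private strike count.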
But to begin with, they need to identify all those involved in the 39 lynching deaths of the past two months and red-card or block them. This will disrupt service for some so-called 'innocent forwarders', but that is a small price to pay for being party to a heinous crime.
Fake news, by its very nature, can't be controlled with the top-down approach that Facebook and WhatsApp are trying to adopt. Instead, it needs a bottom-up approach. It needs to be crowd-tackled. Yes, ironic as it may seem, fake news needs to be 'lynched'.