
A Tale of Two Pandemics

Two pandemics? Haven't our plates been full enough with just one? Well, apparently not.


If you're wondering what the other pandemic is, it may be something that you're already familiar with: misinformation. But how bad can that be, compared to a real pandemic like the current COVID-19 one?

The coronavirus pandemic may have caused sudden and widespread devastation over a few weeks and months, but misinformation has festered over a much longer period. Now, at this time of crisis, it seems to be peaking in strength, almost opportunistically. While the misinformation pandemic hasn't left illness and death in its wake the way the coronavirus has, it has played a role in causing some deaths. Yes, you read that right: fake news can sometimes have life-threatening consequences. If you're having a hard time believing that, consider what happened in the Indian state of Maharashtra recently.

On April 18th, 2020, amidst the nationwide lockdown to contain the coronavirus pandemic, a horrific incident took place in a village in Maharashtra, situated 93 miles from Mumbai, one of India's largest cities and the site of one of the country's worst COVID-19 outbreaks. Three men (two religious teachers and their driver) were traveling by road to the neighboring state of Gujarat to attend a funeral. Because of blockades put in place for the lockdown, they had to deviate from the usual route by taking a small detour through the village. On that fateful day, that one turn ended up costing them their lives. As the car was passing through the village, a group of villagers gathered around it, pulled the occupants out, and began to flog them in what appeared to be uncontrollable mob rage. As chaos ensued, local policemen tried to intervene, but the scantily equipped officers were no match for the angry mob of over a hundred people. When the commotion died down after a few hours, authorities were able to transport the three grievously injured men to a hospital, where they succumbed to their injuries. The chilling incident became the newest addition to India's list of WhatsApp-related lynchings. Believe it or not, there's even a Wikipedia page titled "Indian WhatsApp lynchings."

After some investigation into the incident, it emerged that people in the village had suspected the three men of being child abductors and organ harvesters, based on a rumor that some of the villagers saw on WhatsApp, the most widely used messaging platform in India. The violent response stemmed from an urge to protect their village from outsiders they believed to be malevolent. The story made national headlines, and soon social media platforms were abuzz with denunciations, theories, and even insinuations of a communal angle. Newsrooms held debates that quickly turned into screaming matches and often skirted the real issue at hand. Political figures on opposing sides joined the fray, adding fuel to the fire. Soon, a random and unpredictable instance of extreme mob violence fueled by misinformation turned into a political football.


Now, I should make clear that this piece isn't about who is to blame and who isn't; it's more precisely about what is to blame. It is also meant to examine the nature of information flow on modern digital platforms, and the possible consequences when these platforms are used by a populace that isn't fully aware of their limitations.


As a developing nation plagued by severe deficiencies in education, administration, and law enforcement, India presents, of course, some of the more extreme examples of the perils of misinformation on digital platforms. But misinformation is alive and well all over the world, and it is now frequently rearing its ugly head during the global pandemic, further complicating the task of governments in managing the crisis. In the early days of the pandemic, a survey of over 8,000 adults in the U.S. revealed some unsettling results: about half of the respondents indicated that they had seen at least some misinformation about the pandemic, particularly in relation to the risks posed by the virus. In Italy, videos of military tanks patrolling the streets surfaced on multiple social media platforms, with claims that the tanks had been deployed in response to riots that had broken out in prisons in the area. Although there were genuine reports of riots in some parts of the country, the deployment of the tanks was found to have no connection with them. After a slew of similar fake news reports in different parts of Europe, the European Commission set up a joint EU page to bust myths about the COVID-19 pandemic and promote reliable sources of information. Today, four months after the outbreak, the world continues to grapple with two pandemics.
So the question is: what makes misinformation so rampant, even in developed countries?


Part of the answer lies in the very nature of digital media today, and the speed at which messages can disseminate to thousands of people in a matter of minutes. Digital platforms like WhatsApp and Facebook are designed to make information-sharing quick and easy, and the free-to-use model of both platforms makes them widely popular. Interestingly, the term "viral" is also used to describe content that is rapidly and widely shared on digital platforms, as its pattern of transmission is akin to that of an infectious virus. Worryingly, the key difference in the case of viral content is that the transmission is voluntary: by definition, viral misinformation is willingly shared by a large number of people. That raises the question of what causes people to willingly share false information, and how fallacies sometimes drown out reliable facts and data. Since false information on digital platforms is a relatively recent phenomenon, the extent of research and literature on the issue is also fairly limited. However, some studies have revealed noteworthy findings.



For instance, in 2019, MIT and the University of Regina conducted a study with over 8,000 online respondents on the spread of online misinformation. In the survey, respondents reported being distracted from considering the accuracy of content by other motives, such as securing more likes or followers, or achieving higher levels of engagement for the content they share. Some fairly recent examples of posts that went viral on social media seem consistent with these findings. Posts circulated on Twitter and Facebook with pictures of swans and dolphins apparently sighted in the canals of Venice after human activity had dwindled following the COVID-19 outbreak. Given the circumstances and the timing, the posts could easily come across as believable: with human activity and pollution at a minimum during lockdowns, nature finally seemed to have free rein. Upon closer examination, however, the information in these posts was found to be false. It turns out that swans are regularly spotted in some canals in Venice, and the pictures of the dolphins were taken elsewhere, at a port in Sardinia. Though the photos and their descriptions seemed to fit the larger story, the facts were simply untrue. One of the posts was traced to a resident of Delhi, who garnered over a million likes for her post on Twitter. Even when she was fact-checked, she was reportedly reluctant to take the post down, saying the attention it had gained was unprecedented and therefore far too valuable to unpublish.

Notice how this is consistent with the findings of the MIT-Regina study: motives such as garnering likes or popularity can distract users from evaluating the accuracy of content, or lead them to disregard accuracy altogether. In the internet realm, going "viral" has very different connotations. "Virality" can, in fact, get people their "15 minutes of fame" and give them unprecedented levels of visibility and overnight popularity.



The study of viral internet trends is a relatively nascent field that has been gaining more attention over the last decade or so. Among the prominent researchers in the field is Dr. Jonah Berger, a professor of marketing at the Wharton School of Business. A paper he co-wrote detailed the findings of a study that drew some very telling conclusions about viral content on modern digital media platforms: content likely to elicit high-arousal emotions (both positive and negative) such as awe, anxiety, and anger is more likely to be shared than neutral content or content that evokes low-arousal emotions such as sadness. This can explain why videos and news items billed as "shocking" or "inspiring," or promising that "this will change your life," tend to be more popular and keep showing up on content feeds. Often, content garners attention on shock value alone. One such example is a graphic video of a person jumping off a high-rise building that circulated on WhatsApp in March. The description was equally unsettling: apparently, a man had committed suicide after learning that he had contracted the coronavirus. After a fact-checking organization examined the video, it was found to be over four years old, and therefore could have had no connection with the coronavirus outbreak, which began in late 2019. Regardless of the patently false description, the video was widely circulated, particularly on WhatsApp, with many different versions of the description, much like a game of Chinese whispers (no pun intended). But why?

Well, a possible explanation follows from Dr. Jonah Berger's study: content is more likely to be shared if it elicits high-arousal emotions, and is therefore also more likely to go "viral." The implications of the findings from both studies mentioned above are cause for concern. Both point to a rather unsettling conclusion: when it comes to sharing online content, accuracy doesn't appear to be a top priority. Improved content regulation and moderation on digital platforms certainly seem to be the need of the hour. Much as the COVID-19 pandemic revealed cracks and faults in healthcare systems and public awareness all over the world, the prevalence of fake news points to similar deficits in content regulation on digital platforms, as well as a dearth of public awareness. Content regulation is also likely to be a greater challenge on digital platforms because national borders do not apply to them, and because of the sheer number of users and the volume of content shared every hour of every day. Digital platforms and messaging services have already begun initiatives to curb the spread of fake news, after continuous pressure from governing authorities. Recently, WhatsApp restricted the forwarding of messages identified as having already been forwarded more than five times, allowing them to be sent on to only one chat at a time. Facebook introduced a feature whereby users are notified if they have interacted with posts containing false information about the coronavirus, and are directed to a WHO "myth busters" page.

While introducing such features is a good start, the current global crisis has highlighted the need for a more comprehensive, global framework on misinformation and its spread on digital platforms. Beyond that, public awareness campaigns and the incorporation of media literacy into mainstream education will play an important role in the long-term goal of overcoming the misinformation pandemic, which is likely to last a lot longer than the COVID-19 one.
