When I was younger and far meaner than I am today, I recall playing a trick of mistruths on a friend's younger brother. Peter Jackson's King Kong was on the television, and, exploiting this poor boy's fear of the gargantuan ape, his brother and I concocted a fiction that the iconic movie monster was in fact real. But regaling our frightened target with stories of Kong's exploits could only get us so far; we needed proof. This was in the early 2000s, before social media, before even YouTube; if we were to show evidence, it had to be televisual, or worse, from a book. However, there was one resource that could be exploited: Wikipedia. As often lambasted by teachers at the time, Wikipedia was infamous for its policy of allowing anybody to edit its contents. It was perfect. Rather than supporting Wikipedia's noble goal of democratising knowledge, my friend and I took advantage of its utopian openness for our darker purpose. It began rather crudely: 'King Kong was a real monkey who', before consolidating the stories we had been telling all evening. Lo and behold, it worked: not only did it terrify our victim (sorry about that), but it remained unedited on the King Kong entry for up to four days. We relished imagining the many readers we had potentially duped, and the genius of our prank.

Almost two decades later, such once-childish japes have become an ideological weapon, armed with advanced technology, complex algorithmic systems and an ever-growing reliance on online information. Disinformation, while a historic problem, has become ever more dangerous thanks to the multitude of online sources that spread it, mostly through social media. With internet companies hesitant to levy serious sanctions on disreputable sources, what can be done to ensure that what we read, watch and hear is really true?
It is first important to distinguish between misinformation and disinformation. Misinformation is the spread of incorrect information regardless of intent. Disinformation is the spread of incorrect information with deliberate intent to mislead, through biased, manipulated narratives that can be used for propaganda or gain. The gain can be financial, ideological, or simply persuading a doubtful younger boy that your lies about King Kong are true. Over the pandemic, disinformation campaigns surrounding the COVID-19 vaccination have run rampant across the internet. According to a report by the Center for Countering Digital Hate (CCDH), most of the world's COVID-19 vaccine disinformation stems from just 12 individuals, otherwise known as the 'disinformation dozen'. These individuals range from health gurus and pseudo-science medical professionals to the nephew of former president John F Kennedy, Robert F Kennedy Jr.

From propagating supposed studies proving the dangers of the vaccination, to suggesting the programme itself is an exercise in government tyranny, the spread of disinformation has had an adverse effect on vaccine roll-out and trust in medical professionals.
One of the most famous sceptics of the vaccination programme is the MMA commentator, stand-up comedian and podcaster Joe Rogan. If you are of a certain demographic, you have probably heard of Rogan, whose podcast The Joe Rogan Experience is the world's most popular, with an average listenership of 11m per episode. In 2021 the music streaming platform Spotify purchased the exclusive publishing rights to The Joe Rogan Experience for a reportedly hefty $100m. Since then, the company has come under scrutiny for Rogan's public views on the vaccine, such as his declaration, contrary to mainstream medical advice, that he would not advise younger people to take it. This caused a flurry of complaints and condemnation, including an open letter signed by 270 doctors, physicians and science educators demanding that Spotify stop the spread of Rogan's assertions. After a review of Rogan's claims, Spotify decided that the episode had not been intrinsically anti-vax and took no further action, prompting artists such as grizzly Canadian Neil Young and grizzled Canadian Joni Mitchell to remove their music from the platform. Rogan's views on the vaccine have, however, been inconsistent: while he asserts that he is not anti-vax and has encouraged the vulnerable to take it, on his social media and podcast he has referred to widely disproven and incorrect studies pushed by known anti-vax and right-wing groups. This isn't the first time that Spotify has encountered issues with anti-vax content on its platform, having opted to remove the conspiracy theorist Pete Davies' podcast from its catalogue, as well as a controversial song by Stone Roses frontman Ian Brown. Unlike social media companies such as Facebook and Twitter, which tag misinformation, Spotify currently has no such measure in place.
While, for me, the true casualty of this debacle is no longer being able to listen to Joni Mitchell's Don Juan's Reckless Daughter, with that smooth Jaco Pastorius bass and hypnotic lyricism, the wider implications of Spotify's actions are far worse. Rather than making a concerted effort to tackle disinformation, Spotify washes its hands of the matter by removing content rather than confronting it. This is a tactic similar to that of other social media companies which, in an attempt to remain neutral, often allow disinformation and racist hatred to spread before they get around to removing it.

But social media companies are never truly neutral and, despite their size, do not bear the same publishing responsibilities as a newspaper. With no defined stance on, and no legal incentive to act against, the spread of disinformation, these companies pursue profit through clicks rather than the responsibility of truth. Most major publishers and trusted sources use social media platforms to share their content. But what are considered trusted sources are often the digital output of newspapers owned by the 1%, who want to ensure their content takes precedence. For instance, in Australia, media tycoon Rupert Murdoch lobbied the government to help pass a law that allows news media to charge a fee to social media companies, such as Facebook, for displaying their content. The news media bargaining code was enacted in 2021 and mandates that tech companies either agree to pay to publish content or risk arbitration over publishing rights. Facebook initially responded by banning all Australian news media from the site before re-negotiating. Such a law is also being considered in the UK. While the aim is to ensure that news media, whose advertising revenue has been engulfed by online companies, get a fair share of the pie, other issues potentially arise. One is the question of whom tech companies would want to make deals with: Murdoch's News Corp is a multi-billion-dollar conglomerate with a large readership and a strong legal framework for financial negotiations. Other trusted yet smaller outlets may not have the same bargaining power and could find themselves unable to strike a fair deal. This could leave social media feeds without differing opinions and open to inaccurate, biased information; with the likes of Murdoch and some of his editors being firm climate sceptics, that is quite worrying indeed. On the other hand, what if companies refuse to make any deal at all?
With social media a primary source of news for many people, what happens when trusted and legitimate sources are no longer present? What could be left is a handful of rogue publications.
The current system, in which the newspaper industry has an optional self-regulating body, IPSO, and tech companies handle their own moderation, cannot truly safeguard against financial bias or disinformation. Policing social media is difficult, and many argue that a rigorous regulation system would only be used to strangle free speech. Living in a liberal democracy should allow free access to the internet and information without governmental interference. Compare this with nations such as China, where the internet is heavily censored, or Russia, which, in the wake of the recent Ukraine crisis, has banned Facebook, Twitter and Instagram in an attempt to block outside influence. But social media companies need to understand their influence and take a harder line on disinformation in times of crisis, so that users are able to form a universal understanding of the truth.

Ultimately, the fight against disinformation comes not only from scrutiny of tech companies, newspapers and governments, but from an active user base, such as that Wikipedia administrator who, upon seeing the line 'King Kong was a real monkey who…', remained faithful to the site's mission to keep information accurate and ensured truth was restored. But this is becoming difficult in an ever-polarising society where people want to defend their right to say what they feel is right, and the right to be right all the time. Facts can be uncomfortable, and in troubling times people will prefer information that confirms their feelings and beliefs over that which challenges them. However, while it is often lamented that there is little time for such in-depth research, one has to wonder: if there is time to binge-watch television, follow day-long streams and doom-scroll endlessly, surely there is time for the truth.
Laurence Smither is a content writer for Networthpick and a screenwriter.