What to do about disinformation

In the digital age, the echoes of truth and falsehood reverberate with increasing intensity. Each click, share and retweet amplifies narratives, shaping perceptions and moulding realities. Yet amid this cacophony of voices, some pressing questions emerge. Whose narrative do we trust? And at what cost?

Social media platforms such as X, once celebrated as democratising forces, are under ever more scrutiny. The challenge isn’t just about rogue posts or unchecked algorithms. It reflects a deeper malaise, rooted in societal distrust, exacerbated by business models and perpetuated by reactive policies. The ramifications of disinformation spill on to the streets with tangible, often devastating real-world consequences.

These processes were already under way in 2014, when I founded the open-source investigative group Bellingcat. Having spent two years writing about the conflict in Syria, I understood that pooling knowledge and expertise online was a way not only to address disinformation directly, but to democratise the process of investigation itself.

Now, nearly a decade on, the problem is on a different scale. But there are solutions available — if we are prepared to put them into practice.

In charting a course, the first thing we need to do is to confront the unsettling transformations happening within platforms that have become primary news sources for many. X, formerly Twitter, stands as a prime example of this shift.

Before Elon Musk’s acquisition of the platform, its verification process was primarily designed to authenticate the identities of high-profile figures, including celebrities, politicians, journalists and other notable individuals. Applicants for a blue checkmark would provide evidence of their identity and public relevance, whereupon Twitter’s team would check that the accounts met certain criteria, such as being in the public interest and having a record of adherence to Twitter’s rules.  

Now, by contrast, the blue tick is conferred in return for a monthly fee, muddying the waters of authentic discourse. If the distinction between genuine authority and purchased prominence becomes ambiguous, discerning truth from noise becomes a Herculean task for the everyday user.

This transformation takes on an even graver aspect when one considers the platform’s algorithmic predispositions, which, driven by engagement metrics, can amplify sensationalist or controversial voices. 

The risk here is twofold. Immediately, there’s the danger of public opinion being moulded by skewed or even false narratives. But, more insidiously, as sensationalism consistently trumps sober, fact-based reporting, public trust in social media platforms as reliable news sources erodes. This escalating mistrust, exacerbated by the spread of disinformation about traditional news sources, threatens to undermine public confidence in the media as a whole. 

This has become apparent in the current Israel-Palestine conflict, during which accounts that had previously been banned from Twitter and were restored in Musk’s “free-speech” era have fuelled the spread of disinformation. One example is Jackson Hinkle, a US rightwing influencer who has churned out “anti-Zionist” content and false allegations against Israel, and whose audience on X grew from 400,000 followers to more than 2mn after the October 7 Hamas attacks. Among his posts was a picture of a woman in a bombed-out building with the caption “You CANNOT BREAK the Palestinian spirit”. In fact, it was a prize-winning image taken by the Iranian photographer Hassan Ghaedi in Homs, Syria, in 2016.

The repurposing of images of carnage in Syria as anti-Israeli or anti-Palestinian disinformation, often by individuals who had previously denied the reliability of such images, is particularly grotesque. In the most egregious example, a meme used by the pro-Assad community to claim that volunteers for the White Helmets in opposition-controlled Syria were faking the rescue of children was repurposed to claim Palestinians were doing the same thing, as part of the “Pallywood” narrative used to deny the suffering of the Palestinian people.
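
Matching a viral picture against an archived original is a core verification step, and one that open-source investigators often automate. The sketch below is a rough illustration rather than Bellingcat’s actual workflow: it assumes the Python Pillow and imagehash libraries are installed, uses hypothetical file names, and relies on a candidate original having already been found, for example via reverse-image search.

```python
# A rough sketch of testing whether a viral image is a recycled older
# photograph, assuming the Pillow and imagehash libraries
# (pip install Pillow imagehash) and hypothetical file names.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_post.jpg"))
candidate = imagehash.phash(Image.open("archived_original.jpg"))

# Perceptual hashes of the same photograph stay close even after
# resizing, recompression or watermarking; subtracting two hashes
# yields their Hamming distance.
distance = viral - candidate
print(f"Hamming distance: {distance}")

# A threshold of about 10 is a common heuristic, not a hard rule.
if distance <= 10:
    print("Likely the same image: check the original's date and location.")
else:
    print("No strong match: try other candidate originals.")
```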

Reflecting on new X features such as the “For you” tab that promises personalisation, the pitfalls become evident. While this claims to tailor content, the underlying mechanics often lean towards virality, sometimes at the expense of veracity. Monetisation mechanisms such as paid subscriptions for verified content further complicate the landscape. While they offer a potential revenue model for creators, they also commodify news, blurring the boundaries between genuine reporting and content created for profit. 

Most recently, Musk asked X users to vote on whether the notorious conspiracy theorist Alex Jones should have his account restored. Last year, Jones was ordered to pay almost $1.5bn in damages to the families of victims of the Sandy Hook school shooting, which he had claimed was a hoax. At the time, Musk wrote that he would not reinstate Jones: “My firstborn child died in my arms. I felt his last heartbeat. I have no mercy for anyone who would use the deaths of children for gain, politics or fame.”

This year, however, Musk proclaimed that “the people have spoken”, with 70 per cent of X users voting for Jones to return to the platform. Jones’s first act was to retweet a congratulatory tweet from the influencer Andrew Tate: “To show respect to Alex Jones for his triumphant return and to show respect to Elon being a hero — tell a globalist to get fucked today.” None of which does anything to restore our faith in X’s future reliability as a source of information.

In grappling with these shifts, a central concern emerges: if digital platforms such as X continue to prioritise commercial imperatives over authentic information dissemination, what becomes of our global discourse? The challenges aren’t merely about individual platform decisions but a broader ethos governing the digital information ecosystem. As we weigh commercial interests against the duty of providing accurate information, the path forward demands a renewed commitment to authenticity, transparency and trust.


As the digital realm’s challenges mount, calls for state-led intervention grow louder. Governments across the world, alarmed by the implications of unbridled platforms, are contemplating regulatory measures to curb the spread of disinformation. But while the intent might be noble, the journey towards state-mediated truth is rife with complexities.

In an age when X and its ilk have increasingly become the public’s primary news sources, the thought of state bodies arbitrating “truth” raises a multitude of concerns. For starters, the definition of truth is inherently subjective, varying across political, cultural and geographical boundaries. A narrative deemed factual by one nation’s standards might be labelled misinformation by another — for example, the perception of the Armenian genocide in Turkey versus other parts of the world, or Russia’s prosecution and persecution of those who criticise its conduct in Ukraine.

The potential for governmental over-reach is clear. While democratic nations might employ regulations with a genuine intent to combat falsehoods, the same tools could be weaponised by authoritarian regimes to suppress dissent, curtail freedoms and consolidate power. Russia, China, Iran and Venezuela could even use western states’ attempts at countering disinformation as a pretext to justify their own draconian censorship and control of the internet. In such contexts, the line between combating misinformation and controlling narratives becomes precariously thin. The risk? A digital space where genuine discourse is stifled under the guise of regulatory oversight.

Such state-led interventions, if not judiciously implemented, could inadvertently exacerbate the very problem they aim to solve. If people perceive these interventions as mere tools to control narratives rather than genuine efforts to combat disinformation, public trust could erode further.

The economic implications of stringent regulations also cannot be ignored. Major tech companies, the backbone of our digital information ecosystem, could face significant challenges adapting to a tightly regulated environment. Stricter rules could have an impact on their business models, potentially reducing profitability, stifling innovation, or even prompting exits from certain markets. 

The delicate balance between combating misinformation and ensuring digital freedoms remains elusive. Instead of viewing state intervention as the panacea, we must recognise it for what it is: one tool among many.


In Ukraine, a vital aspect of countering disinformation emerged through the rapid response of online communities skilled in open-source investigation. These groups acted with remarkable speed, often debunking false narratives within an hour of their publication. This immediate action was crucial; it prevented the disinformation from seeping into, and becoming a part of, the usual media ecosystems that would propagate it. 

This included the swift debunking of a supposed Ukrainian IED attack on civilians, where savvy internet users identified autopsy cuts on the purported victims inside a burnt-out car. In another example, footage posted by the pro-Russian separatist Donetsk People’s Republic, alleging that Ukraine had attempted to create a chemical incident, was debunked within minutes of being posted online: Twitter users discovered metadata inside the file revealing that it had been created before the incident, and that its audio of explosions was lifted from a 13-year-old YouTube video.
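
Checks of this kind are straightforward to reproduce. The sketch below is a minimal illustration, not the investigators’ actual tooling: it assumes ffprobe (part of FFmpeg) is installed and uses a hypothetical file name to read a video’s embedded creation timestamp.

```python
# A minimal sketch of the metadata check described above, using ffprobe
# (part of FFmpeg) to dump a video container's format tags as JSON. The
# file name is hypothetical; many files carry no creation_time tag, so
# absence proves nothing, but a timestamp predating the claimed event
# is a strong red flag.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "alleged_incident.mp4"],
    capture_output=True, text=True, check=True,
)
metadata = json.loads(result.stdout)

# creation_time, when present, lives among the container's format tags.
tags = metadata.get("format", {}).get("tags", {})
print("creation_time:", tags.get("creation_time", "not present"))
```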

By the time that individuals or groups prone to sharing disinformation encountered these narratives, they had already been thoroughly discredited. This proactive approach of digital communities in Ukraine illustrates the significant impact that empowering the public with the skills to identify and refute falsehoods can have. 

Addressing the root causes of disinformation requires a grassroots approach. Education stands at the forefront of this strategy. The idea is simple yet transformative: integrate open-source investigation and critical thinking into the curriculum. Equip the youth with the skills to navigate the labyrinthine digital realm, to question, analyse and verify before accepting or sharing information.

I was deeply inspired by the work of The Student View, a digital media literacy charity that trains young people to be critical consumers and creators of media. In particular, its work with students in Bradford in northern England, investigating the prevalence of high-speed police chases on their streets, shows that equipping young people with investigative skills can help them understand and influence the world they live in. These small acts of empowerment inspired Bellingcat’s own work with The Student View to develop a school curriculum to help combat misinformation.

Such initiatives hold promise for several reasons. First, they seek to address the roots of the issue — the societal and psychological factors that draw people to misinformation and conspiracy theories. By imparting skills and fostering a culture of inquiry, we can help to inoculate future generations against the allure of falsehoods.

Moreover, this approach recognises and harnesses the power of community. By connecting educational institutions with local media networks, we achieve a dual objective: we empower the younger generation, giving them a platform and a voice, while simultaneously rejuvenating local journalism. This synergy can foster a new era of investigative reporting, rooted in community concerns and grounded in evidence.

The potential of such a grassroots movement doesn’t stop at school gates. Envision a world where universities become hubs of open-source investigation, with national and international networks of students sharing methodologies, tools and insights. As these students move into their professional lives, they carry forward not just skills but a mindset — one that values evidence over hearsay and critical thinking over blind acceptance.

In combating the spread of digital disinformation, traditional media organisations can play a vital role by forming partnerships with university-level pop-up newsrooms and investigative collectives. Such collaborations would bring together the experience and resources of established media outlets with the innovative approaches and technological adeptness of university-led initiatives. 

Amnesty International’s Digital Verification Corps, made up of students from across the world collaborating to investigate human rights violations, demonstrates how impactful such an approach can be. Not only does it increase Amnesty’s capacity to do its own work, but it also acts as a place where students and universities can learn to apply skills and develop new methodologies.

In essence, the grassroots approach offers a vision of a world where communities are connected by shared values of authenticity and inquiry; where narratives are not just consumed but questioned, analysed and co-created. It’s a world in which digital platforms, instead of being mere conduits of information, become arenas for genuine discourse and learning.

To realise this vision, a collective effort is needed. Policymakers, educators, tech leaders and communities must come together in a concerted push towards an informed and engaged society. Investments in education, collaborations between media and academic institutions, and a renewed commitment to journalistic integrity are vital components of this journey.

As the digital horizon expands and the challenges of misinformation grow ever more complex, the grassroots approach offers a glimmer of hope. It reminds us that, at the heart of the digital age, lies a fundamental human desire: the quest for truth. And by tapping into this innate drive, by empowering individuals and communities, we can find our way towards a future where trust is restored.


If, on the other hand, disinformation is left unchecked, the future looks very bleak. Faith in traditional institutions — whether media, academia or governance — would wane further. Over time, scepticism would morph into cynicism, and every source, regardless of its credibility, would be viewed with suspicion. Disinformation, by its nature, is divisive.

Democracies, in particular, rely on an informed electorate to function optimally. A populace continuously exposed to disinformation is susceptible to manipulation. Political campaigns could then pivot from policy and vision to propaganda and sensationalism. The very essence of democratic processes — free and fair elections — could be jeopardised.

Education would face its own set of challenges. In a world where any information, regardless of its veracity, is readily accessible, the traditional educational paradigm could be upended. Historical revisionism, fuelled by falsehoods, could reshape collective memories. How does one teach critical thinking in an environment where facts are fluid? 

Perhaps the most damaging impact would be on individual mental wellbeing. When we are inundated with conflicting narratives, cognitive dissonance becomes a daily struggle. The constant barrage of information, with no reliable means to discern its authenticity, can lead to news fatigue, apathy and, in some cases, a complete disengagement from current affairs.

In painting this picture of the future, the intent is not to spread doom and gloom but to underscore the urgency of the situation. Disinformation, if unchecked, is not just a challenge of the present but a looming crisis for the future. The cost of inaction is one we cannot afford to pay.

Eliot Higgins is the founder of Bellingcat



