Rapid Response: A Holistic Approach to Countering Online Misinformation

Writing for Supply2Defence, Karl Swannie, Founder of Echosec Systems, examines how misleading narratives are emerging faster than detection algorithms and content regulation teams can handle – and argues that we don't have to accept online misinformation as a fact of life

In the wake of the coronavirus pandemic and numerous election campaigns, governments around the world have called on major tech companies to combat the spread of fake news online. Understanding trends in disinformation (false content spread deliberately) and misinformation (false content spread regardless of intent) is crucial for an informed response. This helps minimise potential damage to democracies and to public health and safety.

But is this cooperation enough to stay ahead of the threat? And are mainstream online networks the only piece of the puzzle?

While many platforms – including Google, Facebook, and Twitter – have pledged to crack down on fake news, misleading content still reaches thousands of users before detection. And even though misinformation reaches more viewers on mainstream networks, less-regulated networks should also be considered in any counter-misinformation strategy.

A small part of the bigger picture

It’s been over a year since the UK Government’s Rapid Response Unit cracked down on coronavirus misinformation – so what does the state of misinformation look like now?

Misinformation is recognised by the Government as a potential national security threat, but there are currently no laws regulating it. This means the Rapid Response Unit depends largely on the cooperation of technology companies like Facebook to curb the spread of misleading content.

But the reality is that even with big tech on board, it's very hard to manage online misinformation with human intervention and detection algorithms. According to new data from CrowdTangle, a Facebook-owned content monitoring tool, it's still possible for false COVID-19 vaccine information to clear 12,000 interactions before takedown. That figure doesn't include privately shared content or users who saw the content without interacting with it.

The misinformation problem also goes beyond mainstream social media and news networks. More covert online sources, like alt-tech platforms, deep web forums, imageboards, and dark websites, also play a significant role. These online spaces are far less regulated than mainstream sites. They often host fringe groups who instigate misleading content surrounding conspiracies and radical worldviews—whether it’s in response to a pandemic, political climate, or other events.

Throughout the pandemic, discussions and marketplaces on these sources have circulated fake news, solicited fake COVID-19 vaccines, and offered misinformation-based scamming tools like phishing pages—which target citizens and governments alike.

Beyond the challenge of addressing mainstream networks, these sources are valuable to counter-misinformation teams who need to:

  • Understand and predict misleading narratives originating on more obscure social sites
  • Investigate and counter phishing attacks and other forms of fraud based on misinformation
  • Educate the public about emerging misinformation as early as possible

Political and social impacts still unfolding

According to 2020 research by King’s College London, the impacts of mis- and disinformation are still not entirely clear or measurable. But we do know that misinformation can influence public safety risks (look no further than the 2021 Capitol Hill insurrection or the public health impacts of false coronavirus information). Misinformation can also affect financial markets, prompt cyber compromise, sow social unrest, and potentially influence democratic processes.

How can governments stay ahead of online misinformation as tech giants struggle to keep up with its spread?

The Rapid Response Unit’s goal is to find misleading content, assess risk and response, and if necessary, create and target a more balanced counter-narrative. In this process, early detection is critical to minimising or avoiding damage caused by misinformation.

Early detection is only possible when unit personnel have real-time, comprehensive access to online data sources where misinformation exists. This includes both mainstream networks where misinformation often gains the most traction and exposure—and covert sources where it may originate or be used to target the data of citizens, enterprises, and governments.

Searching for misinformation on the surface web is already an overwhelming task without adding hidden sources to the mix. The good news is that this process can be streamlined with the right tools.

Commercial open-source intelligence (OSINT) software helps analysts, data scientists, and other professionals gather the information needed to support counter-misinformation work. These solutions help users aggregate, search, and prioritise public online content more efficiently than navigating sources manually. As misinformation evolves rapidly online, emerging OSINT tools offer features designed to enable early detection and minimise analyst fatigue. These include:

  • Improved data coverage. Many OSINT tools focus only on one feed type – for example, social media sources vs. dark web sources – rather than combining different data source types in one tool. Consolidating a range of sources in one interface streamlines analyst workflows and ensures that relevant data is not overlooked. Tools should also include more obscure online sources and social platforms that aren’t typically offered through commercial solutions.
  • Entity pivoting. Another benefit of combining social media, deep, and dark web sources in one tool is the ability to map connections between entities, such as accounts, keywords, and other identifiers. This helps analysts track misinformation networks across the web, assess their migration and reach more accurately, and locate influencers.
  • Machine learning. AI isn’t sophisticated enough to definitively separate misinformation from legitimate content. But machine learning models can be trained to classify content types and streamline human analysis and triage for countering misinformation. Machine learning also helps with multilingual analysis, which is often required for misinformation initiatives where analysts aren’t fluent in the target language.
  • Real-time alerting features. Tools that alert users as soon as concerning content hits the web allow counter-misinformation teams to respond faster and stay ahead of its spread.
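Entity pivoting, as described above, amounts to traversing a graph of co-occurring identifiers. The sketch below illustrates the idea with a breadth-first search over a toy co-occurrence graph; it is not from any particular OSINT product, and every account name, hashtag, and domain in it is hypothetical.

```python
from collections import deque

def pivot(connections, seed, max_hops=2):
    """Breadth-first search over an entity graph, returning every entity
    reachable from the seed along with its hop distance."""
    reached = {seed: 0}
    queue = deque([seed])
    while queue:
        entity = queue.popleft()
        if reached[entity] == max_hops:
            continue  # don't expand beyond the hop limit
        for neighbour in connections.get(entity, ()):
            if neighbour not in reached:
                reached[neighbour] = reached[entity] + 1
                queue.append(neighbour)
    return reached

# Hypothetical co-occurrence data: which accounts, hashtags, and
# domains appeared together in collected posts.
connections = {
    "account:@vax_truth": {"keyword:#plandemic", "domain:fake-cures.example"},
    "keyword:#plandemic": {"account:@vax_truth", "account:@freedom_news"},
    "account:@freedom_news": {"keyword:#plandemic", "domain:fake-cures.example"},
    "domain:fake-cures.example": {"account:@vax_truth", "account:@freedom_news"},
}

print(pivot(connections, "account:@vax_truth", max_hops=2))
```

Starting from one flagged account, the search surfaces the hashtag and domain it shares posts with (one hop) and a second account linked through both (two hops) — the kind of pivot an analyst would otherwise perform manually across several platforms.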

Keeping up with online misinformation is challenging enough for agile tech companies, let alone government agencies. Misleading narratives emerge on a near-daily basis, often faster than detection algorithms or content regulation teams can respond.

Misinformation legislation, public education, content warnings, and counter-misinformation technologies are all part of the solution. When it comes to the latter, teams like the Rapid Response Unit can benefit from software that enables early detection and data coverage for a range of surface, deep, and dark web networks.

The full impacts of mis- and disinformation will take time to unpack. But with the right OSINT solutions, governments are better equipped to understand and minimise harm to populations, companies, data, and other national security interests as the threat evolves online.
