In an age of endless political chatter, keeping tabs on how public discourse shapes democracy is more challenging – and more urgent – than ever. But what if we could spot threats before they escalate?
Artificial Intelligence is often cast as a villain in the democratic narrative – a tool for spreading disinformation, creating deepfakes, and amplifying political polarisation.
However, new research from Monash Business School Professor Simon Angus offers a different perspective: AI as a potential guardian of democracy.
The paper – part of a series published by the Australian National University for the Department of Home Affairs Strengthening Democracy Taskforce – suggests advanced AI tools could help us understand, at unprecedented scale, the public narratives and issues that matter to democratic resilience.
“Growing up in Australia, I consider our democracy to be incredibly precious, so when I was asked to turn my attention to this issue, I was very keen to help,” Prof Angus said.
“Rightly, the new wave of generative AI tools has received a lot of negative commentary. However, I felt it was important to highlight how NLP and AI technologies also carry huge potential for improving our understanding of democratic resilience.”
Why traditional methods fall short
Public narratives wield immense power, shaping individual beliefs and societal norms.
But the sheer volume and speed of modern discourse — from the 24/7 news cycle to social media — make it nearly impossible for traditional tools and tracking methods to keep up.
“There is a perception that threats to the health of our democracy are arising from a range of areas, including polarisation, mis- and disinformation, social media algorithms, and imported narratives that may undermine trust and confidence in our electoral and government institutions,” he said.
“The central idea of the study is that it’s now possible to track narratives that matter to democracies across large amounts of text, such as Parliamentary speeches, news, and social media, and to do this with much greater accuracy than ever before.”
‘The opportunity is huge’
As part of the research, Professor Angus and his team used a technique developed in Monash’s SoDa Labs called ‘paired completion’.
This method leverages AI tools to evaluate how closely a text aligns with specific narratives.
To demonstrate its power, Professor Angus and his team analysed more than 4,000 climate change speeches from Australian prime ministers and opposition leaders over two decades, quantifying their alignment with climate change science or denialism.
“With human-based reading, labelling, and analysis, this would have taken hundreds of hours,” said Prof Angus.
“With our new AI methodology, it takes a few minutes. The opportunity is huge.”
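The core intuition behind paired completion can be sketched in a few lines of code. The idea is to ask how plausibly a target text continues from short "primer" texts representing each competing narrative, and to compare the two conditional scores: a positive difference means the text aligns more with the first narrative. The snippet below is a minimal toy sketch, not the SoDa Labs implementation: `toy_log_likelihood` is a crude word-overlap stand-in for a language model's conditional log-probability, and the primer and speech strings are invented for illustration.

```python
import math
from collections import Counter

def toy_log_likelihood(primer: str, text: str) -> float:
    """Stand-in for a language model's log P(text | primer).

    A real paired-completion system would sum the model's token
    log-probabilities for `text` conditioned on `primer`; here we
    use a simple word-overlap proxy so the sketch is self-contained.
    """
    primer_words = Counter(primer.lower().split())
    score = 0.0
    for word in text.lower().split():
        # More overlap with the primer -> higher score.
        score += math.log(1 + primer_words[word])
    return score

def paired_completion(text: str, primer_a: str, primer_b: str) -> float:
    """Log-likelihood difference between the two narrative primers.

    Positive values suggest the text aligns more with narrative A,
    negative values with narrative B.
    """
    return toy_log_likelihood(primer_a, text) - toy_log_likelihood(primer_b, text)

# Hypothetical primers echoing the climate-speech example in the article.
science = "climate change is real and caused by humans so we must cut emissions urgently"
denial = "climate change is a hoax and alarmist exaggeration with no need for action"

speech = "we must cut emissions urgently because the climate is changing"
print(paired_completion(speech, science, denial))  # positive -> science-aligned
```

Swapping the toy scorer for genuine model log-probabilities is what lets the approach scale: scoring a speech is then just two forward passes through a language model, which is why an analysis that would take hundreds of hours by hand can run in minutes.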
The paper and paired completion method are currently under peer review.
Real-time monitoring of political speech
One of the study’s key recommendations is the creation of an Observatory of Democratic Narratives.
This AI-driven platform would monitor political rhetoric and public discourse in real time, exposing divisive or anti-democratic language moments after it is used and giving the government, journalists, researchers, and the public timely insight into the health of our democracy.
“Imagine someone standing up in Parliament and bringing a divisive ‘us versus them’ framing to a topic, and then moments later, seeing the Observatory analyse this language, quantitatively placing it in context and allowing the wider public to learn from advanced quantitative tools,” he said.
“My hope is that this would bring accountability and strengthen our public understanding of democratic principles.”
Ethical challenges and the path ahead
Despite its promise, the use of AI in monitoring democracy isn’t without risk. Chief among these is ensuring that AI tools are used transparently and responsibly.
“The government should never be tempted to use AI tools for mass surveillance,” he said.
The team’s research has also highlighted potential biases in AI models.
“We can quantify these biases and choose tools and methods which largely mitigate these problems, but we should always be aware of them,” he said.
The team is actively seeking partnerships to help create an Observatory and exploring other applications of AI to analyse discourse at scale.
“The aim of this research, for now, is to advise the government on the tools that exist, and are being created as we speak, which it can harness in its attempts to cultivate a resilient democracy,” he said.
“There is a huge appetite for tracking narratives across a range of topics, so we’re keen to keep pushing into this research area, developing and scientifically evaluating these tools so they have the most helpful impact for social good.”
Read the full report: Tracking Public Narratives of Democratic Resilience at Scale