Democracy in a Post-Truth Information Age
David Clarke, FRSA
Presentation to Clergy Consultation at St. George’s House, Windsor Castle, 10th July 2018
Links to PDF with references and slides follow at the end of this post.
I would like to open by thanking the Programme Director Gary McKeone for inviting me to participate in this St. George’s House Clergy Consultation. I work in the field of information science and technology. Most of the people I meet at events are from business or technical disciplines, so it is great to have this opportunity to escape the confines of my ‘bubble,’ and to join in your discussions on how you speak about God in the face of contemporary world issues.
For the last year and a half I have been working with St. George’s House on one of the challenges facing contemporary society, and I look forward to learning your thoughts about it.
I was briefed that your preparatory reading for this segment of the programme was I Samuel 8:1-18 and Plato’s Republic Book VIII.
In the first reading we learned how Samuel appointed his sons judges over Israel, but due to their corruption the elders of Israel petitioned Samuel to appoint a king who would provide judgement like that of other nations. When Samuel prayed to God, ‘The Lord said unto Samuel, Hearken unto the voice of the people in all that they say unto thee…’. In verses 11-18 we read how Samuel warns the elders that the monarchy they ask for would come with onerous conditions, including military conscription, taxation and the risk of tyranny. In this reading we encounter the idea of the will and the voice of the people, as well as governance by monarchy.
In Plato’s Republic Book VIII we learn about four other forms of governance: timocracy, oligarchy, democracy and tyranny. We may take note that in Plato’s usage timocracy denotes rule based on honour rather than the Aristotelian sense of rule based on wealth. Book VIII asserts that forms of governance have the tendency to degenerate: from timocracy to oligarchy and then to democracy, and finally from democracy to tyranny.
Contextually all four forms of governance discussed in Book VIII are contrasted with another form of governance espoused in the rest of The Republic, namely aristocracy. For Plato aristocracy meant rule by the best, by people who have attained virtue and excellence rather than inherited privilege. Book VIII is a robust criticism of democracy that describes how political leaders will abandon sound policies in the pursuit of popularity, and how the public will focus on superficial images rather than substantive issues.
This talk is about Democracy in a Post-Truth Information Age. We are going to review certain challenges facing society and specifically our democratic processes.
The Gospel of John 18:38 provides an apposite question with which to open our main theme.
Pilate therefore said unto him, Art thou a king then? Jesus answered, Thou sayest that I am a king. To this end was I born, and for this cause came I into the world, that I should bear witness unto the truth. Every one that is of the truth heareth my voice. Pilate saith unto him, What is truth? And when he had said this, he went out again unto the Jews, and saith unto them, I find in him no fault at all.
Pilate asked the question “What is truth?” but did not wait upon an answer. Was his rhetoric sceptical, cynical or mocking?
If we are to explore “post-truth” then we should think through Pilate’s question. What do people generally mean when they talk about truth? What does truth specifically mean to each of you? How shall you speak about truth with others?
Rene Magritte’s painting The Human Condition is one of several works in which Magritte alludes to the allegory of the cave described in Book VII of Plato’s The Republic. The painting and the allegory explore ‘the limits of knowledge and the power of illusion’.
We are going to explore a number of phenomena and issues which may be collected under the rubric of post-truth. These include fake news, electoral interference and the disinformation campaigns of foreign states, the denial of science, political and social polarisation, ideological extremism, filter bubbles, echo-chambers and the exploitation of private personal information for profit and political gain.
We are going to look at these issues through a technological as well as a sociological lens. How has the Information Age, or more specifically the Internet Age, changed our behaviours?
Let’s start by defining post-truth itself. In 2016 post-truth became Oxford Dictionaries’ Word of the Year. This in itself is telling testimony that society has issues with truth.
The Oxford definition is: ‘relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’.
So post-truth differentiates facts from opinions. Let’s define these two concepts. ‘A fact is a statement that is consistent with reality or can be proven with evidence. The usual test for a statement of fact is verifiability — that is, whether it can be demonstrated to correspond to experience.’
‘In general, an opinion is a judgment, viewpoint, or statement that is not conclusive. It may deal with subjective matters in which there is no conclusive finding.’ That I am standing before you giving this talk is an objectively verifiable fact. That this talk is interesting and worth listening to is an opinion that some of you may agree with and others may not.
Post-truth as a cultural phenomenon goes deeper than merely distinguishing opinion from fact. In his 2017 book Post-Truth Matthew D’Ancona describes it as ‘the infectious spread of pernicious relativism disguised as legitimate scepticism’. Addressing the Holberg Debate in Norway last December, Jonathan Heawood, CEO of the press regulatory body IMPRESS, discussed the distinction between cynicism and scepticism: ‘The new public sphere is in fact largely defined by a cynical attitude towards information. Sceptics ask questions, but they are prepared to listen to the answers.’
This distinction between scepticism and cynicism is important. Scepticism ‘is generally any questioning attitude or doubt towards knowledge or beliefs…’ whereas cynicism ‘is an attitude or state of mind characterized by a general distrust of others’ motives.’
D’Ancona and Heawood expose a troublesome characteristic of post-truth society. It isn’t just a culture in which some people believe that their opinions are better than those of others, it is a culture where people have stopped listening to the views and opinions of others.
Based on these definitions we can appreciate how the end result of a post-truth culture is social polarisation.
Is that your experience? I don’t want to believe that this characterises society today, but I am afraid there was plenty of evidence of it in the 2016 US election and EU referendum.
The EU Referendum and the US Presidential Election of 2016 shook me up. Not because I did or didn’t like the outcomes, on which matter I shall try my hardest not to comment, but because public discourse was all too often characterised by incredible and unsubstantiated claims, by intolerance and invective, and by the tactics of non-debate: dismissing the views of others while not being prepared to defend one’s own views with reasoned argument and supporting evidence.
Society was polarising around highly charged emotional positions. An epidemic of ‘fake news’ dominated the news – but fake news alone couldn’t explain the extreme polarisation.
It became evident to me that fake news was merely the visible tip of a disinformation iceberg. Hidden beneath the waters were larger and more insidious issues concerning data privacy and the selective promotion and filtering of information by search and social media platforms.
On 10th January 2017 former US President Barack Obama gave a stirring farewell address in Chicago in which he described with chilling precision the character of a post-truth society:
‘For too many of us, it’s become safer to retreat into our own bubbles, whether in our neighborhoods or college campuses or places of worship or our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions.
The rise of naked partisanship, increasing economic and regional stratification, the splintering of our media into a channel for every taste… we become so secure in our bubbles that we accept only information, whether true or not, that fits our opinions, instead of basing our opinions on the evidence that’s out there.’
If society is becoming more opinionated and consequently more polarised, then how did we get here? What is driving the change? Is it societal or technological factors, or are both in play?
The Internet democratised the world of information. It levelled the playing field between citizens and institutions, and it gave a voice to many previously marginalised individuals and communities.
The Internet also changed where people go to find news and general information. There has been a steady migration away from libraries and traditional sources toward content delivered via web search engines and social media.
As web search engines and social media became the primary ‘go-to’ source for news and general information, the public has become less conscious of the origin and provenance of information, who creates it and whether it is trustworthy.
A paradox of the Information Age is that while we have access to many more and diverse sources of information, it is getting harder to distinguish fact from opinion and truth from lies.
In 2016 we experienced an epidemic of fake news. Fake news itself became one of the most talked-about stories in the news. Some stories were as unbelievable as they were sensational, such as the claim that Hillary Clinton was running a paedophile ring from a Washington DC pizza parlour. Others appeared believable even though they did not withstand the test of fact checking.
Who was behind all these stories? And how did they propagate so rapidly?
The ease with which online content can be generated and the rising popularity of social media as a go-to source for news has enabled many more people to produce both genuine news and information, and also fake news and false information. The often-sensational nature of fake news stories acts as ‘clickbait’ making it possible for the producers of fake news to easily monetise their content through advertising.
It turns out that fake news stories and disinformation are being produced by people with very different motives and methods. Following are some illustrative examples.
Monetised Fake News: A large amount of fake news is generated purely for a money motive. If you create popular stories that people want to click on, then you can monetise that content through click-through advertising on search engines and social media platforms.
In 2016 teenagers in Macedonia figured out that by creating sensational stories about the US election they could generate millions of clicks and tens of thousands of dollars in advertising revenue.
These people had no political affiliation with particular US candidates or parties. When interviewed by CNN they said ‘it’s all about the money’. It just so happened that the stories most effective at generating clicks were pro-Trump.
Partisan Fake News: Commenting on the inauguration of the 45th US President, the then White House Press Secretary Sean Spicer said the inauguration ceremony attracted “the largest audience ever to witness an inauguration, period, both in person and around the globe”.
As you can see from the visual evidence of these Reuters photographs of the crowds attending the 2017 inauguration compared with the 2009 inauguration, Spicer’s claim is false in the extreme.
One would have thought a public retraction would have been in order, but instead the President’s advisor Kellyanne Conway defended Spicer’s claim, saying he was just providing ‘alternative facts’. This brazen disregard for facts and evidence characterises a second source of fake news, namely partisan political propaganda.
State-Actor Propaganda: A third source of fake news is state-actor propaganda – the efforts of one nation to influence the people of another nation. The US ODNI report Assessing Russian Activities and Intentions in Recent US Elections released in January 2017, as well as subsequent investigations by European intelligence agencies, have surfaced evidence of massive scale disinformation campaigns orchestrated by Russia to destabilise recent US and European elections.
Sometimes but not always stories originating from Russia favoured particular candidates or parties. What was common to all these disinformation campaigns was they aimed to fan the flames of polarisation, to destabilise by creating distrust and disharmony.
My last example concerns disinformation propagated by lobbyists and vested interests. Climate change is one of the gravest threats facing humankind, as well as all the other creatures inhabiting our planet. The scientific community have provided overwhelming evidence that carbon emissions from human activities are a significant causal factor, and more importantly, a factor that we have the power to control and mitigate.
Despite this knowledge, vested political and industrial interests continue to challenge efforts to mitigate carbon emissions, often by presenting pseudo-scientific arguments. For example, the website Friends of Science takes the position that humans are not responsible for global warming. The site is largely funded by the fossil fuel industry. Similar tactics were previously employed by the tobacco industry to spread doubt about the causal link between smoking and cancer.
Search engines and social media act as information filters. Filters have become essential as we sort and sift through vast amounts of information.
Search has also become personalised: filters now evaluate both what we look at and the person who is looking. A search for ‘movies’ won’t just retrieve the cinema listings, it will filter by what’s on at the closest cinema to where you are now. A search for products will remember and filter by all your past searches and purchases.
Over the past decade personalised search has spread from the world of online shopping to become an inherent design feature of all major search engines and social media websites.
What worked well for shopping becomes problematic when applied to how we search for factual information.
Here is an example: in 2010 two users searched Google at the same time for information about BP. One was an investment banker, one wasn’t. The banker saw only business and company information in the results and missed news about the Deepwater Horizon oil spill, which appeared in the results seen by the other searcher. Same question, different answers. Personalised search algorithms take the data they hold about each person and use it to filter what information that person sees in response to any query.
Personalised search can distort our view of reality, creating a personal filter bubble that reinforces our existing beliefs while limiting our exposure to new ideas and contrary viewpoints.
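The mechanism behind such a filter bubble can be illustrated with a toy sketch. The documents, user profiles and scoring below are entirely invented for illustration; no real search engine publishes its ranking algorithm:

```python
# Toy illustration of a personalisation filter: the same query
# returns differently ordered results depending on the user's profile.
# All data and scoring here are invented.

def personalised_rank(results, user_interests):
    """Re-rank results, boosting topics the user has shown interest in."""
    def score(doc):
        base = doc["relevance"]
        # The boost depends on the *user*, not the query.
        boost = 2.0 if doc["topic"] in user_interests else 0.0
        return base + boost
    return sorted(results, key=score, reverse=True)

# The same pool of documents about "BP"
results = [
    {"title": "BP share price and investor report", "topic": "finance", "relevance": 1.0},
    {"title": "BP Deepwater Horizon oil spill news", "topic": "news", "relevance": 1.2},
]

banker = personalised_rank(results, user_interests={"finance"})
other = personalised_rank(results, user_interests={"news"})

print(banker[0]["title"])  # the banker sees company information first
print(other[0]["title"])   # the other searcher sees the spill news first
```

Because the boost is driven by the user's profile rather than by the query, two people asking the same question receive different answers, which is precisely the bubble effect described above.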
The Filter Bubble promotes confirmation bias by shielding users from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles.
Author Eli Pariser described this phenomenon in his 2011 book The Filter Bubble: ‘Left to their own devices, personalisation filters serve up a kind of autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar… In the filter bubble, there’s less room for the chance encounters that bring insight and learning… the collision of ideas from different disciplines and cultures’.
Pariser wrote these words six years before Obama’s farewell speech. How prescient he was.
In the last 10 years the entirety of the Internet has evolved to become an information filtering system. All search and social media platforms have been re-engineered to do it, and now personalisation underpins how the Internet is monetised.
The general public love personalisation because we are all drowning in data, so filters save us time. The search and social media companies love personalisation because they found that when they feed people content similar to what they have already indicated they like, people spend more time online, click on more adverts and ultimately spend more money.
Personalised search operates imperceptibly and without our conscious consent. Its influence is effectively subliminal: we have little awareness of and even less control over the data that is collected about us and how it is used to filter our access to information.
Conversely, advertisers can pay to use this data to influence the information we retrieve. Political campaign managers in the USA and the UK described micro-targeted messaging via social media as decisive in recent elections.
The problem of using personal data in order to deliver micro-targeted political messaging came to the world’s attention earlier this year with the Cambridge Analytica scandal.
What made this a scandal was the fact that personal psychographic data, gathered by a seemingly innocent online ‘personality test’, was then sold without participants’ awareness or permission to a company that reused it for highly targeted political campaigning. The scandal brought down Cambridge Analytica and caused Facebook major embarrassment and a policy rethink.
The example on screen is taken from a video of a presentation by the former CEO of Cambridge Analytica back in 2016, before the data privacy scandal. Alexander Nix explains how his company acquired 4,000-5,000 points of data on every adult in the US and then used that data to deliver finely tuned messages, customised down to the level of individuals, in order to influence their opinions on political issues.
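To make the idea of micro-targeting concrete, here is a deliberately simplified sketch: one message variant per psychographic segment, selected voter by voter. The profiles and messages are invented for illustration and bear no relation to any real campaign system:

```python
# Toy sketch of micro-targeted messaging: the message a voter sees is
# selected by a profile attribute inferred from their data.
# Segments, messages and profiles are all invented.

MESSAGES = {
    "security_minded": "Candidate X will keep your family safe.",
    "economy_minded": "Candidate X will protect your job.",
}

def pick_message(profile):
    """Choose the message variant matching the voter's dominant concern."""
    return MESSAGES[profile["dominant_concern"]]

voters = [
    {"name": "A", "dominant_concern": "security_minded"},
    {"name": "B", "dominant_concern": "economy_minded"},
]

for v in voters:
    # Each voter receives a different pitch for the same candidate.
    print(v["name"], "->", pick_message(v))
```

The point of the sketch is that different voters see different, mutually invisible appeals for the same candidate, which is what distinguishes micro-targeting from broadcast advertising.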
Propaganda is persuasively presented information, whether true or false, that is intended to influence people’s beliefs and behaviour. Propaganda has negative connotations, but it is nevertheless a manifestation of free speech and democratic discourse.
Propaganda requires careful unpacking. Many people associate propaganda with lies but this is not an inherent quality of the word.
Last year I spent a week at a cultural heritage conference hosted by the Vatican. On a balmy June evening meandering the back streets south of The Spanish Steps I stumbled upon this street sign:
Via di Propaganda, which led me to a building signed Collegium Urbanum de Propaganda Fide. The original meaning of the word propaganda was propagation – in this case the propagation of the faith.
Today the word’s common meaning is ‘Information, especially of a biased or misleading nature, used to promote a political cause or point of view’. Tarnished though the modern meaning of the word propaganda may be, I feel compelled to defend it.
Propaganda has a history as old as human communication. Thucydides’ account of Pericles’ Funeral Oration at the end of the first year of the Peloponnesian War is a magnificent example of propaganda. It honoured the dead, it affirmed the virtues and identity of the Athenian people, and most importantly it exhorted the living to be resolute in fighting the enemy threatening Athens.
Churchill’s broadcasts in the Second World War have similar qualities. My mother lived through the blitz in Sheffield. She saw neighbouring houses in flames and neighbours killed, but what I remember most vividly is how she described being glued to the radio during Churchill’s broadcasts, and how Churchill’s reassuring resolve left her in no doubt that tyranny would fail and Britain would eventually prevail.
The essential character of propaganda is not to mislead but to persuade. Its primary mechanisms are selection and emphasis. When propaganda is done with good intentions it can draw people together and inspire hope. When it is done with evil intentions it can divide people, spreading doubt and fear.
As we think about ways to tackle the problems of misinformation and disinformation we must also take great care to protect freedom of speech and to avoid censorship.
Cyber-propaganda, however, presents new problems, such as ‘socialbots’, which impersonate humans and automatically generate thousands of artificial messages in support of or opposition to candidates and causes. Socialbots distort human democratic discourse and have influenced recent elections.
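One simple signal used in detecting socialbots is posting rate: humans rarely sustain dozens of posts per hour around the clock. The sketch below is a toy heuristic with invented account data, not a real detection system, which would combine many such signals:

```python
# Toy heuristic for flagging possible socialbots by posting rate.
# Account data and the threshold are invented for illustration.

def flag_possible_bots(accounts, max_posts_per_hour=30):
    """Return names of accounts posting faster than a human plausibly could."""
    flagged = []
    for name, posts, hours in accounts:
        if posts / hours > max_posts_per_hour:
            flagged.append(name)
    return flagged

accounts = [
    ("citizen_jane", 12, 24),      # 0.5 posts/hour: plausibly human
    ("freedom_bot_42", 2400, 24),  # 100 posts/hour: suspicious
]

print(flag_possible_bots(accounts))  # ['freedom_bot_42']
```

Real detection is an arms race precisely because bot authors learn to stay under such thresholds, forcing detectors to add further signals such as content similarity and coordination between accounts.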
The computer industry is experiencing an Artificial Intelligence arms race between people creating bots and people detecting and eliminating bots. A link is provided here to an informative report and video by the Association for Computing Machinery.
Also of interest is The Oxford Internet Institute, which is running a Computational Propaganda Project that studies how ‘social media bots are used to manipulate public opinion by amplifying or repressing political content, disinformation, hate speech, and junk news’.
Democracy relies on a well-informed public. In a post-truth society people’s access to objective and trustworthy information is compromised and civil society breaks down as people polarise within their bubbles of affiliation and confirmation bias.
While political advertising is a necessary part of democratic discourse, one of the means by which parties present themselves to voters, political advertising on the Internet is largely unregulated and through the mechanism of personalisation it has the potential to be far more influential than other media.
Spending on online political advertising is rising rapidly and on a path to overtake spending on broadcast media.
For Internet search and social media platforms political advertising is now a significant market sector.
Political campaign managers have described microtargeted advertising as the decisive game-changer in the 2016 US election and EU referendum.
Thus far we have reviewed a small selection of how post-truth issues are manifesting themselves online: artificially intelligent bots; search and social media platforms holding thousands of data points on every individual; personal data being used to manipulate public opinion and influence voter behaviour. I don’t think it is overdramatic to say the enlightenment aspirations of the Information Age have produced side effects more akin to a dystopian nightmare.
Thirty-four years ago, I happened to be having lunch at The Royal Society in London when the then President, Sir Andrew Huxley, joined me. I was an enthusiastic but naïve twenty-year-old. Given it was the portentous year of 1984, I decided that Orwellian dystopia might be a good conversation topic. We chatted, and Sir Andrew listened patiently and graciously before pausing to say, ‘of course when my brother wrote Brave New World he had a rather different idea of dystopia’. Ah… that Huxley. One can have too much of some good things, but humility is a virtue one can’t get enough of.
In Amusing Ourselves to Death the social critic Neil Postman contrasted the dystopian worlds of Orwell and Huxley as follows.
What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one.
Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egotism.
Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance.
… In 1984, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we fear will ruin us. Huxley feared that our desire will ruin us.
Big Brother may not have come to pass in the form of an authoritarian surveillance state, yet over the past few years we have witnessed the complete destruction of privacy. Our smartphones track all our communications and our every physical movement; through our searches and the content we read and post online, they track our thoughts, feelings and affiliations. A digital trace of every moment of every day.
Authoritarian regimes could never have dreamt of building a surveillance infrastructure on the scale or sophistication of that which has emerged through social media and smartphones.
Unscrupulous companies like Cambridge Analytica have found ways to harvest our private data and utilise this infrastructure for profit and political gain. A report on the BBC earlier this year stated that the US State Department now wants ‘to start collecting the social media history of everyone seeking a visa to enter the US’. We still don’t know the full extent of how our private data is being used or will be used in the future.
While Orwell gave us Big Brother, Huxley gave us Soma. In Brave New World soma is the pleasure-giving addictive drug used for escapist relief from life’s little challenges and disappointments. Ex-Facebook President Sean Parker has described how Facebook is deliberately designed to be addictive, and that getting ‘Likes’ literally produces dopamine in the brain. From a Guardian article by Olivia Solon:
[Parker] ‘explained that when Facebook was being developed the objective was: “How do we consume as much of your time and conscious attention as possible?” It was this mindset that led to the creation of features such as the “like” button that would give users “a little dopamine hit” to encourage them to upload more content. “It’s a social-validation feedback loop … exploiting a vulnerability in human psychology.”’
We have defined post-truth as a predilection for opinion over fact, and we have further characterised it as a disregard for debate and contrary opinion. Does this help us answer Pilate’s question What is truth? I don’t think it is as simple as saying truth equates to facts.
I have given numerous talks about post-truth in the UK, US and Asia. On every occasion at least one person in the audience will challenge the very premise that post-truth is an issue. Some say there are no arbiters of truth, that all facts are mutable and relative. Some have told me that to call any news ‘fake’ is elitist and hubristic.
I beg to differ. We may never be able categorically to prove that something is an absolute or immutable truth, but we can prove the falsity of claims when they simply do not correspond with the evidence.
‘The English word disinformation is a translation of the Russian dezinformatsiya, derived from the title of a KGB black propaganda department.’ But disinformation is not confined to the world of espionage. It is practiced by individuals, institutions, corporations, and governments alike. A shorter synonym with an Old English etymology is the word ‘lies’. Disinformation is simply lies – statements made with the intent to deceive. I find extreme scepticism and arguments that truth is elusive to be an unhelpful distraction if the issue at hand is a plethora of palpable lies.
Science aspires to truth and its method is constantly to review new data and to challenge its own assumptions. Post-truth culture and post-truth politics do not aspire to truth. Their methodology is to stay on message, never to self-challenge, and wherever possible to drown out opposing views.
The working definitions of truth and post-truth that I offer you are these: post-truth refers to ways of behaving that are closed to new or contrary information; whereas truth refers to ways of behaving that are open-minded, self-challenging and always seeking out new information.
The challenge facing society is to find new ways to nurture open-hearts and open-minds. To become tolerant, fact-loving, search savvy and truth seeking. To this end we need to hold our political leaders, businesses and institutions accountable to adopt truthfulness as a core value, to respect data privacy and in plain and simple terms to not deal in lies.
We opened this talk with Pilate’s question ‘What is truth?’ How might we end it?
Wistfully with TS Eliot: ‘Where is the wisdom we have lost in knowledge? / Where is the knowledge we have lost in information?’
Or perhaps in consideration of an increasingly polarised society we should invoke John Donne’s affirmation of the connectedness of the human condition: ‘No man is an island, entire of itself; / every man is a piece of the continent, / a part of the main.’
Possibly we should look to Edmund Burke to remind ourselves that this isn’t just someone else’s problem, we are all involved, and we can all make a difference: ‘The only thing necessary for the triumph of evil is for good men to do nothing.’
Despite painting such a dystopian vision of a post-truth information age, I remain an optimist. I believe in human progress and in the spirit of the enlightenment. There are things that are really broken in our information systems, there are vested interests who must be challenged and there are wrongs to be righted. But I would rather be working to fix a broken Internet than not have an Internet at all. The very same information infrastructure that is currently being exploited for profit and political manipulation can be used to support democratic discourse, to open minds and debate ideas.
In the context of post-truth culture and politics I would characterise the question of truth not as an epistemological problem, but as an ethical problem.
As an information scientist and software developer I tend to view post-truth issues and solutions from a technological perspective. As you continue with your discussions today you will bring entirely fresh perspectives to the debate. I look forward to hearing your thoughts and learning from your insights.
St. George’s House Consultation on Post-Truth
In January 2018 St. George’s House convened a consultation to discuss the challenge that post-truth poses to civil society and democracy. The consultation on Democracy in a Post-Truth Information Age drew together people from the World Wide Web Foundation, the Electoral Commission and the Electoral Reform Society, GCHQ and Counter Extremism, media companies and media regulators, information scientists, and a number of representatives of civil society organisations and think tanks including The RSA, Demos and Doteveryone. The background paper and reportage from the consultation are available online, with citation links included in the PDF version of this paper.