Left unchecked, the technique, which weaponizes emotional data for political gain, could erode the foundations of a fair and informed society.

Aram Sinnreich, Chair of Communication Studies at American University • Jesse Gilbert, former founding Chair of the Media Technology department at Woodbury University.

One of the foundational concepts in modern democracies is what’s usually referred to as the marketplace of ideas, a concept commonly traced to the political philosopher John Stuart Mill’s 1859 treatise On Liberty, though its roots stretch back at least another two centuries. The basic idea is simple: In a democratic society, everyone should share their ideas in the public sphere, and then, through reasoned debate, the people of a country may decide which ideas are best and how to put them into action, such as by passing new laws. This premise is a large part of the reason that constitutional democracies are built around freedom of speech and a free press — principles enshrined, for instance, in the First Amendment to the U.S. Constitution.

Like so many other political ideals, the marketplace of ideas has proved more challenging in practice than in theory. For one thing, there has never been a public sphere that was actually representative of its general populace. Enfranchisement for women and racial minorities in the United States took centuries to codify, and these citizens are still disproportionately excluded from participating in elections by a variety of political mechanisms. Media ownership and employment also skew disproportionately male and white, meaning that the voices of women and people of color are less likely to be heard. And even for people who overcome the many obstacles to entering the public sphere, entry doesn’t guarantee equal participation; as a quick scroll through your social media feed may remind you, not all voices are valued equally.

Above and beyond the challenges of entrenched racism and sexism, the marketplace of ideas has another major problem: Most political speech isn’t exactly what you’d call reasoned debate. There’s nothing new about this observation; 2,400 years ago, the Greek philosopher Aristotle argued that logos (reasoned argumentation) is only one element of political rhetoric, matched in importance by ethos (trustworthiness) and pathos (emotional resonance). But in the 21st century, thanks to the secret life of data, pathos has become datafied, and therefore weaponized, at a hitherto unimaginable scale. And this doesn’t leave us much room for logos, spelling even more trouble for democracy.

An excellent — and alarming — example of the weaponization of emotional data is a relatively new technique called neurotargeting. You may have heard this term in connection with the firm Cambridge Analytica (CA), which briefly dominated headlines in 2018 after its role in the 2016 U.S. presidential election and the UK’s Brexit vote came to light. To better understand neurotargeting and its ongoing threats to democracy, we spoke with one of the foremost experts on the subject: Emma Briant, a journalism professor at Monash University and a leading scholar of propaganda studies.

Neurotargeting, in its simplest form, is the strategic use of large datasets to craft and deliver a message intended to sideline the recipient’s focus on logos and ethos and appeal directly to the pathos at their emotional core. Neurotargeting is prized by political campaigns, marketers, and others in the business of persuasion because they understand, from centuries of experience, that provoking strong emotional responses is one of the most reliable ways to get people to change their behavior. As Briant explained, modern neurotargeting techniques can be traced back to experiments undertaken by U.S. intelligence agencies in the early years of the 21st century that used functional magnetic resonance imaging (fMRI) machines to examine the brains of subjects as they watched both terrorist propaganda and American counterpropaganda. One of the commercial contractors working on these government experiments was Strategic Communication Laboratories, or the SCL Group, the parent company of CA.

A decade later, building on these insights, CA was the leader in a burgeoning field of political campaign consultancies that used neurotargeting to identify emotionally vulnerable voters in democracies around the globe and influence their political participation through specially crafted messaging. While the company was specifically aligned with right-wing political movements in the United States and the United Kingdom, it had a more mercenary approach elsewhere, selling its services to the highest bidder seeking to win an election. Its efforts to help Trump win the 2016 U.S. presidential election offer an illuminating glimpse into how this process worked.

As Briant has documented, one of the major sources of data used to help the Trump campaign came from a “personality test” fielded via Facebook by a Cambridge University professor working on behalf of CA, who ostensibly collected the responses for scholarly research purposes only. CA took advantage of Facebook’s lax protections of consumer data and ended up harvesting information from not only the hundreds of thousands of people who opted into the survey, but also an additional 87 million of their connections on the platform, without the knowledge or consent of those affected. At the same time, CA partnered with a company called Gloo to build and market an app that purported to help churches maintain ongoing relationships with their congregants, including by offering online counseling services. According to Briant’s research, this app was also exploited by CA to collect data about congregants’ emotional states for “political campaigns for political purposes.” In other words, the company relied heavily on unethical and deceptive tactics to collect much of its core data.

Once CA had compiled data related to the emotional states of countless millions of Americans, it subjected those data to analysis using a psychological model called OCEAN, an acronym for the five personality traits it measures: openness, conscientiousness, extraversion, agreeableness, and neuroticism. That last trait was the key. As Briant explained, “If you want to target people with conspiracy theories, and you want to suppress the vote, to build apathy or potentially drive people to violence, then knowing whether they are neurotic or not may well be useful to you.”
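To make the mechanics concrete, here is a minimal sketch of how survey answers can be rolled up into OCEAN trait scores. The item texts, trait assignments, reverse-keying, and 1-5 Likert scale below are illustrative assumptions, not CA’s actual instrument:

```python
# Minimal sketch of OCEAN trait scoring from Likert survey answers.
# Items, trait mapping, and scale are invented for illustration.

# Each item maps to (trait, reverse_keyed); reverse-keyed items are
# scored as (6 - answer) so higher always means "more of" the trait.
ITEMS = {
    "I worry about things":          ("neuroticism", False),
    "I am relaxed most of the time": ("neuroticism", True),
    "I am the life of the party":    ("extraversion", False),
    "I keep in the background":      ("extraversion", True),
}

def score_ocean(answers):
    """answers: dict of item text -> Likert response (1-5).
    Returns the average score per trait on the same 1-5 scale."""
    totals, counts = {}, {}
    for item, value in answers.items():
        trait, reverse = ITEMS[item]
        scored = 6 - value if reverse else value
        totals[trait] = totals.get(trait, 0) + scored
        counts[trait] = counts.get(trait, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

scores = score_ocean({
    "I worry about things": 5,
    "I am relaxed most of the time": 1,
    "I am the life of the party": 2,
    "I keep in the background": 4,
})
# A campaign could then flag respondents whose neuroticism score
# exceeds some threshold for fear-based messaging.
high_neuroticism = scores["neuroticism"] >= 4.0
```

The point of the sketch is how little machinery is needed: once answers are linked to identities, a few arithmetic steps turn a “personality quiz” into a targeting list.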

CA then drew on its data-sharing relationship with the right-wing disinformation site Breitbart, and on partnerships it developed with other media outlets, to experiment with various fear-inducing political messages targeted at people with established neurotic personalities — all, as Briant detailed, to advance support for Trump. Toward this end, CA made use of a well-known marketing tool called A/B testing, a technique that compares the success rates of different pilot versions of a message to see which is measurably more persuasive.
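The A/B testing logic itself is simple statistics. The sketch below, with invented impression and click counts, uses a standard two-proportion z-test to decide whether one message variant measurably outperforms another:

```python
# Minimal sketch of A/B testing two message variants by click-through
# rate. The counts are invented; the test is a two-proportion z-test.
from math import sqrt, erf

def two_proportion_z(clicks_a, shown_a, clicks_b, shown_b):
    """Return (z, two-sided p-value) for H0: both variants convert equally."""
    p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
    pooled = (clicks_a + clicks_b) / (shown_a + shown_b)
    se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 120 clicks on 2,000 impressions; variant B: 75 on 2,000.
z, p = two_proportion_z(120, 2000, 75, 2000)
winner = "A" if z > 0 else "B"   # here A wins, and p is well below 0.05
```

Run at scale across thousands of message variants and audience segments, the same test tells a campaign not just which ad works, but which fear works on whom.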

Armed with these carefully tailored ads and a master list of neurotic voters in the United States, CA then set out to change voters’ behaviors depending on their political beliefs — getting them to the polls, inviting them to live political events and protests, convincing them not to vote, or encouraging them to share similar messages with their networks. As Briant explained, not only did CA disseminate these inflammatory and misleading messages to the original survey participants on Facebook (and millions of “lookalike” Facebook users, based on data from the company’s custom advertising platform), it also targeted these voters by “coordinating a campaign across media” including digital television and radio ads, and even by enlisting social media influencers to amplify the messaging calculated to instill fear in neurotic listeners. From the point of view of millions of targeted voters, their entire media spheres would have been inundated with overlapping and seemingly well-corroborated disinformation confirming their worst paranoid suspicions about evil plots that only a Trump victory could eradicate.
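The “lookalike” step can likewise be sketched in a few lines: given a seed audience of known responders, score everyone else by the similarity of their behavioral feature vectors and keep the closest matches. The feature values and threshold here are invented for illustration, not a description of Facebook’s actual system:

```python
# Minimal sketch of lookalike-audience expansion via cosine similarity.
# Feature vectors (e.g., interest/engagement signals) are invented.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Seed audience: users already known (e.g., from survey data) to respond.
seed = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.2]]

# Wider platform population with the same feature dimensions.
population = {
    "user_a": [0.95, 0.9, 0.15],   # resembles the seed
    "user_b": [0.1, 0.2, 1.0],     # does not
}

def lookalikes(seed, population, threshold=0.9):
    """Return users whose average similarity to the seed exceeds threshold."""
    matches = []
    for user, features in population.items():
        avg = sum(cosine(features, s) for s in seed) / len(seed)
        if avg >= threshold:
            matches.append(user)
    return matches

matches = lookalikes(seed, population)   # ["user_a"]
```

This is how a few hundred thousand survey-takers can be leveraged into an audience of millions: the seed defines the profile, and similarity search finds everyone who resembles it.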

Although CA officially shut its doors in 2018 following the public scandals about its unethical use of Facebook data, parent company SCL and neurotargeting are still thriving. As Briant told us, “Cambridge Analytica isn’t gone; it’s just fractured, and [broken into] new companies. And, you know, people continue. What happens is, just because these people have been exposed, it then becomes harder to see what they’re doing.” If anything, she told us, former CA employees and other, similar companies have expanded their operations in the years since 2018, to the point where “our entire information world” has become “the battlefield.”

Unfortunately, Briant told us, regulators and democracy watchdogs don’t seem to have learned their lesson from the CA scandal. “All the focus is about the Russians who are going to ‘get us,’” she said, referring to one of the principal state sponsors of pro-Trump disinformation, but “nobody’s really looking at these firms and the experiments that they’re doing, and how that then interacts with the platforms” with which we share our personal data daily.

Unless someone does start keeping track and cracking down, Briant warned, the CA scandal will come to seem like merely the precursor to a wave of data abuse that threatens to destroy the foundations of democratic society. In particular, she sees a dangerous trend of both information warfare and military action being delegated to unaccountable, black-box algorithms, to the point where, in her words, “you no longer have human control in the process of war.” And because there is currently no equivalent to the Geneva Conventions for the use of AI in international conflict, it will be challenging to hold algorithms accountable for their actions via international tribunals like the International Court of Justice or the International Criminal Court in The Hague.

Even researching and reporting on algorithm-driven campaigns and conflicts — a vital function of scholarship and journalism — will become nearly impossible, according to Briant. “How do you report on a campaign that you cannot see, that nobody has controlled, and nobody’s making the decisions about, and you don’t have access to any of the platforms?” she asked. “What’s going to accompany that is a closing down of transparency … I think we’re at real risk of losing democracy itself as a result of this shift.”

Briant’s warning about the future of algorithmically automated warfare (both conventional and informational) is chilling and well-founded. Yet this is only one of many ways in which the secret life of data may further erode democratic norms and institutions. We can never be sure what the future holds, especially given the high degree of uncertainty associated with planetary crises like climate change. But there is compelling reason to believe that, in the near future, the acceleration of digital surveillance; the geometrically growing influence of AI, machine learning, and predictive algorithms; the lack of strong national and international regulation of data industries; and the significant political, military, and commercial competitive advantages associated with maximal exploitation of data will add up to a perfect storm that shakes democratic society to its foundations.

The most likely near-term scenario is the melding of neurotargeting and generative AI. Imagine a relaunch of the Cambridge Analytica campaign from 2016, but featuring custom-generated, fear-inducing disinformation targeted to individual users or user groups in place of A/B tested messaging. It’s not merely a possibility; it’s almost certainly here already, and its effects on the outcome of the U.S. presidential election won’t be fully understood until we’re well into the next presidential term.

Yet we can work together to prevent its most dire consequences: by taking care with which social media posts we like and reshare, by doing the extra work to check the provenance of the videos and images we’re fed, and by holding wrongdoers publicly accountable when they’re caught seeding AI-generated disinformation. It’s not just a dirty trick; it’s an assault on the very foundations of democracy. If we’re going to successfully defend ourselves from this coordinated attack, we’ll need to reach across political and social divides to work in our common interest, and each of us will need to do our part.

  • C126@sh.itjust.works · 6 months ago

    The marketplace of ideas is a flawed concept and always has been. Economists with years of specialized training and decades of experience can’t agree on the best course of action, so how is it reasonable to expect Joe the plumber to make an informed choice by watching an hour-long debate?

  • hark@lemmy.world · 6 months ago (edited)

    So true, things were much better when we had three channels on the TV.

    edit: In case it wasn’t clear, I was being sarcastic. Individualized propaganda isn’t any more dangerous to democracy than completely controlled broadcasts. In fact, completely controlled broadcasts are the more dangerous of the two, since it’s easier to control the message. It’s not killing democracy, because we never truly had it.