Does the microtargeting of political advertising on Facebook and Google reinforce the information filter bubble? What we learned from Russiagate, and what we can do about it (maybe).

The filter bubble phenomenon on the Internet has been known and discussed for some time. Less clear, until now, was the role that online advertising can play in amplifying it. In particular, the microtargeting features offered by Facebook Ads have drawn attention only in the last couple of years, and they are being discussed mostly because of the controversy that followed the last US presidential election.

The Facebook advertising platform allows advertisers to select extremely restricted audiences. Between 2016 and 2017, ProPublica demonstrated that an advertiser could target specific ethnic groups and even carve out highly specific categories of users, such as «Jew haters».

Such a possibility can be exploited to deliver polarizing, divisive or even hateful messages. These messages can be very effective, since they target an audience that is already ideologically and psychologically predisposed, thus reinforcing biases and prejudices the recipients already hold. In the same way, the Facebook Ads platform can be used to convey inaccurate, partial or outright false information. Similar results can be obtained by applying microtargeting techniques to e-mail marketing.
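To make the mechanism concrete, here is a minimal sketch of audience micro-selection. The user records and attribute names are invented for illustration: the snippet mimics the kind of stacked filters an ads interface exposes, not Facebook's actual API.

    # Hypothetical illustration of audience micro-selection: each added
    # condition shrinks the user base down to a very narrow segment.
    users = [
        {"id": 1, "region": "Texas", "age": 34, "interests": {"guns", "country music"}},
        {"id": 2, "region": "Ohio", "age": 52, "interests": {"gardening"}},
        {"id": 3, "region": "Texas", "age": 41, "interests": {"guns", "hunting"}},
    ]

    audience = [
        u for u in users
        if u["region"] == "Texas" and 30 <= u["age"] <= 45 and "guns" in u["interests"]
    ]
    print([u["id"] for u in audience])  # -> [1, 3]

Stacking enough conditions of this kind yields the "extremely restricted" audiences described above.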

Micromarketing and microtargeting

Mind you: the practice of microtargeting is not evil in itself. Nor should it be considered an absolute novelty. After all, one could say, it is merely the evolution of marketing techniques that have been in use for a long time. Segmenting customers and identifying ever narrower, more homogeneous groups to which diversified offers can be addressed is a well-established practice.

Still, things have changed in recent years with the explosion of big data. We now have increasingly powerful techniques to analyze and segment the target, exploiting the enormous amounts of data collected by social networks and by the self-tracking devices on our smartphones. It is a race, however, in which the winner is not whoever owns the best tools, but whoever controls the larger amount of data.

The Cambridge Analytica case

Today microtargeting is discussed mostly in relation to political advertising, especially in the United States. Microtargeting techniques were widely used, for example, during the 2016 presidential elections. The role of Cambridge Analytica in Donald Trump’s electoral campaign has come under particular fire. The British company, based in London, processes huge amounts of demographic, psychometric and behavioral data to segment voters. Trump paid it millions of dollars for its services.

It should also be noted that Stephen K. Bannon – Trump’s chief strategist until last August and former head of Breitbart News [1] – sat on the company’s board of directors. This circumstance may lead one to suspect that Cambridge Analytica’s activity was one piece of a broader information warfare operation, orchestrated by Russia to interfere in the US elections. Joshua Stowell describes this scheme in an article for Global Security Review.

Cambridge Analytica combines big data with OCEAN, a psychometric model that describes individuals along five personality traits: openness (openness to new experiences), conscientiousness (tendency toward organization and perfectionism), extroversion (disposition toward social relationships), agreeableness (collaborative attitude) and neuroticism (tendency to anxiety and conflict). Clusters of individuals are then defined on the basis of these traits.
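As an illustration of the general technique – not Cambridge Analytica’s actual pipeline, whose details are not public – here is a minimal sketch of clustering people by their OCEAN scores. The trait values and the cluster count are invented:

    # Each row is one person's (hypothetical) Big Five scores, normalized
    # to 0..1: [openness, conscientiousness, extroversion, agreeableness,
    # neuroticism]. K-means then groups similar personalities together.
    import numpy as np
    from sklearn.cluster import KMeans

    voters = np.array([
        [0.90, 0.30, 0.70, 0.50, 0.20],
        [0.20, 0.80, 0.30, 0.60, 0.70],
        [0.85, 0.35, 0.75, 0.45, 0.25],
        [0.25, 0.75, 0.35, 0.65, 0.80],
    ])

    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(voters)
    print(segments)  # e.g. [0 1 0 1]: each segment can receive differently framed messages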

Throughout 2017, Trump’s entourage sought to minimize the role of Cambridge Analytica in the president’s election campaign, especially after the embarrassing rumors gathered by the Daily Beast about possible contacts between the company and WikiLeaks founder Julian Assange. However, the Guardian, in an article dated 26 October 2017, recalled the words of Molly Schweickert, head of Cambridge Analytica’s digital operations, at a conference held in Germany in May 2017. On that occasion Schweickert claimed that her company’s models had shaped not only the daily choices of the election campaign, but also Trump’s movements across the United States. Schweickert’s keynote was recorded and is now available on YouTube:

Excessive worries?

On the other hand, there are those who believe the threat of online political microtargeting should not be overestimated. Although exposed to manipulative messages, the public is never totally immersed in a digital bubble: the sources that can influence it are varied and diversified, including more “traditional” ones like television. This remains true even in the United States. At the end of this post we offer a short bibliography for those who want to explore the topic further.

In general, it would be advisable to avoid any form of technological determinism. Instead of singling out the technology itself as the cause of a social phenomenon, responsibility should be sought among those who make use of it. Furthermore, even granting that architectures and algorithms exacerbate certain phenomena – such as populism and disinformation – it is also true that the problem looks more serious where the digital divide is wider. So the solution cannot be to abandon a technology, but to educate people in its responsible use.

Filter bubble and echo chamber

Trust is the glue of online social networks: without it, communication does not develop. Interactions on social networks, however, tend to reinforce convergence (“birds of a feather flock together”). Consequently, we are more likely to find agreement with those who think like us, and it is precisely such individuals that we tend to trust.

The breakthrough, so to speak, happens when the social dynamics of the networks are replaced by algorithms. Or at least, this is the hypothesis of some authoritative scholars of the phenomenon. According to them, it is precisely the algorithms that create closed information ecosystems, where coming across unwelcome content is relatively unlikely. Moreover, the algorithms polarize positions, because people with different orientations rarely have a chance to confront one another; and when such confrontations do occur, communication tends to degenerate into a clash.
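A toy simulation can make the hypothesis tangible. The following sketch is a bounded-confidence dynamic invented for illustration, not any platform’s real algorithm: users only ever interact with opinions close to their own, and the population drifts into separate clusters instead of converging.

    import random

    random.seed(0)
    # 50 users with opinions ranging from -1 (one pole) to +1 (the other)
    opinions = [random.uniform(-1, 1) for _ in range(50)]
    THRESHOLD = 0.3  # the "filter": users only see opinions this close to theirs

    for _ in range(2000):
        i, j = random.sample(range(len(opinions)), 2)
        if abs(opinions[i] - opinions[j]) < THRESHOLD:
            # exposure to a like-minded voice pulls the two users together
            mid = (opinions[i] + opinions[j]) / 2
            opinions[i] = (opinions[i] + mid) / 2
            opinions[j] = (opinions[j] + mid) / 2

    print(sorted(round(o, 2) for o in opinions))  # opinions collapse into distinct camps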

The phenomenon has long been known as the echo chamber. The expression designates closed communication systems in which messages bounce around without any possibility of escaping, and are continuously repeated. The concept of the echo chamber is in some way connected to that of the filter bubble, which is more fashionable today. The term filter bubble was coined by Eli Pariser, who used it for the first time in 2011 in an essay that became a classic: The Filter Bubble: What the Internet Is Hiding from You.

The idea is that the algorithms developed by companies such as Google and Facebook with the intent of improving the user experience have an undesirable side effect: they reduce the probability of being exposed to points of view that conflict with our expectations, and therefore isolate us within information bubbles.

The real extent of the bubble

As we said, without wishing to underestimate the phenomenon, it is still a matter of bringing it back to its true proportions and avoiding simplistic readings. A 2016 Pew Research Center survey, for example, suggests that the filter bubble effect in users’ online experience is much less evident than one might suppose. On the contrary, users report that they often encounter information they do not endorse, or even find offensive. Other research concludes that echo chambers also operate in traditional media, even more than in social networks.

Nonetheless, it is important to keep studying the phenomenon, not least because the situation is in constant evolution. The influence of social networks on our media diet has grown all over the world, and in the meantime the algorithms keep changing. Starting with Facebook’s.

News Feed’s EdgeRank put to the test

In the most popular social network in the world, every conversation tends to close itself inside a bubble. This is the product of the logic governing EdgeRank, the algorithm that determines what we see in the News Feed of our Facebook page. EdgeRank filters, selects, chooses: basically, it shows users only the content they are most likely to be interested in.

The technology involved is machine learning: the more content is published on Facebook, the better the algorithm becomes at understanding which of that content interests each user. In short, the algorithm learns from experience. In the video below, Karrie Karahalios, a researcher at Adobe Creative Labs, illustrates how algorithms work in social network feeds:

Even more interesting is the lecture by Uzma Hussain Barlaskar [2], entitled Impact of Machine Learning on your News Feed.
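The core idea can be sketched in a few lines. The following is a deliberately simplified model – with invented features and data, not Facebook’s actual News Feed code – of how a feed ranks candidate posts by predicted engagement:

    # Learn from past engagement which posts a user interacts with, then
    # rank new candidates by predicted engagement probability.
    from sklearn.linear_model import LogisticRegression

    # Features per (user, post) pair: [affinity with the author,
    # post recency, similarity to content the user engaged with before]
    past_pairs = [[0.9, 0.8, 0.9], [0.8, 0.6, 0.8], [0.1, 0.9, 0.2], [0.2, 0.4, 0.1]]
    engaged = [1, 1, 0, 0]  # 1 = the user interacted with the post
    model = LogisticRegression().fit(past_pairs, engaged)

    candidates = {"post_a": [0.85, 0.7, 0.9], "post_b": [0.15, 0.9, 0.1]}
    scores = {pid: model.predict_proba([x])[0][1] for pid, x in candidates.items()}
    print(sorted(scores, key=scores.get, reverse=True))  # posts like those already liked win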

But beware: the more content is in circulation, the less likely we are to encounter anything that diverges from our expectations. Or, to be more precise, from the expectations the News Feed algorithm attributes to us.

We are therefore faced with a paradox: the number of options available in theory is inversely proportional to the number of options that News Feed actually presents to us. The same goes for the content published every day by millions of users versus the content actually displayed on our page.

The important changes to the logic of News Feed, introduced at the beginning of this year, seem to accentuate the algorithm’s tendency to lock us inside a bubble in which encountering the unexpected is a rare event. Facebook intends to offer “guaranteed” content, coming from a small circle of friends and family. Bad news for publishers and brands, commented the New York Times. In truth, this trend had already been evident since June 2016, when it was announced in a post by Adam Mosseri [3] on the company’s official blog.

The official motivation is harmless. Facebook’s stated intention is to reduce the visibility of so-called “passive content”, that is, news and other editorial material, in favor of everything that “stimulates” interaction between users. This result is achieved by promoting exchanges between users who already know each other, or by exposing users to content that has already attracted reactions, comments and shares.

The role of Facebook Ads

But back to the specific role of Facebook Ads. Some users have made very unscrupulous use of the microtargeting techniques the platform offers. As we said, the Russians proved particularly skilled at microtargeting, in a probable attempt to influence the US election campaign, with all the controversy that followed.

The boil burst on September 6th. Alex Stamos, Chief Security Officer of Facebook, was forced to acknowledge the improper use of many advertising accounts operated from Russia. In his post, published in the Facebook Newsroom, Stamos had to clarify several points.

First of all, it is true that 470 accounts that proved to be fake («inauthentic») published – from June 2015 to May 2017 – about 3,000 ads, for a total investment of $100,000. The ads contained messages that were divisive, designed to radicalize the ideological distance between users (immigration, gun ownership, lesbian and gay communities, etc.).

These ads were planned using microtargeting techniques, that is, by carefully selecting the audience on a geographical basis. They are estimated to have been viewed by about ten million users overall.

The most disturbing fact is that the accounts appeared to be connected to each other. It has not been possible to prove that they are linked to the Kremlin, but according to reports gathered by the New York Times and the Washington Post, this political microtargeting operation was orchestrated by the Internet Research Agency, based in St. Petersburg.

Already in 2015, the New York Times had dedicated a documented investigation to the Internet Research Agency. More recently, the Economist has also covered the Russian troll factory (Inside the Internet Research Agency’s lie machine).

The headquarters of the Internet Research Agency, in St. Petersburg (photo: NYTimes)

In the same period, about 2,200 ads with political content were purchased from within Russian borders, for a total value of $50,000. These advertisements used publication methods that do not comply with Facebook’s rules. In addition, the content of the ads was amplified by troll accounts, in order to game EdgeRank.

470 suspicious pages

Obviously, the Facebook Ads accounts are not the only problem: organic traffic has also played its part. A document published by the analyst Jonathan Albright highlights the role played by simple status updates and the related interactions (likes, comments, shares) in the dissemination of material of suspicious origin. A role probably far more impactful than that played by the ads.

In particular, Albright surveyed 470 Russian Facebook pages and analyzed the 500 most recent posts of each. The posts of the six most active pages among the 470 analyzed were viewed 340 million times and generated 19.1 million interactions.

The six pages, which were controlled by the aforementioned Internet Research Agency, were suspended by Facebook:

  • Blacktivists
  • United Muslims of America
  • Being Patriotic
  • Heart of Texas
  • Secured Borders
  • LGBT United
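To give a concrete sense of the kind of tallying behind Albright’s figures, here is a hypothetical sketch of aggregating reach and interactions per page. The numbers and field names are invented; only the method is illustrated.

    # Sum views and interactions (likes + comments + shares) per page
    # over a table of scraped posts.
    import pandas as pd

    posts = pd.DataFrame([
        {"page": "Heart of Texas", "views": 120000, "likes": 900, "comments": 150, "shares": 300},
        {"page": "Heart of Texas", "views": 95000, "likes": 700, "comments": 90, "shares": 210},
        {"page": "Blacktivists", "views": 150000, "likes": 1200, "comments": 200, "shares": 450},
    ])

    posts["interactions"] = posts[["likes", "comments", "shares"]].sum(axis=1)
    print(posts.groupby("page")[["views", "interactions"]].sum())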

The analysis of the contents and the tone of the posts reveals some important facts.

First, most posts do not directly touch on the 2016 presidential elections. Furthermore, each page addresses a specific audience (black activists, Muslims, gays, patriots and veterans, etc.) with content likely to gain its trust [4].

The cases of Google and Twitter

On October 9th, 2017, Google also admitted that suspicious ads, worth approximately $100,000, had been purchased by Russian accounts on its search engine, on YouTube and on other sites affiliated with the DoubleClick network. Then it was Twitter’s turn, which over the past few months has blocked 201 accounts linked to the IRA – accounts also linked to the 470 Facebook pages analyzed by Albright. In addition, three Twitter accounts linked to Russia Today appear to have purchased ads for $274,000.

 

NOTES:

[1] A right-wing news and opinion website, very active in disseminating information originating in Russia
[2] Product Manager at Facebook
[3] Another Product Manager at Facebook
[4] Some examples of microtargeting: «There is a war going against black kids», published by Blacktivist; «Share if you believe Muslims have nothing to do with 9/11. Let’s see many people know the truth!», published by United Muslims of America; «At least 50,000 homeless veterans are starving dying in the streets, but liberals want to invite 620,000 refugees and settle them among us. We have to take care of our own citizens, and it must be the primary goal for our politicians!», published by Being Patriotic.

FOLLOW-UP READINGS:

Luca De Biase, La disinformazione online e quello che possiamo fare. Quattrociocchi, Pariser, Menczer, Fournier, Quelch, Rietveld, 22 August 2016.

Seth Flaxman, Sharad Goel, Justin M. Rao, Filter Bubbles, Echo Chambers, and Online News Consumption, “Public Opinion Quarterly”, 80, S1 (1 January 2016), 298–320.

Sean Illing, Cambridge Analytica, the shady data firm that might be a key Trump-Russia link, explained, “Vox”, 18 December 2017.

Tien T. Nguyen, Pik-Mai Hui, F. Maxwell Harper, Loren Terveen, Joseph A. Konstan, Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity, in Proceedings of the 23rd International Conference on World Wide Web (Seoul, 7-11 April 2014), ACM, New York, 677-686.

Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You, Viking, London – New York, 2011.

Eli Pariser, The Troubling Future of Internet Search, “The Futurist”, 45, 5 (Sep-Oct 2011), 6-8.

Diana Sanzano, Antonella Napoli, Mario Tirino, Molto rumore per nulla: post-verità, fake news e determinismo tecnologico, “Sociologia. Rivista Quadrimestrale di Scienze Storiche e Sociali”, 11, 1, 2017.

Frederik J. Zuiderveen Borgesius, Judith Möller, Sanne Kruikemeier, Ronan Ó Fathaigh, Kristina Irion, Tom Dobber, Balazs Bodo, Claes de Vreese, Online Political Microtargeting: Promises and Threats for Democracy, “Utrecht Law Review”, 14, 1, 2018.