Digital Methods Summer School 2021 | 5-16 July 2021 | Online via Zoom or in-person (as circumstances allow) | New Media & Digital Culture | Call for participation.

This year’s Summer School has as its theme the so-called ‘faking’ and detecting of inauthentic users, metrics and content on social media. The uptick in attention to the study of the fake online could be attributed in the first instance to the ‘fake news crisis’ of 2016, when it was found that so-called fake news outperformed mainstream news on Facebook in the run-up to that year’s U.S. presidential elections. That finding also set in motion the subsequent struggle over the occupation of the term, which shifted from denoting a type of news originating from imposter media organisations or other dubious sources to serving as a ‘populist’ charge against mainstream and elite media, one that seeks to delegitimate sources found publishing inconvenient or displeasing stories.

In the study of the phenomenon there have been calls to cease using the term ‘fake news’ altogether, as well as a variety of classification strategies. Both the expansion and the contraction of the term may be seen in its reconceptualisation by scholars as well as by the platforms themselves. The definitional evolution is embodied in such phrasings as ‘junk news’ and ‘problematic information’, which are broader in their classification, whilst the platforms appear to prefer narrower terms such as ‘false’ (Facebook).

On the back-end the platform companies also develop responses to these activities. They would like to automate as well as outsource the detection and policing of such content, be it through low-wage content moderators, (volunteer) fact-checking outfits or user-centred collaborative filtering such as Twitter’s ‘birdwatchers’, an initiative said (on the basis of qualitative interviews) to be born of societal distaste for a central decision-making authority. The platforms also take major decisions to label content by world leaders (and indeed have world leader content policies), which subsequently land platform governance and decision-making in the spotlight.

More broadly there has been a rise in the study of ‘computational propaganda’ and ‘artificial amplification’, which the platforms refer to as ‘inauthentic behaviour’. These may take the form of bots or trolls; they may be ‘coordinated’ by ‘troll armies’, as outlined in Facebook’s regular ‘coordinated inauthentic behaviour’ reports. As its head of security policy puts it, Facebook defines the phenomenon (in a roomy and plainspoken manner) as ‘people or pages working together to mislead others about who they are or what they are doing’. Occasionally data sets become available (released by Twitter or compiled by other researchers) that purport to be collections of tweets by these inauthentic, coordinated campaigners, whereupon scholars (among other efforts) seek to make sense of which signals can be employed to detect them; one simple such heuristic is sketched below.
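
One commonly cited signal in such analyses is ‘copypasta’: near-identical texts posted by many distinct accounts within a short time span. The sketch below illustrates that heuristic only; the field names, sample tweets, thresholds and window size are invented for the example and do not reflect any platform’s or researcher’s actual method.

```python
from collections import defaultdict
from datetime import datetime
import re

# Illustrative tweet records; the fields "user", "text" and "time" are assumptions.
tweets = [
    {"user": "acct_a", "text": "Vote NOW for candidate X! #election", "time": "2021-07-05T10:00:00"},
    {"user": "acct_b", "text": "vote now for candidate x #election",  "time": "2021-07-05T10:01:30"},
    {"user": "acct_c", "text": "Vote now for candidate X!! #election", "time": "2021-07-05T10:02:10"},
    {"user": "acct_d", "text": "Lovely weather in Amsterdam today",    "time": "2021-07-06T09:00:00"},
]

def normalise(text):
    """Lowercase and strip punctuation so near-identical copies group together."""
    return re.sub(r"[^a-z0-9# ]+", "", text.lower()).strip()

def flag_copypasta(tweets, min_accounts=3, window_seconds=600):
    """Flag normalised texts posted by at least `min_accounts` distinct users
    within `window_seconds` of one another -- a rough co-posting signal."""
    groups = defaultdict(list)
    for t in tweets:
        groups[normalise(t["text"])].append(t)
    flagged = []
    for text, group in groups.items():
        users = {t["user"] for t in group}
        times = sorted(datetime.fromisoformat(t["time"]) for t in group)
        span = (times[-1] - times[0]).total_seconds()
        if len(users) >= min_accounts and span <= window_seconds:
            flagged.append((text, sorted(users), span))
    return flagged

for text, users, span in flag_copypasta(tweets):
    print(f"Possible coordination: {len(users)} accounts posted '{text}' within {span:.0f}s")
```

In practice such a signal is only a starting point for qualitative inspection, since legitimate retweeting campaigns and hashtag events can produce similar patterns.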

Other types of individuals online have also caught the attention of the platforms as ‘dangerous’ (Facebook), and have been deplatformed, a somewhat drastic step that follows (repeated) violations of platform rules and, presumably, prior temporary suspensions. ‘Demonetisation’ is also among the platforms’ repertoire of actions, should these individuals, such as extreme internet celebrities, be turning vitriol into revenue, though there is also the issue of which advertisers attach themselves (knowingly or not) to such content. Moreover, there are questions about why certain channels have been demonetised for being ‘extremist’. Others ask, is ‘counter-speech’ an alternative to counter-action?

On the interface, where the metrics are concerned, there may be follower factories behind high follower and like counts. The marketing industry dedicated to social listening, as well as computational researchers, have arrived at a series of rules of thumb and signal-processing techniques that aid in the flagging or detection of the inauthentic. Just as sudden rises in follower counts might indicate bought followers, a sudden decline suggests a platform ‘purge’ of them (a simple version of this rule of thumb is sketched below). Perhaps more expensive followers gradually populate an account, making it appear natural. Indeed, there is the question of which kinds of (purchased) followers are ‘good enough’ to count and be counted. What is the minimum amount of grooming? Can it be automated, or is there always some human touch? Finally, there is a hierarchy in the industry, where Instagram followers are the most sought after, but ‘influencers’ (who market wares there) are often contractually bound to promise that they have not ‘participated in comment pods (group "liking" pacts), undertaken botting (automated interactions), or purchased fake followers’.
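
As an illustration of such a rule of thumb (and not of any particular vendor’s method), the sketch below flags days on which a follower count jumps or drops sharply relative to a rolling median of the preceding days; the sample series and all thresholds are invented for the example.

```python
from statistics import median

# Invented daily follower counts; the jump on day 6 and the drop on day 11
# stand in for a bulk purchase and a subsequent platform "purge" respectively.
followers = [1000, 1010, 1025, 1030, 1040, 1055, 5055, 5070, 5080, 5090, 5100, 2100, 2110]

def flag_anomalies(series, window=5, threshold=0.25):
    """Flag day-over-day changes larger than `threshold` (here 25%) of the
    rolling median of the preceding `window` days -- a crude spike/purge detector."""
    flags = []
    for i in range(1, len(series)):
        baseline = median(series[max(0, i - window):i])
        change = series[i] - series[i - 1]
        if abs(change) > threshold * baseline:
            flags.append((i, change, "possible purchase" if change > 0 else "possible purge"))
    return flags

for day, change, label in flag_anomalies(followers):
    print(f"Day {day}: change of {change:+d} followers ({label})")
```

The thresholds here are arbitrary; in practice such detectors are tuned per platform and account size, and the ‘more expensive’, gradually delivered followers mentioned above are designed precisely to stay below them.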

Organisers: Richard Rogers, Guillen Torres and Esther Weltevrede, Media Studies, University of Amsterdam. Application information at https://www.digitalmethods.net