Genealogies of online content identification


Special issue of Internet Histories: Digital Technology, Culture and Society

Deadline (abstracts): August 1, 2019

(guest editors: Maria Eriksson & Guillaume Heuguet)

In today’s digital landscape, cultural content such as texts, films, images, and recorded sounds is increasingly subjected to automatic (or semi-automatic) processes of identification and classification. On a daily basis, spam filters scan heaps of emails in order to separate legitimate from illegitimate messages,1 algorithms analyze years of user-uploaded video on YouTube in search of copyright violations,2 and software systems scrutinize millions of images on social media sites in order to detect sexually offensive content.3 To an increasing extent, content identification systems are also trained to distinguish “fake news” from “proper journalism” on news websites,4 and taught to recognize and filter violent or hateful content that circulates online.5

These examples reveal how machines and algorithmic systems are increasingly used to make complex judgements about cultural content. Indeed, it could be argued that the wide-ranging adoption of content identification tools is constructing new ontologies of culture and regimes of truth in the online domain. When put into action, content identification technologies are trusted to separate good from bad forms of communication and used to secure the value, authenticity, origin, and ownership of content. Such efforts are deeply embedded in constructions of knowledge, new forms of political governance, and not least global market transactions. Content identification tools now make up an essential part of the online data economy by protecting the interests of rights holders and advancing the mathematization, objectification, and commodification of cultural productions.

Parallel to their increased pervasiveness and influence, however, content identification systems have also been heavily contested. Debates regarding automatic content identification tools recently gained momentum due to the European Union’s decision to update its copyright laws. A newly adopted EU directive encourages all platform owners to implement automatic content filters in order to safeguard copyrights,6 and critics have argued that such measures risk seriously hampering freedom of speech and stifling cultural expression online.7 High-profile tech figures such as Tim Berners-Lee (the inventor of the World Wide Web) have even claimed that the widespread adoption of content filtering could effectively destroy the internet as we know it.8 Content identification systems, then, are not neutral devices but key sites where the moral, juridical, economic, and cultural implications of wide-ranging systems of online surveillance are currently negotiated and put to the test.

This special issue welcomes contributions that trace the lineage and genealogy of online content identification tools and explore how content identification systems enact cultural values. It also explores how content identification technologies reconfigure systems of knowledge and power in the online domain. We especially invite submissions that reflect on the ways in which content identification systems are deployed to domesticate and control online cultural content, establish new, data-driven infrastructural systems for the treatment of cultural data, and bring about changes in the activity and status of cultural workers and rights holders. Contributions that locate online content identification tools within a longer historical trajectory of identification technologies are also especially welcome, since digital content identification tools must be understood as continuations of analogue techniques for monitoring and measuring the qualities and identities of things.

We envision that contributors will be active in the fields of media history, software studies, media studies, media archaeology, social anthropology, science and technology studies, and related scientific domains. Topics of contributions may include, but are not limited to:

  • The historical and political implications of content identification tools for audio, video, images, and textual content such as machine learning systems and digital watermarking or fingerprinting tools
  • The genealogy of spam filters, fake news detection systems, and other strategies for keeping the internet “clean” and censoring/regulating the circulation and availability of online content
  • Comparative investigations of the technical workings of different methods for identifying content, including discussions on the challenges and potentials of indexing/identifying sound, images, texts and audiovisual content
  • Reviews of the scientific theories, political ideologies, and business logics that sustain and legitimize online systems of content identification
  • Reflections on historical and analogue techniques for identifying objects and commodities, such as paper watermarks and the use of signets and stamps
  • Issues of censorship related to online content identification and moderation and/or discussions regarding the ethical dilemmas and legal debates that surround content surveillance
  • Explorations of the implications of algorithmic judgements and measurements of identity, and reflections on the ways in which content identification tools redefine what it means to listen/see and transform how cultural objects are imagined and valued
  • Examinations of the relationship between human and algorithmic efforts to identify suspect content online and moderate information flows

Submissions

Abstracts of a maximum of 750 words should be emailed to Maria Eriksson (maria.c.eriksson@umu.se) and Guillaume Heuguet (guillaume.heuguet@sorbonne-nouvelle.fr) no later than 1 August 2019. Notification of acceptance to submit an article will be sent out by 1 September 2019. Authors of accepted abstracts are invited to submit an article by 1 February 2020. Final versions of articles should stay within a 6,000-word limit. Please note that acceptance of an abstract does not ensure final publication, as all articles must go through the journal’s usual review process.

Time schedule

  • 1 August 2019: due date for abstracts
  • 1 September 2019: notification of acceptance
  • 1 February 2020: accepted articles to be submitted for review
  • Feb-April 2020: review process and revisions

About the guest-editors

Guillaume Heuguet defended a dissertation in 2018 on music and media capitalism based on a longitudinal analysis of YouTube’s strategy and products, including its Content ID system (to be published by the French National Archives in 2019). He is currently an associate researcher at GRIPIC (Sorbonne Université) and Irmeccen (Sorbonne Nouvelle). He runs the music journal Audimat and has edited a forthcoming book entitled Anthology of Popular Music Studies in French (Philharmonie de Paris, 2019).

Maria Eriksson is a doctoral candidate in media studies at Umeå University, Sweden, who is currently a visiting scholar at the department of arts, media and philosophy at Basel University in Switzerland. She has a background in social anthropology, and her main research interests concern the politics of software and the role of algorithms in managing the logistics and distribution of cultural content online. She is one of the co-authors of the book Spotify Teardown: Inside the Black Box of Streaming Music (MIT Press, 2019) and has previously co-edited special issues in journals such as Culture Unbound.

Link to the online version of the call for papers: https://think.taylorandfrancis.com/internet-histories-genealogies-online-content-identification/?utm_source=CPB_think&utm_medium=cms&utm_campaign=JOD09539

More information on Internet Histories: Digital Technology, Culture and Society can be found at https://www.tandfonline.com/loi/rint20.

Notes

1 Brunton, Finn. Spam: A Shadow History of the Internet. Cambridge & London: MIT Press, 2013.

2 https://support.google.com/youtube/answer/2797370?hl=en

3 https://www.theverge.com/2018/12/3/18123752/tumblr-adult-content-porn-ban-date-explicit-changes-why-safe-mode

4 https://thenewstack.io/mit-algorithm-sniffs-out-sites-dedicated-to-fake-news/

5 https://www.gouvernement.fr/la-france-engage-une-experimentation-inedite-en-matiere-de-regulation-appliquee-aux-contenus-haineux and https://www.letelegramme.fr/france/internet-des-amendes-pour-les-plateformes-qui-laissent-des-contenus-haineux-21-02-2019-12213979.php

6 http://europa.eu/rapid/press-release_IP-16-3010_en.htm

7 https://www.ivir.nl/publicaties/download/Academics_Against_Press_Publishers_Right.pdf

8 https://www.eff.org/files/2018/06/13/article13letter.pdf
