The Encoding of Race through the Technologization of Security
In an era of unprecedented techno-scientific development, Artificial Intelligence (AI) systems are increasingly used by state governments for security purposes. The securitization of borders has undergone a remarkable transformation with the introduction of digital biometric identification systems, which aim to strengthen the identification of foreign nationals and to increase deportations of illegal migrants. On the one hand lies the possibility of overcoming the limits of human judgement and racial prejudice; on the other, the promise that the state can secure its borders through the power of automated algorithms. Together, these have paved the way for the increasing interdependence of state security and technology.
However, what this narrative obscures behind the veil of technological neutrality is the power structure that underlies the production and use of identification technologies, in which race plays a prominent role (Magnet, 2011). Biometrics, in fact, operate within a Western political-epistemological horizon, functioning as a power technology of racial ‘othering’ that is necessary for the very existence of the capitalist state. This essay will argue that AI and racism, far from being mutually exclusive, are inextricably linked through the encoding of a racialised, or ‘epidermal’, way of thinking in digital biometric systems of identification. Beneath this technological face lies a system rooted in systematic racial discrimination and subjugation. Race is thus to be understood as the third pole of the triad it forms with technology and security, leading to an understanding of the border as a site of what Aníbal Quijano calls the ‘coloniality of power’.
Firstly, the underlying epistemological assumptions that inform digital biometrics and their racialising logic will be presented, mainly through Browne’s concepts of ‘digital epidermalisation’ and ‘prototypical whiteness’. In this way, racism will be contextualised within the classificatory logic of biometric systems, which creates racialised identities based on physical attributes. Secondly, the case of biometrics at the EU border will be presented, highlighting how the increased use of biometrics for the racial profiling of immigrants exacerbates discriminatory practices already present in the European context, fuelled by the 9/11 attacks and the 2015 ‘migration crisis’. Lastly, Quijano’s notion of the ‘coloniality of power’ will elucidate and contextualise the functioning of biometrics within a broader socio-political framework, as a technology that reproduces racialised colonial relations of knowledge and subjugation. All of this can be conceptualised as the intensification of a modernist rationalising impetus: the idea that the totality of social reality can be captured and comprehended within a monolithic (Western) knowledge paradigm, based on the ordering of reality through fixed classifications and categorisations. This is an idea that, according to Quijano and several postcolonial thinkers, formed the basis of a hierarchical global power structure of domination.
Producing Race Out of the Body
Underpinning AI systems, which work through machine learning algorithms, is a precise vision of the world that informs how automated systems produce knowledge. Essential to these systems is their (human-driven) power of classification: the power AI has to define reality in distinct categories based on what is ‘fed’ into the machine. Machine learning algorithms process data, learning to recognise patterns and to categorise the world on the basis of initial data inputs that are always inherently incomplete. It is this classificatory logic that is inherently problematic. Every act of classification is socially and politically laden, dependent on a prior choice about how reality should be broken down in order to make it intelligible and manipulable (1). Yet every classification presents itself as absolute, reifying cultural and contextual identities, such as notions of gender, race, and sexuality, in quantified data sets. The result is the algorithmic production of a unified and new ‘truth’ about the world, people, and their identities. This is not to say that classification should be rejected altogether as an intrinsic form of domination; rather, the question should always concern its relationship with the system of power in which it is produced and operates, a system that generates exclusion and violence as a consequence of classification’s inevitably reductionist nature. The sketch below makes this logic concrete.
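The following minimal sketch, with entirely invented feature vectors and category labels, illustrates the point: a supervised classifier can only ever answer within the category scheme fixed at training time, a scheme that is a prior human decision rather than something discovered in the data.

```python
# A minimal sketch (hypothetical data, invented labels): a supervised
# classifier can only reproduce the category scheme chosen at training
# time; the scheme itself is a human decision made before any input.
import numpy as np

# Feature vectors hand-labelled with a fixed category scheme decided
# *before* the machine sees any input.
X_train = np.array([[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.7, 0.8]])
y_train = np.array(["category_A", "category_A", "category_B", "category_B"])

def nearest_centroid_predict(x, X, y):
    """Assign x to the category whose training centroid is closest."""
    labels = np.unique(y)
    centroids = np.array([X[y == label].mean(axis=0) for label in labels])
    return labels[np.argmin(np.linalg.norm(centroids - x, axis=1))]

# Even an ambiguous input is forced into one of the pre-given
# categories: there is no "none of the above".
print(nearest_centroid_predict(np.array([0.45, 0.5]), X_train, y_train))
```

Whatever input arrives, the machine returns one of the labels it was given; the reductionism is built into the design, not introduced by faulty use.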
This production of knowledge reveals racial biases in the case of biometric identification systems, used especially for security purposes at the border to detect criminals or unwanted immigrants on the basis of their racial profile. Digital biometrics function by classifying and identifying human beings according to their physical-biological attributes, mirroring historical pseudo-scientific racial theories and practices that posited a biological origin of race. The body is thus fragmented into its parts, analysed, encoded, and classified into distinct categories, to be sorted into databases and used to verify the individual’s identity at the border. Facial features, such as the spacing between the eyes and the dimensions of the nose and mouth, are used to classify individuals into distinct racial categories. In this sense, the biometric body functions as evidence of a racial ‘truth’ that is unquestionably attached to it. This is what Browne, drawing on Fanon’s concept of ‘epidermalisation’, calls “digital epidermalisation”: the imposed regimentation of the body into a predetermined racial category, which results in the alienation of individuals from their own subjectivity and self-definition (2). The Self is thereby made uncertain: if subjectivity is denied, ‘all is permitted’. While such measures may, on the one hand, be deemed necessary to check people crossing the border, on the other they often result in clear violations of migrants’ human rights, such as privacy, violations that affect migrants unequally depending on their origin.
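A simplified illustration of this mechanism, with hypothetical measurements and an invented category scheme (no real biometric system is reproduced here, and all names are illustrative), might look as follows: the face is reduced to a handful of geometric measurements and forced into the nearest pre-given category.

```python
# A minimal sketch (hypothetical measurements, invented categories) of
# "digital epidermalisation": the face is fragmented into geometric
# measurements and the resulting vector is assigned to one of a fixed
# set of predetermined categories, whatever the person's self-definition.
from dataclasses import dataclass

@dataclass
class FaceTemplate:
    eye_spacing: float   # normalised distance between the eyes
    nose_width: float    # normalised nose width
    mouth_width: float   # normalised mouth width

# Pre-defined "reference" templates: the classificatory grid is fixed
# in advance and entirely invented here for illustration.
REFERENCE = {
    "category_1": FaceTemplate(0.30, 0.20, 0.40),
    "category_2": FaceTemplate(0.35, 0.25, 0.45),
}

def epidermalise(face: FaceTemplate) -> str:
    """Assign the face to the nearest pre-given category (L2 distance)."""
    def dist(ref: FaceTemplate) -> float:
        return ((face.eye_spacing - ref.eye_spacing) ** 2
                + (face.nose_width - ref.nose_width) ** 2
                + (face.mouth_width - ref.mouth_width) ** 2) ** 0.5
    return min(REFERENCE, key=lambda k: dist(REFERENCE[k]))

print(epidermalise(FaceTemplate(0.32, 0.22, 0.41)))  # always *some* category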
Furthermore, Browne underlines the privileging of whiteness in biometric technologies, since the video cameras that feed images to facial scanning systems are optimised for lighter-skinned users. She speaks of “prototypical whiteness” to refer to the normalisation of white or lighter colours in identification technologies, which clusters darker colours at the end of the colour spectrum. As Browne puts it, “this prototypical whiteness is one facet of the cultural and technological logic that informs many instances of the practices of biometrics and the visual economy of recognition and verification” (3). A racial norm is thereby created and encoded in machine learning systems, one that simultaneously produces the non-classifiable or illegible, that which is incapable of conforming to the norm (4). Discrimination is therefore inevitable.
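The empirical signature of this norm can be shown with a toy calculation using entirely hypothetical figures: a system calibrated on lighter-skinned faces reveals its skew only when accuracy is disaggregated by group, while the aggregate figure conceals it.

```python
# A minimal sketch (entirely hypothetical numbers) of how "prototypical
# whiteness" surfaces empirically: error rates diverge across skin-tone
# groups, but the aggregate accuracy figure averages the skew away.
hypothetical_results = {
    # group: (successful recognitions, total attempts) -- invented data
    "lighter_skin": (980, 1000),
    "darker_skin": (890, 1000),
}

for group, (ok, total) in hypothetical_results.items():
    print(f"{group}: error rate {1 - ok / total:.1%}")

# The single headline number hides whom the "norm" was calibrated for.
overall_ok = sum(ok for ok, _ in hypothetical_results.values())
overall_total = sum(t for _, t in hypothetical_results.values())
print(f"aggregate: error rate {1 - overall_ok / overall_total:.1%}")
```

This is why claims of overall system accuracy say little about who bears the cost of its failures.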
Digital Biometrics at the EU Border
Given the racial assumptions upon which digital biometrics are based, their increasing use for border surveillance and identity checks cannot go uncriticised, raising questions about the role technology plays in the relation between state security and racial profiling. Over the last two decades, the European Union, through a series of policies, legal frameworks, and initiatives, has increasingly deployed digital biometric systems for the purpose of “combating identity fraud and increasing the number of deportations”. This resulted especially from the 9/11 terrorist attacks, which inaugurated a security narrative in which ‘terrorism’ and ‘migration’ became mutually implicated, leading to the militarisation of borders and the reduction of immigrants and refugees to threats to the nation-state’s sovereignty and identity. In this context, skin colour has increasingly become a proxy for immigration status, equating the ‘non-White’ with criminality and legitimising their exclusion. The criminalisation of African and Arab populations in particular has intensified; security concerns have led to the racial ‘othering’ of migrants, a process that Bunyan defined as a “classic case of institutionalised racism” (5). The racial discrimination these populations face was thrown into relief when, after the Russian-Ukrainian war erupted, the Temporary Protection Directive was implemented to ensure the welcoming and inclusion of Ukrainian migrants. No comparable measure was adopted in the case of the 2015 Syrian crisis.
Identification technologies thus operate in a socio-political environment in which racial and ethnic profiling is endemic. Despite the claim that biometrics would eliminate racial profiling through mechanical objectivity, they rest on assumptions that ethnically label individuals according to their phenotypic markers, exacerbating racial discrimination. The use of biometrics to identify bodies through ethnic-racial recognition is therefore far from neutral in its purposes, contributing to and intensifying the racial ‘othering’ of minority groups behind the veil of national security and technological efficiency. In the aftermath of the War on Terror, in Europe, every category of ‘third-country national’ attempting to cross the border had their biometric data captured and recorded, a process that expanded during the 2015 ‘migration crisis’. The objective was to achieve a one hundred per cent fingerprinting rate for the Eurodac database, to which biographic information and facial images were added. Biometric data was then used to verify the identities of foreign nationals by comparing it against a European interconnected system of recorded data, the Common Identity Repository (CIR).
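The verification step can be sketched schematically as follows, with invented templates, names, and an arbitrary threshold; the actual Eurodac and CIR matching systems are not reproduced here. A live sample is scored against every stored record, and a similarity above the threshold is treated as a confirmed identity.

```python
# A minimal sketch (invented templates, arbitrary threshold) of 1:N
# biometric identification: a live sample is compared against every
# stored template, and a similarity score above a fixed cut-off is
# reported as an identity "match".
from typing import Optional
import numpy as np

# Invented database of stored biometric templates keyed by record ID;
# real systems hold fingerprint/face templates at far higher dimension.
database = {
    "record_001": np.array([0.12, 0.54, 0.33, 0.91]),
    "record_002": np.array([0.81, 0.22, 0.47, 0.10]),
}

MATCH_THRESHOLD = 0.95  # arbitrary cut-off, chosen here for illustration

def identify(sample: np.ndarray) -> Optional[str]:
    """Return the best-matching record ID, or None if below threshold."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_id, best_score = max(
        ((rid, cosine(sample, tpl)) for rid, tpl in database.items()),
        key=lambda pair: pair[1],
    )
    return best_id if best_score >= MATCH_THRESHOLD else None

print(identify(np.array([0.11, 0.55, 0.30, 0.90])))  # -> record_001
```

Note that the threshold is a policy choice: lowering it multiplies false matches, raising it multiplies failures to identify, and either cost falls unevenly across the populations on which the system was calibrated, turning a probabilistic judgement into an administrative fact.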
This system of identity checking did not affect all individuals in the same way: minorities were disproportionately targeted by the police on the basis of their race, perceived nationality or ethnicity, or physical appearance (6). Migrants, asylum seekers, and other minority groups continue to be systematically identified through facial recognition technologies, with ethnicity providing the basis for police stops. In one instance, the Racial Justice Network in 2020 found that, in Europe, for every White European stopped and scanned per 10,000 people, 48 Arabic, 14 Black, 14 Asian, and almost 4 Chinese people were stopped and scanned, even though no real proof of criminality was found (7). This strengthens the immigrant-criminal equation within algorithmic calculations. In this context of disproportionate racial profiling, a nexus can be identified between the implementation of biometric technologies and the intensification of racial discrimination, one that falls above all on former colonial subjects.
Biometrics and the Coloniality of Power
Biometrics can therefore be contextualised within the legal and political efforts made by the EU to fortify its external borders, limiting the access of migrants and asylum seekers coming especially from the Global South and redrawing what Du Bois called the ‘global colour line’ (8). The use of biometric measurements to deduce and produce migrants’ identities from their physiognomic traits (or ‘racial nature’) has culminated in a transnational effort of racial classification and categorisation of human beings.
This classificatory impetus should be understood as the product of the ‘coloniality of power’, based upon the “‘racial’ social classification of the world under Eurocentred world power” (9). This makes it possible to contextualise both the epistemology and the operationalisation of digital biometric systems within a neo-colonial power structure in which race becomes an ordering dispositif of control and domination. The racial ordering of the world, as Quijano explains, is, on the one hand, the foundation upon which the notions of nation-state, citizenship, and democracy are built and, on the other, the product of the European paradigm of rational knowledge. Knowledge, in this paradigm, is understood within the subject-object dichotomy, which expresses the deeper dualism of reason-nature, where Europe or the West occupies the first pole, reducing all other cultures and populations to the status of objectified ‘others’. The building of an epistemic (racial) ‘Truth’ about the world’s populations goes hand in hand with their physical domination and with the maintenance of a global hierarchical structure of power. The Western monopoly on rationality results in the universalisation and de-politicisation of the knowledge produced about the objectified ‘other’, obscuring the power relations underlying it.
This paradigm of knowledge production can be encoded in digital biometric systems, which function precisely by classifying individuals into discrete categories, naturalising differences in a “machine Neoplatonism”, the idea that “machine learning has revealed a hidden mathematical truth that is beyond human questioning and verification” (10). The knowledge produced by machine learning algorithms is thus likened to a divine knowledge transcending human limits, keeping it in the realm of the unquestionable. What this narrative obscures is that machine learning systems work through data sets that are produced by humans and are themselves biased, since they can only ever offer a partial account of the social world, incapable of describing it fully in its multiplicity. As such, race is granted an unprecedented degree of naturalness and truth through its digital production, working as a discriminatory criterion between those who can access the sovereign Western state and those who cannot. Race and technology ultimately become modes of governance, maintaining a racial division both within European borders and between Europe and the Global South. Consequently, the Western state is the source of the laws, policies, and narratives that inform the functioning of machine learning algorithms as means of racially identifying human beings, producing and reinforcing existing global inequalities along the colour line.
Conclusion
This essay has argued that AI and racism are mutually interdependent, moving away from the assumption that the new technological frontier will finally overcome human fallibility and, in turn, free humanity from its history of cultural and political discrimination and violence. On the contrary, the kind of knowledge that informs the functioning of machine learning systems, proceeding through taxonomic representations of the world, has its roots in a modernity-rationality paradigm: a knowledge that for centuries has accompanied and enabled the Western colonial domination of the world.
The case of biometric identification at the border thus raises serious ethical concerns, which should not be limited to technical adjustments but should prompt a radical questioning of whether certain technologies, in certain contexts, should be built at all.
References
Amnesty International (2016) ‘Hotspot Italy: How EU’s Flagship Approach Leads to Violations of Refugee and Migrant Rights’, Amnesty International, https://www.statewatch.org/media/documents/news/2016/nov/ai-hotspot-Italy.pdf
Browne, S. (2009) ‘Digital Epidermalization: Race, Identity and Biometrics’, Critical Sociology, 36 (1), pp. 131-150.
Bunyan, T. (2018) ‘The Point of No Return’, Statewatch, July 2018, https://www.statewatch.org/media/documents/analyses/no-332-eu-interop-morphs-into-central-database-revised.pdf
Crawford, K. (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press: New Haven.
Du Bois, W. E. B. (2007) The Souls of Black Folk, Oxford University Press: Oxford.
Foucault, M. (2007) Security, Territory, Population: Lectures at the Collège de France, 1977-78, translated by G. Burchell, Palgrave Macmillan: New York, NY.
Gordon, L. R. (2004) ‘Is the Human a Teleological Suspension of Man? Phenomenological Exploration of Sylvia Wynter’s Fanonian and Biodicean Reflections’, http://web.ics.purdue.edu/~mmichau/
Human Rights Watch (2022) ‘Greece: New Biometrics Policing Program Undermines Rights. Risk of Illegal Racial Profiling and Other Abuses’, 18 January 2022, https://www.hrw.org/news/2022/01/18/greece-new-biometrics-policing-program-undermines-rights
Huysmans, J. (2006) The Politics of Insecurity: Fear, Migration and Asylum in the EU, Routledge: Oxfordshire.
Lockhart, J. W. (2023) ‘Because the Machine Can Discriminate: How Machine Learning Serves and Transforms Biological Explanations of Human Difference’, Big Data & Society, pp. 1-14, DOI: 10.1177/20539517231155060
Madianou, M. (2019) ‘The Biometric Assemblage: Surveillance, Experimentation, Profit, and the Measuring of Refugee Bodies’, Television & New Media, 20 (6), pp. 581-599.
Magnet, S. A. (2011) When Biometrics Fail: Gender, Race, and the Technology of Identity, Duke University Press: Durham, NC.
Quijano, A. (2007) ‘Coloniality and Modernity/Rationality’, Cultural Studies, 21 (2-3), pp. 168-178, DOI: 10.1080/09502380601164353
Racial Justice Network (2021) ‘Stop the Scan: Police Use of Mobile Fingerprinting Technology for Immigration Enforcement’, Racial Justice Network, 6 March 2021, https://racialjusticenetwork.co.uk/2021/06/03/police-scanning-report/
Smith, C. (2019) ‘“Authoritarian Neoliberalism” and the Australian Border-Industrial Complex’, Competition & Change, 23 (2), pp. 192-217, DOI: 10.1177/1024529418807074
Statewatch (2022) ‘Building the Biometric State: Police Power and Discrimination’, February 2022, statewatch.org