New Technologies, Old Discrimination

An Essay by Frederike Kaltheuner and Nele Obermüller

The utopia of cyber freedom is showing its darker side ever more clearly: the data that everyone generates every day is used in very different ways, depending, for example, on where you live and how much money you have at your disposal. Technological progress thus takes place on the backs of people who are marginalised by discrimination and a lack of privilege. In their essay, Frederike Kaltheuner and Nele Obermüller explain the damage that technologies can do.

In the imagination of the techno-utopians of the 1990s, the internet would create a bodiless space – a space that was both equalising and equally accessible to all. “We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth,” proclaimed John Perry Barlow in his 1996 manifesto “A Declaration of the Independence of Cyberspace”. Back in those early days of cyber-libertarian fantasy, much less was said about what would happen once you were online, or how exactly people from around the world with different economic means would get online in the first place. 

As of October 2019, more than half of the world’s population – a staggering 4.48 billion people – are online. For many of these people, access to online spaces was made possible by cheap smartphones that bring low-cost internet to emerging markets. But inexpensive technologies often entail hidden costs. Many cheap phones are shipped with poor security, and some even harvest people’s data by design and by default. For instance, in 2018 the “Wall Street Journal” reported that a popular smartphone sold in Myanmar and Cambodia, the Chinese-made Singtech P10, comes with a pre-loaded app that cannot be deleted and that sends the owner’s live location to an advertising firm in Taiwan. The hidden cost, therefore, is often access to people’s data.


We are living in a world where almost everything we do automatically generates data, whether we are going somewhere, meeting someone, purchasing something or simply wasting time. Even aspects of human life that were formerly unsurveyed and unquantified are now turned into data points that are collected, aggregated and collated. We are all data, as John Cheney-Lippold has said, but this ‘we’ is not a uniform entity; it is characterised by both marginalising and privileging differences. One way in which this manifests itself is that data exploitation is often baked into the infrastructures and technologies that are disproportionately sold to those who are more likely to be adversely affected by its abuse: poorer people, and those who are new to the internet. Another manifestation is how surveillance systems are designed and deployed. If we look back at the history of surveillance, we find that marginalised groups and those who were discriminated against were often watched more closely than others. In 18th-century New York City, for example, the so-called lantern laws obliged enslaved black, mixed-race and indigenous people to carry candle lanterns when they moved about the city after sunset unaccompanied by a white person. Marginalised communities have also always been of special interest to the surveillance apparatus: the so-called Rosa Listen (“pink lists”) were kept by police departments in Germany even after homosexuality was decriminalised in 1969, while in the U.S. the FBI’s first director, J. Edgar Hoover, had the bureau keep extensive dossiers on social movements and political dissidents.

But with new technologies, old inequalities are reappearing in novel and unexpected forms. A good example of this is facial recognition. Most facial recognition systems still perform best at recognising the faces of white men. Joy Buolamwini, a researcher at the MIT Media Lab, tested commercially released facial-analysis software made by Microsoft, IBM and the Chinese company Face++, and found that all of the systems were very good at identifying the gender of lighter-skinned men. Darker-skinned men, however, were misclassified six per cent of the time, and darker-skinned women as often as 30.3 per cent of the time. In high-stakes areas such as law enforcement, misidentification could implicate people in crimes they did not commit. But even in seemingly mundane environments – such as football stadiums or concert halls – such surveillance is both Orwellian and Kafkaesque: automated misidentification shifts the burden of proof onto the falsely recognised individuals, who suddenly find themselves needing to prove that they are who they say they are, and not who a system says they are.


Recently, awareness of the bias baked into the design of technology has been growing, and over the past two years, fixing in-built discrimination has become a key priority for technology companies and researchers alike. But building systems that perform with greater parity will not necessarily lead to greater justice or to less discrimination. Let us return to the example of facial recognition. In his essay “Against Black Inclusion in Facial Recognition”, software developer Nabil Hassein states: “I have no reason to support the development or deployment of technology which makes it easier for the state to recognise and surveil members of my community.” His point alerts us to the fact that discrimination and bias do not only happen while a technology is used – they also happen before and after its use. Systemic injustices, as well as individuals’ views and assumptions, influence which products and services are designed and built, who builds them, how they are used and how their results are interpreted and applied. As the historian Melvin Kranzberg observed as early as 1985: “Technology is neither good nor bad; nor is it neutral.”

Both low-cost smartphones and facial recognition systems are examples of how the harms amplified by technologies tend to disproportionately affect those who are already marginalised. Technology policy is thus tightly intertwined with questions of social and global justice – and it should be recognised as such at the policy level. Yet there has been a strange tendency to treat the tech industry as fundamentally different from other sectors. It would never occur to us to regulate pharmaceutical companies through non-binding and unenforceable ethical guidelines. We do not expect oil companies to self-regulate when it comes to complying with environmental protection rules. And in hardly any other sector do we place so much of the burden of responsibility on individuals to protect themselves. When we go to a restaurant or buy food at the supermarket, we do not come equipped with food-safety testing kits – we trust that what we buy is safe.


Privacy creates the safe space within which we are not judged, assessed or categorised. It is the space in which we can develop our identity, change who we are and decide who we want to become. But what is at stake is more than individual privacy. In an increasingly automated world where everything is turned into data, what is at stake is the distribution of power between people, the market and the state. That is why perhaps one of the most pressing tasks of this decade – next to the climate crisis and growing inequality – is to vigorously defend our rights, as well as the norms and rules that should govern powerful technologies, the companies that build them and the governments that deploy them. Governments – especially democratic ones – need to resist the temptation to undermine civil liberties in the name of safety and security. When it comes to empowering individuals vis-à-vis technology companies, the clearest way is through laws and regulations. Rules that govern how data can be used, for instance, do much more than simply protect people’s data: they also mitigate some of the informational asymmetries that exist between people and the technologies they rely on.

Binding laws and regulations are often cast as a threat to technological progress and innovation. Indeed, if progress is taken to mean “moving fast and breaking things”, as one of Facebook’s early mottos declared, then they most certainly are. But if the tech scandals of the past three years have taught us anything, it is that breaking things comes with collateral damage, the price of which we pay collectively. Instead of pitting regulation against innovation, we should perhaps ask ourselves what kind of innovation and progress we as a society and as individual voters really want: innovation that benefits the few, or innovation that benefits the many? Do we want progress toward a world in which democracy and human rights can flourish, or progress toward a world in which they are under increasing duress? There are many truly exciting and ground-breaking possibilities that emerging technologies can enable. It is up to us to ensure that we are creating the right conditions for a better world to be possible.

Frederike Kaltheuner is a civil rights activist and writer based in London. Until 2019 she led the data exploitation programme of the international civil rights organisation Privacy International. Since 2019 she has been a Tech Policy Fellow of the Mozilla Foundation. She studied Internet Science at Oxford and Philosophy, Politics and Economics in Maastricht and Istanbul. As an expert witness, she has testified at hearings of the British and European Parliaments on topics such as artificial intelligence and data ethics. As an expert on new technologies, Kaltheuner is a regular guest on numerous television programmes, including “BBC News” and “Al Jazeera”.

Nele Obermüller is an author and freelance journalist who works in German and English. Her articles have appeared in “Deutsche Welle”, “The Guardian”, “Food” and “Vice”, among others, and she has written for the UNHCR and the European Commission. Obermüller studied Criminology at Cambridge and Psychology, Philosophy and Cultural Studies in Sussex and Berlin. She has received several awards for her journalistic work, including the Guardian International Development Journalism Award. She lives in Berlin.

An earlier version of this text was published in German in the book “Datengerechtigkeit” by Frederike Kaltheuner and Nele Obermüller (Nicolai Publishing & Intelligence GmbH, Berlin 2018).

 

This text was published in the HAU publication for the festival “Spy on Me #2 – Künstlerische Manöver für die digitale Gegenwart” (“Artistic Manoeuvres for the Digital Present”).