If you walk through the streets of London, your face may be scanned multiple times without your knowledge. The patterns that define your face will be matched in real time against those of a police watch list supposedly made up of wanted criminals and “people who pose a risk to others or to themselves.” In the event of a match, an officer will stop you and ask for identification. If you confirm that you are indeed who the system says you are, you will be arrested on the spot.
This futuristic brand of policing has been part of everyday life in the UK for years, at least since 2016, when it emerged that the London police had experimented with facial recognition systems during the Notting Hill Carnival, one of the most popular street festivals in the country. Five years have passed and the British security forces continue to use this technology, which is well established in countries such as China, where it helps underpin an iron-fisted system of social control, and the United States, whose population is beginning to associate it with racial discrimination.
The application of these systems raises serious questions among privacy activists, and British institutions themselves do not hold a unified position. “I am deeply concerned about the potential for real-time facial recognition systems to be used inappropriately, excessively, or recklessly,” said Elizabeth Denham, head of the British equivalent of the Data Protection Agency, last month. The new Biometrics and Surveillance Camera Commissioner, an independent figure who oversees the work of the police, is nevertheless more favorable to facial recognition than his predecessor in office.
It is not known exactly what use is made of these systems in the UK, where they are, in theory, in a testing phase. The London Metropolitan Police (Met), the largest force in the country and the one leading the rollout of facial recognition, has not answered EL PAÍS’s questions about the number of cameras deployed in the city with this technology, the number of successful arrests made thanks to them, or future plans. It is known, however, that last year the faces of at least 8,600 people were scanned without their consent in a single week at Oxford Circus, one of the busiest spots in the city. And that of the eight people who were stopped because the system flagged them as suspects, only one was actually wanted by the police, which yields an 86% rate of false positives.
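As a sanity check on those figures, a direct calculation from the numbers reported (eight stops, one person actually wanted) gives a false positive rate of 87.5%, so the 86% cited presumably reflects rounding or slightly different counts. A minimal sketch of the arithmetic, with the Oxford Circus figures hard-coded purely for illustration:

```python
def false_positive_rate(stops, confirmed_matches):
    """Share of system-triggered police stops that turned out to be
    wrong, i.e. the person stopped was not actually wanted."""
    if stops == 0:
        raise ValueError("no stops recorded")
    return (stops - confirmed_matches) / stops

# Figures reported above: eight stops, one person genuinely wanted.
rate = false_positive_rate(stops=8, confirmed_matches=1)
print(f"{rate:.1%}")  # prints 87.5%
```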
According to its website, the objective of the Met’s facial recognition systems is to help “combat the use of violence and the exploitation of minors and help protect the most vulnerable.” The software compares, in real time, the images it captures with those on the list of people under surveillance. It does this by measuring the structure of each face, including the distances between the eyes, nose, mouth, and jaw.
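The landmark-distance matching described above can be sketched in a few lines. This is an illustrative toy, not NEC’s actual algorithm (modern systems use learned neural-network embeddings rather than raw landmark distances), and the function names and the 0.03 threshold are invented for the example:

```python
import math

def face_signature(landmarks):
    """Reduce a face to its pairwise landmark distances, normalized by
    the largest one so the signature does not depend on image scale.
    landmarks: dict mapping a feature name (eye, nose, ...) to (x, y)."""
    names = sorted(landmarks)
    dists = [math.dist(landmarks[a], landmarks[b])
             for i, a in enumerate(names) for b in names[i + 1:]]
    largest = max(dists)
    return [d / largest for d in dists]

def matches(sig_a, sig_b, threshold=0.03):
    """Declare a match when the two signatures differ little on average.
    The threshold is arbitrary here; real systems tune it to trade
    false positives against false negatives."""
    diff = sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
    return diff < threshold
```

Scanning a crowd would then amount to computing a signature for each detected face and comparing it against every signature on the watch list; false positives like those described above arise when the threshold is loose enough that distinct faces fall within it.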
These systems have been developed by the Japanese company NEC. A report from the organization Privacy International, however, reveals that the British company Facewatch had contacts in 2019 with the Met and with the City of London Police (the financial district has its own force) to share biometric data on criminals. “It is already wrong for the police to use facial recognition, but doing it through secret agreements with private companies on top of that is totally intolerable,” says Ioannis Kouvakas, legal advisor to the aforementioned organization.
“Anyone can decide to bypass a facial recognition system; it is not a crime, nor is it considered an obstruction of police work,” says the London police website. Yet a video by Big Brother Watch, one of the local organizations that has fought hardest for the withdrawal of these systems, shows officers taking the details of people who cover their faces as they leave the subway and come upon a police van with cameras pointed directly at them.
In an attempt to improve their image, the police themselves asked two academics from the University of Sussex to write an independent report evaluating the first pilot tests of this technology in the city, carried out between 2016 and 2019. Its conclusions are not encouraging. “It is very possible that the real-time facial recognition trial process would have been ruled unlawful if it had been taken to court,” the report states verbatim. The failure to clearly notify people that the technology is in use, the legal doubts its deployment raises, and the fact that it is not a technology “necessary in a democratic society” underpin the report’s verdict.
Pioneers in Europe
The United Kingdom is not the only European country in which this technology is used for surveillance: a recent report by European Digital Rights (EDRi) indicates that police in Germany and the Netherlands have also carried out tests in train stations and shopping centers, despite the technology being technically proscribed by the EU (a moratorium weighs on it, although its application is allowed in certain cases). But it is fair to say that the island of Great Britain is, by far, the place on the Old Continent making the greatest effort to implement these systems.
Why this interest? Experts point to a number of possible reasons. Among them, that video surveillance seems to be widely accepted by the British. An estimated four million surveillance cameras dot the streets of the country’s cities; in London alone there are said to be more than half a million, according to official sources. “If Spain wanted to bet on facial recognition, thousands of cameras would have to be installed. In the United Kingdom, the infrastructure is already in place: you just need to update the software,” explains Javier Ruiz, a researcher at the Ada Lovelace Institute.
The country’s historical exposure to terrorism has also played a role. The IRA attacks of the last decades of the 20th century left two visible marks on the London urban landscape. One is the absence of wastebaskets in public places (this is where the terrorists used to hide their bombs). The other is the multiplication of cameras. In the City, the financial district, this took the form of the so-called Ring of Steel, a video surveillance system, the most advanced of its time, which allowed the police to take control of all the cameras in the area and follow any car that circulated there. There are now plans to update that system with facial recognition.
The latitude the United Kingdom grants private initiative in security matters has also encouraged the expansion of this technology. In London, it is common for merchants’ associations (business partnerships) to hire their own private security, which shares photos and data on known thieves and which, as in the case of the King’s Cross development, sometimes even deploys its own facial recognition systems in collaboration with the Metropolitan Police. And while in Spain the supermarket chain Mercadona has had to pay €2.5 million for installing systems of this kind in some of its stores, the popular British chain Co-op has been doing the same for some time, as Wired revealed.
Controversial pilot experiences
That Notting Hill Carnival of 2016 was the first recorded instance of police use of facial recognition in the UK. The Met placed vans with cameras and screens at various locations chosen for being the busiest, with the stated goal of contributing to public order. They achieved rather the opposite: as later became known, practically none of the people stopped corresponded to the individuals the system had linked them to. The deployment also ignited racial grievances. “Anyone living in the UK knows that Notting Hill Carnival is first and foremost a celebration of black culture. Many groups organized protests against what they considered to be a racist act,” recalls Ella Jakubowska, coordinator of the facial biometrics program at EDRi, a pan-European NGO that works to defend human rights in the digital age.
The South Wales Police also ran their own pilot test at two football matches in 2018, setting up vans with cameras equipped with facial recognition systems capable of registering 50 faces per second around a stadium in Cardiff: not because they knew suspects were in the area, but because these were places that many people pass through. A privacy rights activist took the force to court and won: the indiscriminate use of this technology was found to collide with the right to privacy and with data protection laws, among others.
“That ruling now sets the guidelines for this activity,” explains Ruiz, who is currently preparing a report on the use of facial recognition. “The case law is confusing and does not manage to clarify how to apply it. In principle, it does not prohibit the technology, but it urges whoever uses it to justify why they place a camera at a specific point, and it establishes that the system must reach a certain precision so that it does not stop innocent people.”
Although there is a tradition of hypervigilance in the United Kingdom, knowing that we are being observed profoundly alters our behavior. “That is one of the issues that concerns me most about this technology. The fact that it robs us of the ability to lose ourselves in the crowd should worry us a lot,” says Evan Selinger, professor of philosophy at the Rochester Institute of Technology and a scholar of the effects of facial recognition, a topic on which he has written several op-eds in The New York Times.
The philosopher Carissa Véliz shares that fear. A resident of Oxford, at whose university she teaches, she is exasperated that in the ten minutes it takes her to walk from home to college, she passes at least 20 or 30 cameras, some of which may be equipped with facial recognition systems. “One of the great cultural and technological advances that the arrival of cities brought with it was anonymity, which is very important, among other things, for being able to protest in the street,” she stresses. Losing it may not seem relevant to us now, she says, because we are not currently living under a dictatorship. “But the day we have one, and these things are cyclical, it will be very difficult to resist an authoritarian regime without anonymity.”