When the algorithm that alerts Social Affairs believes that Black families are dysfunctional

A family enters their apartment in a public housing block in New Orleans, United States. Mario Tama / Getty

A decade ago, the New Zealand government commissioned two economists to develop a statistical model to calculate the chances of a newborn being abused during its first five years of life. The program was made public in 2014, and a group of researchers revealed that it was wrong in 70% of cases. The plan was shelved in 2015, before it could be applied to a sample of 60,000 newborns. By then, the developers of that system had already secured a contract to create a similar risk-prediction model in Allegheny County, Pennsylvania.

The system assigned a score to each pregnancy based on the analysis of 132 variables, such as how long the family had received public aid, the mother’s age, whether the child was born into a single-parent household, and the parents’ mental health and criminal records. When the value returned by the algorithm was high, social workers received a warning: check that everything is in order in that family. Even when the alert turns out to be a false alarm, each home visit is recorded, in a country where custody of a child can be withdrawn for not having a full refrigerator.
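The mechanism described above, a weighted score over case variables with a cutoff that triggers a home visit, can be sketched in a few lines. Everything here (variable names, weights, the threshold) is a hypothetical illustration, not the actual Allegheny model, which drew on 132 statistically fitted variables.

```python
# Hypothetical sketch of a threshold-based risk-scoring alert.
# Variable names, weights, and the cutoff are invented for illustration.

# A hypothetical subset of case variables drawn from public records
case = {
    "months_on_public_aid": 18,
    "mother_age": 19,
    "single_parent": 1,
    "parent_criminal_record": 0,
}

# Hypothetical weights; a real model would fit these statistically
weights = {
    "months_on_public_aid": 0.5,
    "mother_age": -0.2,        # younger mothers score higher in this sketch
    "single_parent": 3.0,
    "parent_criminal_record": 4.0,
}

ALERT_THRESHOLD = 10.0  # hypothetical cutoff

def risk_score(case, weights):
    """Weighted sum over the case's variables."""
    return sum(weights[k] * v for k, v in case.items())

score = risk_score(case, weights)
if score >= ALERT_THRESHOLD:
    print("Alert: send a social worker to check on the family")
```

In this invented example the score works out to 8.2, below the cutoff, so no alert fires; the point is only that a single number, built from whichever variables happen to be recorded, decides whether a family gets a visit.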

The political scientist Virginia Eubanks (New York, 1972) has investigated the use of this tool and concludes that it is especially damaging to the working classes and the poor, which in the American context means a disproportionate share of Black and Latino families. She tells the story in her book Automating Inequality, which Captain Swing puts on sale in Spanish this Monday. The English edition received awards, praise from authors such as Naomi Klein and Cathy O’Neil, and good reviews in media such as The New York Times.

One reason for this effect is that the data the algorithm uses to score the risk of abuse is collected from public records, and in the United States it is mostly the poor who interact with public institutions. The scores would look very different if the system also drew on private data: whether parents use babysitters (and are therefore away from home), the records of paid psychologists and psychiatrists, attendance at Alcoholics Anonymous or luxury detox centers, and so on.

Eubanks breaks down this and two other examples of algorithms used in public-sector decision-making in her book. The selected case studies, all from the United States, show that technology is not synonymous with aseptic efficiency, but rather the opposite. “These tools, supposedly built to eliminate inequalities and achieve neutrality, actually amplify the inequalities we already have in society,” the author explains to EL PAÍS by video call.

Virginia Eubanks, during a public interview in Utrecht, the Netherlands, in 2019. Sebastiaan ter Burg

Criminalizing poverty

In addition to Allegheny’s predictive abuse model, Eubanks analyzes a system applied in Indiana to automate eligibility (that is, who can be a beneficiary) for welfare programs. The problem is that it was designed so that any error in the process or paperwork was always attributed to the applicant. The result was the denial of one million benefits. It did not take long for especially bloody cases to come to light of potential beneficiaries denied aid, such as six-year-old girls from humble families, or grandmothers hospitalized for heart problems who had their free medical coverage (Medicaid) withdrawn. The system was eventually canceled, but spending on social benefits is lower today than ever.
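The design flaw described above, in which every failure, whatever its cause, is recorded as the applicant’s fault, can be illustrated with a short sketch. The function names and status strings are hypothetical, not taken from the real Indiana system.

```python
# Sketch of the error-attribution flaw: any failure anywhere in the
# automated pipeline collapses into the same applicant-blaming outcome.
# Function names and status strings are hypothetical.

def process_application(application, verify_documents):
    """Return (approved, reason) for a benefits application."""
    try:
        documents_ok = verify_documents(application)
    except Exception:
        # A lost fax, a system outage, a clerk's typo: the system does not
        # distinguish them from applicant non-compliance.
        return (False, "failure to cooperate")
    if not documents_ok:
        return (False, "failure to cooperate")
    return (True, "approved")

# Even a pure system error denies the benefit and blames the applicant:
def broken_verifier(application):
    raise IOError("document scanner offline")

result = process_application({"name": "applicant"}, broken_verifier)
```

Here `result` comes back as a denial for “failure to cooperate” even though the applicant did nothing wrong; the asymmetry is built into the control flow, not into any one decision.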

The other case study is a computerized system that decides which of the 60,000 homeless people in Los Angeles County in 2017 should receive help, knowing that 25,000 would be left unattended. According to its designers, the system would objectively weigh the most pressing cases in order to allocate resources to them. “There is another way to understand it,” Eubanks writes: as a cost-benefit analysis. “It is cheaper to provide supportive housing to the most vulnerable and chronically homeless people than to keep them in emergency rooms, mental health facilities or prisons. And it is cheaper to provide the less vulnerable homeless with rapid rehousing, a small and time-limited investment, than to let them become chronically homeless.”
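The triage logic Eubanks describes, ranking everyone by a vulnerability score and serving only as many as resources allow, reduces to a sort and a cutoff. The names, scores, and slot count below are invented for illustration; the real Los Angeles system scores people through a standardized survey.

```python
# Sketch of vulnerability-ranked triage under scarce housing slots.
# Names, scores, and capacity are invented for illustration.

def allocate_housing(people, slots):
    """Rank by vulnerability (highest first); the top `slots` get housing."""
    ranked = sorted(people, key=lambda p: p["vulnerability"], reverse=True)
    return ranked[:slots], ranked[slots:]

people = [
    {"name": "A", "vulnerability": 9.1},  # chronically homeless
    {"name": "B", "vulnerability": 4.3},
    {"name": "C", "vulnerability": 7.8},
]
housed, unserved = allocate_housing(people, slots=2)
```

With two slots for three people, A and C are housed and B is left unserved: the algorithm does not create resources, it only decides who goes without them.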


The common thread of the cases Eubanks details in the book is, in her opinion, that all of them start from the criminalization of the least favored. And that has to do with the American conception of the less well off. “We must break the deeply rooted idea we have in the United States that poverty is the result of moral failure,” says the professor of Political Science at the University at Albany. Defenders of the use of algorithms in the management of public services, she says, claim that big data revolutionizes rigid bureaucracies, stimulates innovative solutions, and increases transparency. “But if we focus on programs targeting the poor and working class in particular, the new data-analytics regime is more of an evolution than a revolution. It is an expansion and continuation of the moralistic and punitive strategies of poverty management that have accompanied us since 1820,” she states.

Winds of change

The pandemic has shown that discriminatory treatment of the poorest exists. When COVID-19 hit the United States, the computer systems that process and grant unemployment benefits were overwhelmed by demand. “In response, many states relaxed the requirements to show that you need the help. In other words: when the crisis affects the middle classes, suddenly we no longer need such complex rules to access social benefits,” Eubanks illustrates.

A man walks past the tents of homeless people on a Los Angeles street in April of this year. FREDERIC J. BROWN / AFP

Expanding the number of users of automated social-benefit systems is not the only novelty the pandemic has brought. The academic believes that American society is beginning to debate the use of technology across different spheres. “There are now many people thinking more critically about what digital surveillance tools, such as facial recognition or police body cameras, mean and what effect they have on their lives,” she says. This reflection is led by social movements headed by young African Americans, says Eubanks. “It is the fruit of a combination of concern about how opaque algorithms operate, the racist legacy of American policing, and suddenly facing tremendous financial suffering. That creates a moment of a certain social alignment that allows us to review everything.”

Beyond the United States

For algorithms to stop deciding whether a child can be taken from their parents’ custody, Eubanks believes, it is necessary to internalize that these programs are not neutral. “If we want to build better tools, we have to build them with better values from the start, because if we don’t, they will end up carrying the values we already have, and we shouldn’t be surprised that they produce the same results over and over again,” she reflects.

The systems described in the book need not be confined to the United States. In fact, the professor is now studying the patterns that emerge when algorithms are applied to public decisions about social benefits.

“The infrastructure for these systems to work is ready in many parts of the world,” she says. “What changes are the political regimes, the moral austerity, or the moments when people look the other way. Businesses make these tools and government agencies treat them as solutions to austerity. I don’t think they will disappear on their own.”

You can follow EL PAÍS TECNOLOGÍA on Facebook and Twitter.
