
Rise of the racist robots – how AI is learning all our worst impulses


Current laws ‘largely fail to address discrimination’ when it comes to big data. Photograph: artpartner-images/Getty Images


This article, titled “Rise of the racist robots – how AI is learning all our worst impulses”, was written by Stephen Buranyi for The Guardian on Tuesday 8th August 2017 06.00 UTC.

In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate of white people (45% to 24%), according to the investigative journalism organisation ProPublica.
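
To see what that kind of disparity measurement amounts to, here is a minimal sketch in Python of a per-group false positive rate: the share of people flagged as likely to reoffend among those who did not. The records below are invented to mirror the reported 45% v 24% gap; they are not ProPublica’s data.

```python
# Hypothetical sketch: the false positive rate per group, i.e. the share of
# people flagged "likely to reoffend" among those who in fact did not reoffend.
# The toy records below are invented for illustration, not ProPublica's data.

def false_positive_rate(records):
    """records: list of (flagged_high_risk, reoffended) boolean pairs."""
    non_reoffenders = [flagged for flagged, reoffended in records if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders)

toy_records = {
    "black defendants": [(True, False)] * 45 + [(False, False)] * 55,
    "white defendants": [(True, False)] * 24 + [(False, False)] * 76,
}

for group, records in toy_records.items():
    print(f"{group}: false positive rate = {false_positive_rate(records):.0%}")
```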

Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too.

How could this have happened? The private company that supplies the software, Northpointe, disputed the conclusions of the report, but declined to reveal the inner workings of the program, which it considers commercially sensitive. The accusation gave frightening substance to a worry that has been brewing among activists and computer scientists for years and which the tech giants Google and Microsoft have recently taken steps to investigate: that as our computational tools have become more advanced, they have become more opaque. The data they rely on – arrest records, postcodes, social affiliations, income – can reflect, and further ingrain, human prejudice.

The promise of machine learning and other programs that work with big data (often under the umbrella term “artificial intelligence” or AI) was that the more information we feed these sophisticated computer algorithms, the better they perform. Last year, according to the global management consultancy McKinsey, tech companies spent somewhere between $20bn and $30bn on AI, mostly in research and development. Investors are making a big bet that AI will sift through the vast amounts of information produced by our society and find patterns that will help us be more efficient, wealthier and happier.

It has led to a decade-long AI arms race in which the UK government is offering six-figure salaries to computer scientists. They hope to use machine learning to, among other things, help unemployed people find jobs, predict the performance of pension funds and sort through revenue and customs casework. It has become a kind of received wisdom that these programs will touch every aspect of our lives. (“It’s impossible to know how widely adopted AI is now, but I do know we can’t go back,” one computer scientist says.)

But, while some of the most prominent voices in the industry are concerned with the far-off future apocalyptic potential of AI, there is less attention paid to the more immediate problem of how we prevent these programs from amplifying the inequalities of our past and affecting the most vulnerable members of our society. When the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases.

“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.

We have already seen glimpses of what might be on the horizon. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.

These small-scale incidents were all quickly fixed by the companies involved and have generally been written off as “gaffes”. But the Compas revelation and Lum’s study hint at a much bigger problem, demonstrating how programs could replicate the sort of large-scale systemic biases that people have spent decades campaigning to educate or legislate away.

Computers don’t become biased on their own. They need to learn that from us. For years, the vanguard of computer science has been working on machine learning, often having programs learn in a similar way to humans – observing the world (or at least the world we show them) and identifying patterns. In 2012, Google researchers fed their computer “brain” millions of images from YouTube videos to see what it could recognise. It responded with blurry black-and-white outlines of human and cat faces. The program was never given a definition of a human face or a cat; it had observed and “learned” two of our favourite subjects.

Tay, Microsoft’s artificial intelligence chatbot. Photograph: Microsoft

This sort of approach has allowed computers to perform tasks – such as language translation, recognising faces or recommending films in your Netflix queue – that just a decade ago would have been considered too complex to automate. But as the algorithms learn and adapt from their original coding, they become more opaque and less predictable. It can soon become difficult to understand exactly how the complex interaction of algorithms generated a problematic result. And, even if we could, private companies are disinclined to reveal the commercially sensitive inner workings of their algorithms (as was the case with Northpointe).

Less difficult is predicting where problems can arise. Take Google’s image recognition program: cats are uncontroversial, but what if it were to learn what British and American people think a CEO looks like? The results would likely resemble the near-identical portraits of older white men that line any bank or corporate lobby. And the program wouldn’t be inaccurate: only 7% of FTSE CEOs are women. Even fewer, just 3%, have a BME background. When computers learn from us, they can learn our less appealing attributes.

Joanna Bryson, a researcher at the University of Bath, studied a program designed to “learn” relationships between words. It trained on millions of pages of text from the internet and began clustering female names and pronouns with jobs such as “receptionist” and “nurse”. Bryson says she was astonished by how closely the results mirrored the real-world gender breakdown of those jobs in US government data, a nearly 90% correlation.

“People expected AI to be unbiased; that’s just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things,” Bryson says.
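
For a sense of what that measurement looks like in practice, here is a minimal sketch of the kind of word-association test used in this line of research, assuming pretrained word vectors loaded through the gensim library. The word lists and the scoring function are illustrative choices, not Bryson’s actual method or data.

```python
# A minimal sketch of measuring gender associations in pretrained word vectors,
# assuming the gensim library and its downloadable GloVe vectors are available.
# Illustrative only; not the study's actual code.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # pretrained GloVe word vectors

female_terms = ["she", "her", "woman"]
male_terms = ["he", "him", "man"]
occupations = ["nurse", "receptionist", "engineer", "programmer"]

def gender_lean(word):
    """Positive = closer to the female terms, negative = closer to the male terms."""
    female = sum(model.similarity(word, t) for t in female_terms) / len(female_terms)
    male = sum(model.similarity(word, t) for t in male_terms) / len(male_terms)
    return female - male

for job in occupations:
    print(f"{job}: {gender_lean(job):+.3f}")
```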

So who stands to lose out the most? Cathy O’Neil, the author of the book Weapons of Math Destruction about the dangerous consequences of outsourcing decisions to computers, says it’s generally the most vulnerable in society who are exposed to evaluation by automated systems. A rich person is unlikely to have their job application screened by a computer, or their loan request evaluated by anyone other than a bank executive. In the justice system, the thousands of defendants with no money for a lawyer or other counsel would be the most likely candidates for automated evaluation.

In London, Hackney council has recently been working with a private company to apply AI to data, including government health and debt records, to help predict which families have children at risk of ending up in statutory care. Other councils have reportedly looked into similar programs.

In her 2016 paper, HRDAG’s Kristian Lum demonstrated who would be affected if a program designed to increase the efficiency of policing was let loose on biased data. Lum and her co-author took PredPol – the program that suggests the likely location of future crimes based on recent crime and arrest statistics – and fed it historical drug-crime data from the city of Oakland’s police department. PredPol showed a daily map of likely “crime hotspots” that police could deploy to, based on information about where police had previously made arrests. The program was suggesting majority black neighbourhoods at about twice the rate of white ones, despite the fact that when the statisticians modelled the city’s likely overall drug use, based on national statistics, it was much more evenly distributed.

As if that wasn’t bad enough, the researchers also simulated what would happen if police had acted directly on PredPol’s hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most. That caused still more police to be sent in. It was a virtual mirror of the real-world criticisms of initiatives such as New York City’s controversial “stop-and-frisk” policy. By over-targeting residents with a particular characteristic, police arrested them at an inflated rate, which then justified further policing.
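
The dynamic is simple enough to reproduce in a few lines. The toy simulation below is not PredPol’s algorithm, which is proprietary; it only illustrates how sending police wherever past arrests are concentrated makes the recorded data confirm itself, even when the underlying rates are identical.

```python
# Toy simulation of the feedback loop described above. Two neighbourhoods have
# the same true rate of drug crime, but the arrest records start slightly skewed.
# Each day, police are deployed to the neighbourhood with the most recorded
# arrests, and new arrests can only be recorded where police are present.
import random

random.seed(0)

true_rate = {"A": 0.5, "B": 0.5}       # equal underlying crime rates
recorded_arrests = {"A": 12, "B": 10}  # historical records slightly skewed

for day in range(200):
    target = max(recorded_arrests, key=recorded_arrests.get)  # "predicted hotspot"
    if random.random() < true_rate[target]:
        recorded_arrests[target] += 1

print(recorded_arrests)  # all new arrests land in A; the initial skew becomes self-confirming
```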

PredPol’s co-developer, Prof Jeff Brantingham, acknowledged the concerns when asked by the Washington Post. He claimed that – to combat bias – drug arrests and other offences that rely on the discretion of officers were not used with the software because they are often more heavily enforced in poor and minority communities.

And while most of us don’t understand the complex code within programs such as PredPol, Hamid Khan, an organiser with Stop LAPD Spying Coalition, a community group addressing police surveillance in Los Angeles, says that people do recognise predictive policing as “another top-down approach where policing remains the same: pathologising whole communities”.

There is a saying in computer science, something close to an informal law: garbage in, garbage out. It means that programs are not magic. If you give them flawed information, they won’t fix the flaws, they just process the information. Khan has his own truism: “It’s racism in, racism out.”

It’s unclear how existing laws to protect against discrimination and to regulate algorithmic decision-making apply in this new landscape. Often the technology moves faster than governments can address its effects. In 2016, the Cornell University professor and former Microsoft researcher Solon Barocas claimed that current laws “largely fail to address discrimination” when it comes to big data and machine learning. Barocas says that many traditional players in civil rights, including the American Civil Liberties Union (ACLU), are taking the issue on in areas such as housing or hiring practices. Sinyangwe recently worked with the ACLU to try to pass city-level policies requiring police to disclose any technology they adopt, including AI.

Samuel Sinyangwe … working to force authorities to disclose when they use technology. Photograph: Samuel Sinyangwe

But the process is complicated by the fact that public institutions adopt technology sold by private companies, whose inner workings may not be transparent. “We don’t want to deputise these companies to regulate themselves,” says Barocas.

In the UK, there are some existing protections. Government services and companies must disclose if a decision has been entirely outsourced to a computer, and, if so, that decision can be challenged. But Sandra Wachter, a law scholar at the Alan Turing Institute and the University of Oxford, says that the existing laws don’t map perfectly to the way technology has advanced. There are a variety of loopholes that could allow the undisclosed use of algorithms. She has called for a “right to explanation”, which would require a full disclosure as well as a higher degree of transparency for any use of these programs.

The scientific literature on the topic now reflects a debate on the nature of “fairness” itself, and researchers are working on everything from ways to strip “unfair” classifiers from decades of historical data, to modifying algorithms to skirt round any groups protected by existing anti-discrimination laws. One researcher at the Turing Institute told me the problem was so difficult because “changing the variables can introduce new bias, and sometimes we’re not even sure how bias affects the data, or even where it is”.

The institute has developed a program that tests a series of counterfactual propositions to track what affects algorithmic decisions: would the result be the same if the person was white, or older, or lived elsewhere? But there are some who consider it an impossible task to integrate the various definitions of fairness adopted by society and computer scientists, and still retain a functional program.
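
One way to picture that counterfactual test, sketched below in Python: take a model’s decision on a record, flip one protected attribute at a time, and check whether the decision changes. The model object and field names here are hypothetical; this is an illustration of the idea, not the institute’s program.

```python
# Hypothetical sketch of a counterfactual check: flip each protected attribute
# in turn and record any value that changes the model's decision.
# `model.predict` and the field names are invented for illustration.

def counterfactual_check(model, record, protected_values):
    """protected_values: dict mapping a protected field to alternative values to try."""
    baseline = model.predict(record)
    flips = {}
    for field, alternatives in protected_values.items():
        for value in alternatives:
            variant = {**record, field: value}  # same record, one attribute changed
            if model.predict(variant) != baseline:
                flips.setdefault(field, []).append(value)
    return flips  # empty dict = decision unchanged under these counterfactuals

# Example usage with a hypothetical loan model and applicant record:
# flips = counterfactual_check(
#     loan_model,
#     {"age": 32, "ethnicity": "white", "postcode": "E8", "income": 28000},
#     {"ethnicity": ["black", "asian"], "age": [55], "postcode": ["SW1"]},
# )
```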

“In many ways, we’re seeing a response to the naive optimism of the earlier days,” Barocas says. “Just two or three years ago you had articles credulously claiming: ‘Isn’t this great? These things are going to eliminate bias from hiring decisions and everything else.’”

Meanwhile, computer scientists face an unfamiliar challenge: their work necessarily looks to the future, but in embracing machines that learn, they find themselves tied to our age-old problems of the past.

Follow the Guardian’s Inequality Project on Twitter here, or email us at inequality.project@theguardian.com

guardian.co.uk © Guardian News & Media Limited 2010
