Data Scientist Cathy O’Neil: “Algorithms Are Opinions Embedded in Code”
I met Cathy in Cambridge when she spoke at MIT a few years ago. This is re-posted from Naked Capitalism. Cathy’s whole TED Talk can be watched after the fold.
Cathy O’Neil has a PhD in mathematics from Harvard and is the author of the best seller Weapons of Math Destruction. She is also involved in Occupy Wall Street.
In this TED talk, she describes how algorithms routinely institutionalize bias, bad practices, and personal opinion. Worse, the “gee whiz” factor of technology, combined with the difficulty lay people have in forcing algorithm creators to make their assumptions and processes transparent and to allow audits of their algorithms, makes them an all-too-easy way to reinforce and legitimize skewed power dynamics.
One part of her talk, on how hiring practices reinforce existing “success” models, which often have embedded biases, is consistent with our 2007 Conference Board Review article, Fit v. Fitness.
Algorithms are everywhere. They sort and separate the winners from the losers. The winners get the job or a good credit card offer. The losers don’t even get an interview, or they pay more for insurance. We’re being scored with secret formulas that we don’t understand and that often have no systems of appeal. That raises the question: what if the algorithms are wrong?
To build an algorithm you need two things: data, meaning what happened in the past, and a definition of success, the thing you’re looking for and often hoping for. You train the algorithm by looking at the data and figuring out what is associated with success. What situation leads to success?
Actually, everyone uses algorithms. They just don’t formalize them in written code. Let me give you an example. I use an algorithm every day to make a meal for my family. The data I use is the ingredients in my kitchen, the time I have, the ambition I have, and I curate that data. I don’t count those little packages of ramen noodles as food.
My definition of success is: a meal is successful if my kids eat vegetables. It’s very different from if my youngest son were in charge. He’d say success is if he gets to eat lots of Nutella. But I get to choose success. I am in charge. My opinion matters. That’s the first rule of algorithms.
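Her meal example translates directly into code. A minimal sketch (the ingredient names and rules here are invented for illustration, not from the talk):

```python
# O'Neil's meal "algorithm": curated data in, a chosen definition of
# success out. All ingredients and rules are illustrative.

pantry = ["spinach", "rice", "chicken", "ramen", "carrots", "nutella"]

# Curation step: the builder decides what counts as data at all.
FOOD = {"spinach", "rice", "chicken", "carrots"}  # ramen and Nutella excluded
ingredients = [item for item in pantry if item in FOOD]

VEGETABLES = {"spinach", "carrots"}

def cook_success(meal):
    """O'Neil's definition of success: the kids eat vegetables."""
    return any(item in VEGETABLES for item in meal)

def kid_success(meal):
    """Her son's competing definition: lots of Nutella."""
    return meal.count("nutella") >= 2

meal = ingredients  # cook everything that survived curation
print(cook_success(meal))  # True: success under the cook's definition
print(kid_success(meal))   # False: failure under the son's definition
```

Same kitchen, two opposite verdicts; the opinion lives in which success function gets deployed.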
Algorithms are opinions embedded in code. That’s very different from how most people think of algorithms. They think algorithms are objective and true and scientific. That’s a marketing trick. It’s also a marketing trick to intimidate you with algorithms, to make you trust and fear algorithms because you trust and fear mathematics. A lot can go wrong when we put blind faith in big data.
This is Kiri Soares. She’s a high school principal in Brooklyn. In 2011, she told me her teachers were being scored with a complex, secret algorithm called the “value-added model.” I told her, “Well, figure out what the formula is, show it to me. I’m going to explain it to you.” She said, “Well, I tried to get the formula, but my Department of Education contact told me it was math and I wouldn’t understand it.”
It gets worse. The New York Post filed a Freedom of Information Act request, got all the teachers’ names and all their scores and they published them as an act of teacher-shaming. When I tried to get the formulas, the source code, through the same means, I was told I couldn’t. I was denied. I later found out that nobody in New York City had access to that formula. No one understood it. Then someone really smart got involved, Gary Rubenstein. He found 665 teachers from that New York Post data that actually had two scores. That could happen if they were teaching seventh grade math and eighth grade math. He decided to plot them. Each dot represents a teacher.
What is that?
That should never have been used for individual assessment. It’s almost a random number generator.
But it was. This is Sarah Wysocki. She got fired, along with 205 other teachers, from the Washington, DC school district, even though she had great recommendations from her principal and the parents of her kids.
I know what a lot of you guys are thinking, especially the data scientists, the AI experts here. You’re thinking, “Well, I would never make an algorithm that inconsistent.” But algorithms can go wrong, even have deeply destructive effects, despite good intentions. And whereas an airplane that’s designed badly crashes to the earth and everyone sees it, an algorithm designed badly can go on for a long time, silently wreaking havoc.
This is Roger Ailes.
He founded Fox News in 1996. More than 20 women complained about sexual harassment. They said they weren’t allowed to succeed at Fox News. He was ousted last year, but we’ve seen recently that the problems have persisted. That raises the question: what should Fox News do to turn over a new leaf?
Well, what if they replaced their hiring process with a machine-learning algorithm? That sounds good, right? Think about it. The data, what would the data be? A reasonable choice would be the last 21 years of applications to Fox News. Reasonable. What about the definition of success? A reasonable choice would be, well, who is successful at Fox News? I guess someone who, say, stayed there for four years and was promoted at least once. Sounds reasonable. And then the algorithm would be trained. It would be trained on those applicants to learn what led to success, what kinds of applications historically led to success by that definition. Now think about what would happen if we applied that to a current pool of applicants. It would filter out women, because they do not look like the people who were successful in the past.
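The mechanism she describes is easy to demonstrate on synthetic data. A toy sketch (all numbers invented): a model trained on who “succeeded” in a biased shop reproduces the bias when scoring new applicants.

```python
import random
random.seed(0)

# Synthetic history: equally qualified applicants, but in the training
# era women were rarely allowed to "succeed" (stay four years, get
# promoted). The 0.2 handicap is an invented illustration.
history = []
for _ in range(10_000):
    gender = random.choice(["M", "F"])
    skill = random.random()  # same qualification distribution for all
    succeeded = skill > 0.5 and (gender == "M" or random.random() < 0.2)
    history.append((gender, skill, succeeded))

# "Training": estimate P(success | gender), as a naive model would.
def success_rate(g):
    outcomes = [s for gen, _, s in history if gen == g]
    return sum(outcomes) / len(outcomes)

print(f"P(success | M) = {success_rate('M'):.2f}")  # ~0.50
print(f"P(success | F) = {success_rate('F'):.2f}")  # ~0.10
# A score built from these rates ranks every male applicant above every
# equally skilled female applicant: the past, automated.
```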
Algorithms don’t make things fair if you just blithely, blindly apply algorithms. They don’t make things fair. They repeat our past practices, our patterns. They automate the status quo. That would be great if we had a perfect world, but we don’t. And I’ll add that most companies don’t have embarrassing lawsuits, but the data scientists in those companies are told to follow the data, to focus on accuracy. Think about what that means. Because we all have bias, it means they could be codifying sexism or any other kind of bigotry.
Thought experiment, because I like them: an entirely segregated society — racially segregated, all towns, all neighborhoods and where we send the police only to the minority neighborhoods to look for crime. The arrest data would be very biased. What if, on top of that, we found the data scientists and paid the data scientists to predict where the next crime would occur? Minority neighborhood. Or to predict who the next criminal would be? A minority. The data scientists would brag about how great and how accurate their model would be, and they’d be right.
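This thought experiment can be simulated in a few lines (rates and patrol counts invented). Arrest data mirrors where the police were sent, not where crime is, and a model that reallocates patrols by arrest counts keeps confirming itself:

```python
import random
random.seed(1)

# Two neighborhoods with IDENTICAL underlying offense rates.
TRUE_RATE = {"A": 0.05, "B": 0.05}
patrols = {"A": 90, "B": 10}  # police sent almost only to A

for year in range(5):
    # Arrests happen only where officers are looking.
    arrests = {n: sum(random.random() < TRUE_RATE[n]
                      for _ in range(patrols[n] * 10))
               for n in ("A", "B")}
    # The "predictive" model: allocate next year's patrols by arrest share.
    total = sum(arrests.values())
    patrols = {n: round(100 * arrests[n] / total) for n in ("A", "B")}
    print(year, arrests, patrols)
# Output stays roughly 90/10 every year. The model is "accurate" against
# its own arrest data and wrong about the world, exactly as she says.
```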
Now, reality isn’t that drastic, but we do have severe segregation in many cities and towns, and we have plenty of evidence of biased policing and justice system data. And we actually do predict hotspots, places where crimes will occur. And we do, in fact, predict individual criminality. The news organization ProPublica recently looked into one of those “recidivism risk” algorithms, as they’re called, being used in Florida during sentencing by judges. Bernard, on the left, the black man, was scored a 10 out of 10. Dylan, on the right, 3 out of 10. 10 out of 10, high risk. 3 out of 10, low risk. They were both brought in for drug possession. They both had records, but Dylan had a felony and Bernard didn’t. This matters, because the higher your score, the more likely you are to be given a longer sentence.
What’s going on? Data laundering. It’s a process by which technologists hide ugly truths inside black box algorithms and call them objective; call them meritocratic. When they’re secret, important and destructive, I’ve coined a term for these algorithms: “weapons of math destruction.”
They’re everywhere, and it’s not a mistake. These are private companies building private algorithms for private ends. Even the ones I talked about, for the teachers and the police, were built by private companies and sold to government institutions. They call it their “secret sauce”; that’s why they can’t tell us about it. It’s also private power. They are profiting from wielding the authority of the inscrutable. Now you might think, since all this stuff is private and there’s competition, maybe the free market will solve this problem. It won’t. There’s a lot of money to be made in unfairness.
Also, we’re not rational economic agents. We are all biased. We’re all racist and bigoted in ways that we wish we weren’t, in ways that we don’t even know. We know this in aggregate, though, because sociologists have consistently demonstrated it with experiments where they send out a bunch of job applications, equally qualified, but some with white-sounding names and some with black-sounding names. The results are always disappointing, always.
So we are the ones that are biased, and we are injecting those biases into the algorithms by choosing what data to collect, like I chose not to think about ramen noodles — I decided it was irrelevant. But by trusting the data that’s actually picking up on past practices and by choosing the definition of success, how can we expect the algorithms to emerge unscathed? We can’t. We have to check them. We have to check them for fairness.
The good news is, we can check them for fairness. Algorithms can be interrogated, and they will tell us the truth every time. And we can fix them. We can make them better. I call this an algorithmic audit, and I’ll walk you through it.
First, data integrity check. For the recidivism risk algorithm I talked about, a data integrity check would mean we’d have to come to terms with the fact that in the US, whites and blacks smoke pot at the same rate but blacks are far more likely to be arrested — four or five times more likely, depending on the area. What is that bias looking like in other crime categories, and how do we account for it?
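A sketch of what that data-integrity correction might look like, with illustrative numbers standing in for the real figures she cites: measure each group’s enforcement multiplier (arrests per unit of actual use) and deflate arrest counts by it before training.

```python
# Illustrative rates only; O'Neil's point is equal marijuana use but
# far higher arrest rates for blacks (4-5x depending on the area).
usage_rate = {"white": 0.13, "black": 0.13}        # roughly equal use
arrests_per_1000 = {"white": 2.0, "black": 8.0}    # observed, biased

# Enforcement multiplier: arrests generated per unit of actual offending.
enforcement = {g: arrests_per_1000[g] / usage_rate[g] for g in usage_rate}
baseline = enforcement["white"]

# De-biased signal: arrest counts under equal policing intensity.
adjusted = {g: arrests_per_1000[g] * baseline / enforcement[g]
            for g in arrests_per_1000}
print(adjusted)  # {'white': 2.0, 'black': 2.0}: the gap was policing
```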
Second, we should think about the definition of success, and audit that. Remember the hiring algorithm we talked about? Someone who stays for four years and is promoted once? Well, that is a successful employee, but it’s also an employee who is supported by their culture, so the definition itself can be quite biased. We need to separate those two things. We should look to the blind orchestra audition as an example. That’s where the people auditioning are behind a sheet. What I want to focus on there is that the people who are listening have decided what’s important and what’s not important, and they’re not getting distracted by the rest. When blind orchestra auditions started, the number of women in orchestras went up by a factor of five.
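In code, the blind-audition idea is just deciding up front which fields the scorer may see and stripping everything else. A minimal sketch with hypothetical field names:

```python
# Fields the evaluators decided matter; everything else stays behind
# the sheet. All field names are hypothetical.
RELEVANT = {"sight_reading", "intonation", "rhythm"}

def behind_the_sheet(candidate: dict) -> dict:
    """Pass through only the fields declared relevant in advance."""
    return {k: v for k, v in candidate.items() if k in RELEVANT}

candidate = {"name": "...", "gender": "...", "alma_mater": "...",
             "sight_reading": 9, "intonation": 8, "rhythm": 9}
print(behind_the_sheet(candidate))
# {'sight_reading': 9, 'intonation': 8, 'rhythm': 9}
```

The hard part in practice is that stripped fields often leak back in through correlated proxies, which is why the audit can’t stop here.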
Next, we have to consider accuracy. This is where the value-added model for teachers would fail immediately. No algorithm is perfect, of course, so we have to consider the errors of every algorithm. How often are there errors, and for whom does this model fail? What is the cost of that failure?
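A sketch of that per-group error check on synthetic predictions. A respectable overall accuracy can hide a model that fails mostly on one group:

```python
# (group, true_label, predicted_label): synthetic, for illustration.
rows = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]

def error_rate(rs):
    return sum(t != p for _, t, p in rs) / len(rs)

print(f"overall error = {error_rate(rows):.0%}")  # 25%: sounds tolerable
for grp in ("A", "B"):
    sub = [r for r in rows if r[0] == grp]
    print(grp, f"error rate = {error_rate(sub):.0%}")
# A: 0%, B: 50%. Every error lands on group B, and "for whom does this
# model fail?" is the question the overall number never answers.
```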
And finally, we have to consider the long-term effects of algorithms, the feedback loops they engender. That sounds abstract, but imagine if Facebook engineers had considered that before they decided to show us only things that our friends had posted.
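A toy feedback loop makes this concrete. This is in no way Facebook’s actual system; it just shows how “show people what they already engage with” narrows a feed over time:

```python
import random
random.seed(2)

topics = ["politics", "sports", "science", "art", "travel"]
interests = {t: 0.2 for t in topics}  # start with broad tastes

for step in range(50):
    shown = max(interests, key=interests.get)    # show the top topic only
    engaged = random.random() < interests[shown] + 0.3
    if engaged:
        interests[shown] += 0.05                 # engagement reinforces itself
print({t: round(w, 2) for t, w in interests.items()})
# One topic's weight keeps growing while the rest never get a chance.
# The loop is invisible in any single day's accuracy metric.
```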
I have two more messages, one for the data scientists out there. Data scientists: we should not be the arbiters of truth. We should be translators of ethical discussions that happen in larger society.
And the rest of you, the non-data scientists: this is not a math test. This is a political fight. We need to demand accountability for our algorithmic overlords.
The era of blind faith in big data must end.
Thank you very much.
I used to have respect for O’Neil. I believe I linked to her once or twice throughout the years. I spent much of my two decades in the working world building algorithms, some looking at people’s behavior, some not.
Here’s where Cathy and a lot of people go wrong with one of the examples she brings up. Say you have two neighborhoods, A and B, and only one crime: homicide. (Let’s keep this simple.) And let us also say that, relative to neighborhood B, neighborhood A has a lot of homicides per capita.
Neighborhood A gets the brunt of the police attention, presumably because the algorithm tells the cops that there are more homicides in neighborhood A. This could be viewed as profiling, or racism, or some sort of discrimination if there are differences between the population of A and B.
But… the process goes on. And the homicide rate in neighborhood A continues to be elevated. Now, this could be because of a few factors:
1. The homicidal tendencies are the same in both neighborhoods, but the cops keep arresting the wrong people in neighborhood A so the true killers remain free and kill people
2. People from neighborhood B sneak into neighborhood A and kill people there
3. The higher arrest rates and other police activity in neighborhood A so enrage the residents that they start killing their neighbors
4. For whatever reasons, the homicidal tendencies are greater per person in neighborhood A than B
I can construct scenarios where reasons 1 – 3 have non-negligible probabilities to someone who is standing on one leg and squinting. I believe that it helps if that someone has also been bleeding from an untreated but severe head wound for a couple of hours.
The reality is, though, that if the algorithm doesn’t have any logical errors and was made by someone with decent intentions, it might make sense to listen to what it says. Cathy O’Neil believes increased police activity in neighborhood A is discriminatory. The algorithm has rather correctly figured out how to reduce the homicide rate, which is of greatest benefit to the residents of neighborhood A, who suffer the most from it.
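One reading of scenario 4 can be sketched with invented rates. Unlike the drug-arrest data in O’Neil’s example, a homicide is recorded whether or not an officer is standing nearby, so the counts are not patrol-generated and allocation by observed homicides tracks the real disparity:

```python
import random
random.seed(3)

true_rate = {"A": 0.08, "B": 0.02}   # invented per-capita homicide risk
population = {"A": 1000, "B": 1000}

# Homicides are observed independently of where patrols go.
homicides = {n: sum(random.random() < true_rate[n]
                    for _ in range(population[n])) for n in ("A", "B")}
total = sum(homicides.values())
patrols = {n: round(100 * homicides[n] / total) for n in ("A", "B")}
print(homicides, patrols)  # roughly {'A': 80, 'B': 20} -> an 80/20 split
# Here the data reflects the world, not the policing, so following the
# algorithm concentrates protection where the victims actually are.
```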
As to Cathy O’Neil’s example about drugs… see the first table in my previous post. Once again, Cathy O’Neil sees victims of discrimination, where an algorithm sees ways to save lives.
What is unsaid in O’Neil’s statements is this: there is a tradeoff between completely blind and random behavior and people getting victimized. When you err on the side of blind and random behavior, you benefit the victimizers and create more victims. But that doesn’t sound anywhere near as nice as O’Neil’s message.
I think you are giving that blue-haired lady too much credit. Her speech is all over the place: algorithms are good, algorithms are bad, opinion is good, data is bad, data is good, opinions are bad, we should audit them (duh), we should check them for integrity (duh), we (data scientists) “should not be the arbiters of truth. We should be translators of ethical discussions that happen in larger society.” Whatever the heck that may mean.
No one has been complaining about the police using GIS systems to figure out where to send police on patrol. If nothing else, that got police forces taking black on black crime seriously, rather than just ignoring it and only reacting when a white person was the victim. (That’s what they did until fairly recently.)
What O’Neil is complaining about is that it is too common for algorithms to simply reinforce the status quo for better or worse, often for worse. For example, a white person arrested with drugs is much more likely to be charged with a misdemeanor as opposed to a felony than a black person with the same drugs. An algorithm would correctly determine that blacks are more likely to be guilty of felony drug possession than whites. The problem comes in when prosecutors use this result in determining when to prosecute and for which level of crime. In theory, the algorithm should only be concerned with the drug, the probability of it causing harm and the quantity. Instead, it would discourage the prosecution of whites while increasing the prosecution of blacks guilty of the exact same crime.
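That label-bias mechanism is easy to show on synthetic cases (the 2x charging gap is invented for illustration): identical drugs and quantities, different felony rates by race, so a model trained on charge outcomes learns the prosecutor’s pen rather than the crime.

```python
import random
random.seed(4)

cases = []
for _ in range(10_000):
    race = random.choice(["white", "black"])
    quantity = random.uniform(1, 10)   # same drug, same range of amounts
    # The label is the CHARGE, not the conduct: blacks drawn a felony
    # twice as often for identical conduct (invented factor).
    felony_odds = 0.1 * quantity * (2.0 if race == "black" else 1.0)
    cases.append((race, quantity, random.random() < felony_odds))

for r in ("white", "black"):
    sub = [felony for race, _, felony in cases if race == r]
    print(r, f"felony rate = {sum(sub) / len(sub):.0%}")
# ~55% vs ~82%: the model's "correct" finding reproduces the charging
# bias, and using it to guide prosecution closes the loop.
```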
There is a blind acceptance of algorithms as somehow being objective and fair when in fact they simply embed the prejudices of those specifying them, writing them and using them. Further, those prejudices are deemed trade secrets and beyond inspection.
Look at that teacher evaluation example. The algorithm is clearly worthless. If teachers can be rated as good or bad, there should be some correlation between the quality of teaching at two grade levels. Instead, we see an algorithm that is not much better than a random number generator, and it is being used as an excuse to fire teachers.
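That sanity check is easy to sketch: if value-added scores measured a stable quality, a teacher’s two scores should correlate. With pairs of scores that share nothing, as Rubenstein’s scatter suggested, the correlation comes out near zero:

```python
import random
random.seed(5)

# Stylized stand-in for the 665 teachers with two scores: pairs drawn
# independently, the way the actual scatter plot looked.
pairs = [(random.uniform(0, 100), random.uniform(0, 100))
         for _ in range(665)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

xs, ys = zip(*pairs)
print(f"r = {pearson(xs, ys):.2f}")  # near 0: a random number generator,
# not a stable measure of teacher quality
```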
Sure, it is possible to come up with cases where big data algorithms do useful things in a reasonable fashion, but they are currently being used as an excuse to do stupid stuff stupidly and then claim, “the computer made me do it.”
The algorithm debate is really just a reiteration of an old computer observation: garbage in, garbage out. True then; true now.