FUTURE SHOCK: Artificial Intelligence

machine learning to predict the likely future behaviour of offenders. “The classic example of this is COMPAS, in the US, which is used for recidivism. They took lots and lots of penal records and they were looking at who was likely to recidivate, and then gave the results to the judges. Studies have shown that judges have a high degree of personal bias in sentencing. COMPAS was given to judges as a tool to try to alleviate this problem. However, COMPAS has been shown to be horribly, horribly racist and biased, because of the data that was given to it for training.” The data given to a machine learning algorithm during its training ultimately guides the decisions it will make. The way in which that data can introduce bias is complex and hard to predict. “It’s a field of work that I think is incredibly interesting right now. It’s called FAT: fairness, accountability and transparency. It’s basically saying there’s a problem if you’ve got gobs of data and think that if you can just plug it into a machine and get some interesting answer out of it, then it’s useful. One of the problems with machine learning is that people measure accuracy, precision, recall and mean squared error, but they’re all mathematical measurements,” Chapman says.
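
Chapman’s point is easy to make concrete. The sketch below is purely illustrative: the labels are made up and the use of scikit-learn is an assumption, not something the article mentions. It computes the four measurements she names, and each is a pure function of predictions against ground truth, blind to who the errors actually affect.

```python
# A minimal sketch of the measurements Chapman names, on hypothetical
# labels. Each metric is a pure function of predictions vs ground
# truth: a mathematical measurement, with no notion of fairness.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, mean_squared_error)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ground-truth outcomes
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model predictions

print("accuracy: ", accuracy_score(y_true, y_pred))      # 0.75
print("precision:", precision_score(y_true, y_pred))     # 0.75
print("recall:   ", recall_score(y_true, y_pred))        # 0.75
print("MSE:      ", mean_squared_error(y_true, y_pred))  # 0.25
```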

The challenge, she feels, is not only teaching a machine to tell apples from oranges, but also ensuring it is fair, or that its results are used fairly. “Fairness is a social and ethical concept; in machine learning we often treat it as a statistical concept. Often, when we abstract these problems, the things we are measuring are accuracy or other mathematical concepts, but ethics and fairness are not mathematical concepts. We need to be able to correct for this.” In 2019, this is still a work in progress. “That’s where the research community is right now. We know certain types of bias. The question is, can we predict how data with different types of known biases affects different types of question?” Apologising for the Rumsfeldism, Chapman admits: “There are always going to be those unknown unknowns, but the community is trying to understand. By looking at the data itself, can we at least advise on how it is biased and how that’s likely to affect some of the outcomes?”
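
The gap Chapman describes can be shown in a few lines. One common statistical proxy for fairness is demographic parity: the difference in positive-prediction rates between groups. The sketch below is an illustration, not anything from the article; the groups, predictions and choice of metric are all assumptions. The number it produces is pure arithmetic, while judging whether that gap is acceptable remains the ethical question the maths cannot answer.

```python
# One common statistical proxy for fairness: demographic parity,
# i.e. comparing positive-prediction rates across groups.
# Group labels and predictions below are hypothetical.
from collections import defaultdict

def positive_rate_by_group(groups, preds):
    """Fraction of positive predictions within each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, preds):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

groups = ["A", "A", "A", "B", "B", "B"]  # hypothetical group membership
preds  = [1, 1, 0, 1, 0, 0]              # hypothetical model outputs

rates = positive_rate_by_group(groups, preds)
gap = abs(rates["A"] - rates["B"])
print(rates)               # approx. {'A': 0.667, 'B': 0.333}
print("parity gap:", gap)  # approx. 0.333; the maths stops here,
                           # judging the gap is the ethical part
```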

REGULATION

One thing that comes up in any business discussion of AI’s risks and benefits is a fear that the heavy hand of regulation might imminently descend. InfoNation’s Coleman argues that the UK government is quite progressive in this area. Professor Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, sits on a UK AI and ethics panel, and the UK is arguably at the forefront of governmental commitment to the issue. Coleman explains: “The problem you’re going to have is that, in the same way the internet defies jurisdictions, the web is where this stuff is going to be working. You’re not going to be able to control it at a jurisdictional level. It’s not like encryption; the pace at which these machines will learn and make decisions and do things is such that you can’t account for it in regulation. You need guiding principles and you need a big enough stick. You do need a form of regulation, but it needs to be different. There are huge amounts of excitement about the value and the intellectual property created. The people in government and the regulators are always going to be behind. The only way to learn is through trial and error. What we want to prevent are the errors that are catastrophic.”

Chapman’s focus is on exactly that sort of prevention. “Ultimately, I think machine learning is going to be much like a car,” she says. “It’s a powerful, useful tool, but we’re not at self-driving level yet. I think we as a society, and government regulators, are going to have to think through what happens when things go wrong.” Describing her position as a middle ground, Chapman concludes: “The community is starting to become aware of the problem that, socially, it is unacceptable for a tool to do certain things. But we want the tools. A chunk of research is figuring out how to catch and change what we do in machine learning, so problems are less likely to happen.”
