
When AI learns bad habits

New types of risk can emerge when artificial intelligence emulates human decision-making

Artificial intelligence can reduce business risk by curbing the scope for erratic human decision-making. However, the process of building an AI can lead to very human risks.

Typically, AIs learn by ingesting a vast database of human decisions and working out what factors were used to make them. Unfortunately, we’re not all angels. The latest demonstration of this came in September, when Amazon was revealed to have dramatically scaled back a trial AI used to scan resumes for talent. Trained on the company’s past hiring decisions, the AI taught itself to downgrade resumes that included the word ‘women’s’, gave a negative weighting to graduates of two all-female colleges, and upgraded candidates who used certain verbs employed more frequently by men, according to a Reuters report.
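To make the mechanism concrete, here is a minimal sketch, with invented resumes and labels rather than Amazon’s actual data: a text classifier trained on skewed historical hiring outcomes assigns a negative weight to a gendered token simply because it co-occurs with rejections.

```python
# A minimal sketch, with invented resumes and labels (not Amazon's system),
# of how a classifier trained on biased historical hiring outcomes learns
# a gendered token as a negative signal.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "led engineering team and executed product launch",      # historically hired
    "captain of chess club, executed project roadmap",       # historically hired
    "women's soccer team captain, managed student society",  # historically rejected
    "women's coding society founder, organised hackathons",  # historically rejected
]
hired = [1, 1, 0, 0]  # biased past decisions, not ground-truth merit

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: 'women' ends up among the most negative
# coefficients purely because it co-occurs with rejections in the training data.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

Because this toy model is linear, its prejudice is at least visible in its weights; detecting the same pattern in a more opaque system is far harder, as discussed below.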

This type of discrimination is typically hidden and very difficult for candidates to challenge, as they only learn the outcome of their own applications. In the US, civil rights groups are stepping up their scrutiny of AI-powered recruitment for this reason, potentially putting companies at risk of having to pay substantial damages for historical recruitment policies.

There is a dismal scenario in which AI entrenches discrimination, denying loans to ethnic minorities or giving preference to white males in the jobs market. In the criminal justice system it could unfairly target minorities or deny rights to disabled people. Similar risks could surface in other areas: an AI controlling a complex industrial process could emulate – and therefore expose – human operators who cut corners on safety procedures, while an AI adjusting prices for an online retailer might pursue predatory pricing policies that could open its creator up to legal action if discovered by competitors.

To combat this, tech giants are developing software that identifies failings in AI systems and attempts to rectify them. Perhaps one day this could lead to ‘discrimination warnings’ alongside decisions made by bots: notices informing customers that a bot has a tendency to turn down loan applications from ethnic minorities, or stating the statistical likelihood that it will recommend a man for a job rather than a woman.
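What might such a warning actually report? One candidate statistic, sketched below with invented numbers, is the ratio of selection rates between groups, sometimes called the disparate-impact ratio.

```python
# A sketch of the statistic such a warning might surface: selection rates per
# group and their ratio (the 'disparate impact' measure). All numbers invented.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

men_recommended   = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = bot recommended the candidate
women_recommended = [1, 0, 0, 1, 0, 0, 0, 1]

rate_m = selection_rate(men_recommended)    # 0.75
rate_w = selection_rate(women_recommended)  # 0.375
print(f"men: {rate_m:.0%}, women: {rate_w:.0%}, ratio: {rate_w / rate_m:.2f}")
# A ratio well below 1.0 (0.8 is a common rule-of-thumb threshold in US
# employment guidance) is the kind of figure a warning could display.
```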

There is no shortage of other examples. It took just 24 hours for Twitter users to train Microsoft’s Tay chatbot to spew racist bile. Users tweeted racist and sexist comments at the AI, and the resulting fiasco produced the memorable Daily Telegraph headline: “Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 hours.”

In the Amazon case, it was at least possible to peer inside the AI to see what lessons it had learned from studying human decisions. Some machine learning algorithms, however, cannot expose their reasoning in the same way. Data comes in. Decisions come out. But the connection between the two cannot be expressed in a way that humans understand.

Writing in Fast Company, Gary Smith, Fletcher Jones professor of economics at Pomona College, said: “In the age of AI and big data, the real danger is not that computers are smarter than us, but that we think computers are smarter than us and therefore trust computers to make important decisions for us.”

IBM’s cloud-based ‘Trust and Transparency’ software attempts to shed some light on the workings of the AI ‘black box’, explaining how AI decisions are made and detecting any bias. Suppose, IBM says, the software finds bias against a minority group in a home loan data set and identifies its source as specifically involving women of a certain age and ethnic background. A data scientist could then use this knowledge to rectify the bias – for instance, by increasing the amount of data on the group that suffers the discrimination – leading to a more balanced result.
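The rebalancing step lends itself to a generic sketch. What follows illustrates the technique rather than IBM’s product: an invented loan data set in which a flagged group’s qualified applicants were historically rejected at random, where oversampling that group’s approved records and retraining narrows the approval-rate gap.

```python
# A generic sketch of the rebalancing step described above (an illustration of
# the technique, not IBM's product). The data, group split, and oversampling
# factor are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic loan records: column 0 is a credit score, column 1 flags the
# group the historical decisions discriminated against (1 = that group).
X = rng.normal(size=(400, 2))
X[:, 1] = (rng.random(400) < 0.15).astype(float)

# Historical approvals: score-based, but half of the flagged group's
# qualified applicants were rejected anyway - the bias we want to remove.
y = ((X[:, 0] > 0) & ~((X[:, 1] == 1) & (rng.random(400) < 0.5))).astype(int)

def approval_gap(model, X):
    pred = model.predict(X)
    return pred[X[:, 1] == 0].mean() - pred[X[:, 1] == 1].mean()

before = LogisticRegression().fit(X, y)

# Rebalance: duplicate the flagged group's approved records so the model
# sees more positive examples from the under-represented group, then retrain.
boost = (X[:, 1] == 1) & (y == 1)
X_bal = np.vstack([X, np.repeat(X[boost], 5, axis=0)])
y_bal = np.concatenate([y, np.repeat(y[boost], 5)])
after = LogisticRegression().fit(X_bal, y_bal)

print(f"approval-rate gap before: {approval_gap(before, X):.2f}, "
      f"after: {approval_gap(after, X):.2f}")
```

Reweighting records is a common alternative to duplicating them; either way, the point is that the fix happens in the training data rather than in the deployed model’s output.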

Perhaps one day, we will all need to be equipped with AI bias checkers to ensure computerised decisions that affect us are made fairly. Intelligence, artificial or not, is the best defence against bad decision-making.

David Benady is a business writer and freelance journalist specialising in technology, marketing and media.