
Mitigating the internal risks of AI

Artificial intelligence (AI) is being deployed at companies across multiple business functions. While the advantages it can bring are countless, it also exposes organisations to new and significant risks. So how can businesses manage the internal risks of AI?

The fascination with AI is enormous. Around two thirds of executives say AI is now extremely or very important to their company, and more than eight out of ten expect this to be the case three years from now, according to a recent report from consultancy Cognizant.

But for all the promised benefits of AI, there are also risks. Whether it’s the Microsoft chatbot that learnt to be a misogynistic racist, the Amazon recruiting engine that downgraded female applicants or the US predictor of future criminals that was racially biased, AI can go very wrong, very quickly indeed.

It is a common issue with technology of all kinds that innovation moves faster than the risk and control requirements of the business. The risks that poor-performing AI poses to corporate reputation, research and development spend, or even business opportunities need to be factored into overall decisions about AI implementation. And there is evidence that companies are wary of moving too far, too fast.


“AI is still at a relatively early stage in most organisations,” says Kirstin Gillon, technical manager in the IT faculty at the Institute of Chartered Accountants in England and Wales. “There is still lots of manual intervention.”

Many companies are still only trialling AI, testing it out in small pilot studies. Cognizant’s survey found that although two thirds of respondents knew about an AI project at their company, only 24 per cent knew of one that had been fully implemented. The real challenge of AI comes when you try to scale it up, and that is when some of the risks may become reality.

But there are ways to mitigate the risks, provided you are prepared to take the time to think clearly and logically, rather than being panicked into adopting AI for fear of missing out. One key to managing the risks is ensuring there are sufficient skills in the company to handle the technology.

“Skills are a big inhibitor [to implementation],” says Ms Gillon. “It’s too easy to go out and buy, but without the knowledge you just become reliant on the supplier.”

This raises a further risk, according to the Trust in Artificial Intelligence report from consultants KPMG. It says: “How much more difficult it will be to exit from a provider when it not only runs infrastructure or hosts applications, but hosts AI which is learning and changing over time; who owns the intellectual property when a third-party AI system has learnt from your data?”

Mitigating the complications of a long-running AI programme means a clear understanding of ownership, whether of the system, the learning or the data.

AI is, of course, only as good as the data from which it learns, and many organisations are still struggling to organise their data in a form that is good enough for high-quality learning by the machines. Data input could well be one of the biggest challenges in the implementation of AI, partly because businesses are constantly changing, so there is no such thing as permanently clean data.

How do you ensure the unique culture of your company is reflected in an AI solution, particularly when that culture may be subject to change? Is it possible for AI to unlearn as well as to learn? Mitigating the risks here needs good governance and strong central power, with a laser-like focus on producing and maintaining the necessary high-quality data. There needs to be a governance structure that creates trust in AI.

More importantly, AI needs human oversight and supervision. Not only are humans needed to set the parameters of the AI in the first place, but there must also be the capacity for people to step in and make corrections when something unexpected or unpredictable occurs.

“Where the humans do not have the capacity to intervene in time because of a lack of retained expertise, or because of lacking automated safety stops to prevent things evolving too fast for humans to cope, the outcome could be disastrous,” according to KPMG’s Trust in Artificial Intelligence report.

The older the AI, the more complex it becomes, the harder it is to change and the higher the risk that bias or misinformation will slip through the net. Processes relying on AI need to comply with corporate policy and, if that policy changes or the application is subject to learning outside the organisation, it can be hard to backtrack.

These new AI risks demand new approaches to control, says KPMG. Key for auditors is the ability to validate AI decisions, which means being able to understand why a machine has made the decision or taken the action it has. And so we circle back to skills, not just in terms of the technology, but also in understanding the corporate strategy.

The need to secure executive talent led the list of challenges cited by respondents in Cognizant’s survey, though budgets and the interaction between different AI applications came close behind.

Despite all the risks, however, the game is likely to be worth the effort. AI’s promise may be yet to be fulfilled, but that is all the more reason to plan for future risks now.