Insuring AI: Writing the robot rulebook

By David Benady

12th October 2018

Insurers are grappling with how to assign liability in accidents and incidents involving robots.

Robot owners beware: insuring automatons against accidents or malfunctions is becoming essential, but insurers have yet to establish a clear set of rules on how to assign liabilities and make payments.

Before a robot gets to work – as an autonomous vehicle, manufacturing aid or virtual assistant – owners, developers and licensors must prepare for the worst. Most obviously, they might need to pay out significant sums in the event of an accident. The creators of a robotic arm that crushed a Michigan factory worker to death in 2017 are being sued for negligence, along with the factory owners and others involved. Depending on how blame is apportioned, the case could involve substantial payouts.

But how will insurers write a policy to cover all eventualities? And how will liability be assigned?

There is a mounting caseload of injuries and deaths caused by robots, and it demands new ways of working out liability and pricing premiums. Take surgical robots. Used in hospitals worldwide, they shadow the movements of human surgeons and compensate for the tremors and jitters of the specialists as they carry out their sensitive work. But who is responsible for a slip of the knife? Should the manufacturer of the robot be held accountable? Or does the fault lie with the surgeon who led the operation, or with the hospital where it was carried out? Unless the fault can be conclusively tied to the software, rather than the movements of the surgeon, assigning liability is very difficult.

Insurers are grappling with how they will underwrite insurance policies and assess the risks of accidents related to robotics. Writing in The Society for Computers and Law Journal, John Buyers – a commercial outsourcing and information technology specialist at law firm Osborne Clarke – said that while the existing liability frameworks deal comfortably with traceable defects (machine decisions that can be linked to defective programming or incorrect operations), “they begin to fail, however, where defects are inexplicable or cannot be traced back to human error.”

A crucial question is whether the AI or robot is defined as a product or a service. Under the US system, for example, a robot would be subject to strict liability in the case of a product fault: the insurance policy pays out regardless of whether anyone is to blame. But if the AI system is defined as a service, the insurer could argue for a payment based on negligence, which tends to be less costly. And proving negligence is complex: the aggrieved party would have to show that the AI failed to perform as a ‘reasonable person’ or a ‘reasonable computer’ would, which requires knowledge of the processes behind the creation of the AI.

To defend against a claim of negligence, the AI developer would need to build in a system of analytics to explain how decisions were made, something like an aircraft’s black box recorder. Still, many believe that being defined as a service, rather than a product, would be better for the industry: product fault liability could deter the development of AI and robots, due to fears over the potential for larger payments.
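As an illustration of what such a ‘black box’ might look like in practice, the sketch below logs each decision an AI system takes, along with the software version and a fingerprint of the inputs, before the action is carried out. It is a minimal sketch under assumptions: the BlackBoxRecorder class, its fields and the surgical-robot example values are invented for illustration, not taken from any real system described in this article.

```python
# Illustrative sketch only: a minimal "black box" audit log for an AI system's
# decisions. All names and fields here are hypothetical.
import hashlib
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class DecisionRecord:
    timestamp: float       # when the decision was made
    model_version: str     # which model/software build was running
    inputs_digest: str     # hash of the input data (avoids storing raw data)
    decision: str          # the action the system chose
    confidence: float      # the system's own confidence score
    operator: str          # human in the loop, if any


class BlackBoxRecorder:
    """Append-only log of decisions, written before the action is executed."""

    def __init__(self, path: str = "decisions.log"):
        self.path = path

    def record(self, model_version: str, inputs: bytes, decision: str,
               confidence: float, operator: str = "none") -> DecisionRecord:
        rec = DecisionRecord(
            timestamp=time.time(),
            model_version=model_version,
            inputs_digest=hashlib.sha256(inputs).hexdigest(),
            decision=decision,
            confidence=confidence,
            operator=operator,
        )
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(rec)) + "\n")
        return rec


# Hypothetical example: log a surgical robot's compensation move before it is applied.
recorder = BlackBoxRecorder()
recorder.record(model_version="arm-ctrl-2.3.1",
                inputs=b"<sensor frame bytes>",
                decision="damp tremor, scale motion 0.4x",
                confidence=0.97,
                operator="lead surgeon")
```

Writing the record before the action is executed mirrors the flight-recorder principle: the log survives even if the action itself goes wrong, giving insurers and courts something to examine after the fact.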

One of the first pieces of legislation to deal with AI insurance was recently passed in the UK. The Automated and Electric Vehicles Act, which became law in July, attempts to set the boundaries for how self-driving vehicles should be insured. Under the Act, insurers must deal with all claims arising while the autonomous vehicle is in full control. However, they have the right to limit liability when the policyholder has failed to keep the vehicle’s software updated, or when the software has been modified without the manufacturer’s permission.
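As a rough illustration of that split, the toy sketch below encodes the rule as summarised above in a single decision function. It is not a statement of the Act itself: the ClaimContext fields, the insurer_position function and its outcome strings are all invented for illustration.

```python
# Toy sketch of the liability split described above, as summarised in this
# article. Field names and outcomes are illustrative, not legal advice.
from dataclasses import dataclass


@dataclass
class ClaimContext:
    vehicle_in_full_control: bool      # was the vehicle driving itself?
    software_up_to_date: bool          # did the policyholder keep the software updated?
    unauthorised_modifications: bool   # was the software altered without the manufacturer's permission?


def insurer_position(ctx: ClaimContext) -> str:
    if not ctx.vehicle_in_full_control:
        return "handled as a conventional driver policy"
    if not ctx.software_up_to_date or ctx.unauthorised_modifications:
        return "insurer handles the claim but may limit liability against the policyholder"
    return "insurer deals with the claim in full"


print(insurer_position(ClaimContext(True, True, False)))
# -> insurer deals with the claim in full
```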

The biggest change from existing policies is that the vehicle itself must be insured, rather than the driver: a reflection of the autonomous nature of the platform. But this is a highly complex area. Autonomous vehicles are still used in cooperation with humans, switching between fully autonomous driving, human-assisted driving and full human control. It will be many years before self-driving vehicles are fully autonomous, meaning that both the vehicle and the driver will require insurance.

To add to the complexity, there is an interim period of a few seconds during the handover from human to computer, and vice versa. Who is responsible for an accident occurring in these moments? An autonomous vehicle might also be programmed to take actions that reduce loss of life, opting to crash into a wall and sacrifice the driver rather than kill three pedestrians. This will need to be addressed in any vehicle insurance policy.

Robot insurance will develop through trial and error over the coming years. Every crash of an autonomous vehicle, slip of a surgeon’s scalpel or errant movement of a robotic arm will guide insurers and governments as they establish the ground rules.

The manufacturers themselves could drive much of this process. Tesla, the electric car company whose vehicles are perhaps the closest to self-driving on today’s roads, has launched its own insurance scheme, arguing that traditional insurers overestimate the risks posed by its vehicles. Elon Musk, the company’s chief executive, declared in 2017:

“If we find that the insurance providers are not matching the insurance proportionate to the risk of the car, then, if we need to, we will in-source it.”

In 1983, a Soviet early-warning system reported that five US nuclear missiles had been launched at the USSR. The Soviet officer in charge decided not to retaliate, avoiding a nuclear war. He made the very human judgement that “when people start a war, they don’t start it with only five missiles.” Thankfully, the officer was correct: the satellite had misread a reflection of the sun from clouds. But an AI system may well have lacked that human judgement and triggered a nuclear war, according to Professor John Kingston, a specialist in knowledge-based AI at the University of Brighton:

“If an AI system had been in charge of the Soviet missile launch controls that day, it may well have failed to identify any problem with the satellite, and launched the missiles,” he wrote in a 2016 research paper on AI liability. “It would then have been legally liable for the destruction that followed, although it is unclear whether there would have been any lawyers left to prosecute the case.”

David Benady is a business writer and freelance journalist specialising in technology, marketing and media.