When is a robot a person?

By David Benady

11th October 2018

As legislators debate the legal status of AI, some have suggested the creation of ‘electronic persons’.

Where does the blame lie when artificial intelligence goes wrong? Its creator? Its owner? Or could the AI itself have legal liability?

Legislators are trying to resolve the difficult question of the legal status of AI as they seek to create workable laws that won’t hinder the benefits the technology can bring. One suggestion that surfaced in a European Parliament report in 2017 was the idea of giving sophisticated self-learning robots a legal status as ‘electronic persons’ that would make good any damage they cause. This would put them on a par with corporations, which are also ‘persons’ in the UK legal system.

But there is strong opposition to the idea. A group of 156 AI experts from 14 countries, including chief executives and professors of technology and law, has written to the European Commission to oppose giving robots this status. Their argument? Making the AI responsible might let its creators off the hook.

The issue is highly complex. A smart robot that uses machine learning continuously alters its own functions and abilities over time. A self-driving car, for instance, is programmed to learn how to navigate city streets, using input from map providers, traffic lights and sensors in other vehicles. So how much time must pass before a manufacturer can claim that a car that caused an accident is no longer the one that left the factory? Could a robot, like a human, reach an ‘age of majority’ at which it becomes responsible for its own actions?

The advantage of bestowing personhood on a robot or AI, and making it responsible for errors arising from its operations, is that it would allow insurers to create ‘strict liability’ policies. These make a person responsible for the consequences of an activity without any need to show fault or criminal intent, so insurers would pay out in the event of an incident without having to prove negligence.

“What the EU Parliament was talking about was registering smart robots such as autonomous vehicles – so if it makes a wrong turn or causes harm, people can sue the machine because it is protected by insurance. There would be a financial product that people can claim against,” says Chris Holder, a specialist IT outsourcing lawyer at law firm Bristows. Once given an ‘electronic personality’, a robot would be registered and would assume liability for its actions. Still, Mr Holder accepts that legal personhood for a robot could lead to absurd outcomes.

Supporters of the personhood idea have sarcastically rebutted opponents by pointing out that it does not mean robots will be able to get married. Even so, other peculiar outcomes could ensue. Robots and AIs are increasingly involved in creating intellectual property – logos, illustrations and pieces of music – as well as in developing inventions. Patents are recognised as belonging to persons, such as individuals or corporations. With legal personhood, AIs could ‘own’ their patents and intellectual property in the same way as corporations do. Would that mean robots could sue those who use their patents without permission?

The European Parliament floated the electronic personality idea as one possible option for addressing the legal liability of robots and AI. It is not a definite plan, and lawmakers will clearly continue to struggle to define a legal status for self-learning software. Ultimately, each type of AI may need its own legal definition rather than a single overall one. That is going to create some interesting court cases in the years to come.

David Benady is a business writer and freelance journalist specialising in technology, marketing and media.