If a robot hurts or kills someone, is the robot responsible?

As robotics and AI become an increasingly potent force in society, previously abstract questions about how we should regulate them now need concrete answers.

As self-driving vehicles take to the roads, and organisations and governments continue to invest in collaborative robots to work autonomously alongside humans, we need to decide who should be responsible for the decisions that robots make.

An attempt to address this conundrum was made last year by the European Parliament, which passed a resolution suggesting robots be granted ‘legal status’. But recently, members of the European Council (responsible for defining the EU’s overall political agenda) and others have penned an open letter on the subject.

It strongly cautioned against granting robots legal rights, and suggested that proponents of legal status for robots might have ulterior motives for laying responsibility at the feet of machines, rather than their manufacturers.

What is the resolution?

The resolution was passed last year, when the European Parliament voted to grant legal status to ‘electronic persons’. Drafted by Mady Delvaux, MEP and Vice-Chair of the European Parliament’s legal affairs committee, the resolution aimed to create a set of unified laws to prepare European countries for the entry of AI and robotics into everyday life, and to address concerns that autonomous machines might harm their human counterparts.

Lawmakers called for legal recognition of robots as a way to hold them accountable for damage they might cause, particularly to clarify liability laws surrounding self-driving cars.

“At least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart, autonomous decisions or otherwise interact with third parties independently,” the resolution stated.

Proponents of the resolution have been quick to clarify that the legal status of robots would be similar to the legal personhood granted to businesses, which allows them to sign contracts or be sued, but would not give robots human rights.

Science fiction versus fact

The open letter, which was signed by AI thought leaders, experts and CEOs from around the world, raised a number of concerns about the resolution.

Firstly, the signatories argue, the resolution speculates that robots might have the autonomy to make complex choices and even make mistakes – an assumption that drastically overestimates the abilities of today’s machines.

“From a technical perspective, this statement offers many biases based on an overvaluation of the actual capabilities of even the most advanced robots,” the letter stated, including “a superficial understanding of unpredictability and self-learning capacities, and a ‘robot’ perception distorted by science-fiction and a few recent sensational press announcements.”

Indeed, the European Parliament’s resolution begins with references to Mary Shelley’s Frankenstein, the myth of Pygmalion and other “androids with human features”.

Signatories of the open letter have also pointed to Sophia, the humanoid robot that was granted citizenship by Saudi Arabia.

Noel Sharkey, co-founder of the Foundation for Responsible Robotics and one of the letter’s signatories, expressed his concerns about the impact of ‘show robots’ like Sophia on law and policy makers.

“It’s very dangerous for lawmakers. They see this and they believe it, because they’re not engineers and there is no reason not to believe it,” he said in an interview with Politico.

An out for manufacturers?

Signatories of the letter suggested granting legal status to robots would ultimately serve manufacturers looking to absolve themselves from blame in the event of an accident.

“By adopting legal personhood, we are going to erase the responsibility of manufacturers,” said Nathalie Nevejans, a French law professor at the Université d’Artois and one of the letter’s architects.

However, while the letter opposes the Parliament’s proposal, it does advocate for “unified, innovative and reliable laws” to regulate AI and robotics, especially as more semi-autonomous and autonomous robots are likely to reach the market in the coming years.

Do you think robots in Australia should be given legal rights? Or would that be a huge mistake? Let us know in the comments, or join us for a panel discussion on the topic at the Australian Engineering Conference in Sydney.  