
The Future Is Now: Legal Consequences of Electronic Personality for Autonomous Robots

Steven De Schrijver, Astrea

In this article, Steven De Schrijver at Astrea assesses the development of autonomous robots and the legal challenges that they pose. 

Recently, the European Parliament made waves in the legal community by adopting a resolution calling for legislation pertaining to robotics and artificial intelligence (the European Parliament resolution). Smart or autonomous robots – or whatever one wishes to call them – are no longer a thing of the future. Technology has evolved to the point where machines can learn from interaction with their environment and make autonomous decisions based on their observations. Self-driving cars are a hot topic in this regard, but they are merely the first result of rapidly evolving technological progress. Intelligent robots are being created that can make decisions that have not been pre-approved by their designers or their owners. This new technology gives rise to new legal issues, as the current legal system was not created with such technology in mind. An answer to these issues, some argue, lies in creating the “electronic person”: awarding legal personality to autonomous robots. 

The need for legal reform 

An important issue concerns liability, as current liability systems were created specifically for humans. When a robot makes an autonomous decision that causes harm, who is liable? Autonomous decisions by robots disturb the chain of causation. At least two answers to this question can be found if current Belgian liability law were applied. The first is that the owner is responsible for their property, for “things under custody” as stipulated in article 1384 of the Belgian Civil Code. This answer seems inadequate, however: if the robot can make autonomous decisions, can we really speak of “custody” when the owner has no control over the robot? A second option is to hold the manufacturer responsible, as the manufacturer bears both criminal liability for introducing unsafe products on the market and civil liability for defective products. But is a product really defective when a robot makes a mistake in a situation the manufacturer could never have foreseen? A third solution, which goes beyond current law, is to hold the robot itself accountable by giving it legal personhood, as discussed further below. 

Issues arise in other fields as well. For instance, are contracts made with robots binding? Can an owner be rightfully represented by their robot when that robot enters into a contract based on an autonomous decision? Another example can be found in intellectual property: it is unclear what happens when a robot makes creations based on its own findings after observing its environment. Who would hold the intellectual property rights? There are also issues pertaining to connectivity and data protection, not only surrounding the processing of data by robots, but also when it comes to human access to this data and the possible dangers of hacking.

If legislative solutions are not found for these problems, there will be a chilling effect on the development of new technology. One solution that has been proposed by the European Parliament, which would solve some of these issues, is awarding a separate legal personality to the robot. 

Why legal personality could be seen as a solution

The European Parliament has suggested the introduction of the legal status of “electronic person” for autonomous robots. In this way, similar to the legal personality of a corporation, robots could be held liable for their actions and enter into legal agreements. 

The creation of a separate status for the electronic person would introduce a distinct entity through which the actions of robots are controlled and managed separately from the owner. It would also clarify what happens when legal relationships are entered into with robots, defining the relationships both between third party and robot and between robot and owner (shareholder). Electronic persons show similarities with corporations, in the sense that both are means for their owners (shareholders) to achieve a particular purpose. They exist and are created solely for the benefit of their owner; their personhood is a fiction that merely exists to facilitate this. If one were to follow the existing example of legal persons, it is important to emphasise the role the owner plays in the creation of this electronic personality. The robot does not suddenly gain rights and obligations similar to those of humans; rather, the owner behind the robot sets up a legal fiction, of which he is in control, much like a (majority) shareholder. 

The paragraphs below analyse some pertinent legal consequences of making the establishment of an electronic person possible, by analogy with the legal provisions currently applicable to corporations. 

Legal consequences in Belgian law after implementation of legal personhood

Establishment of the electronic personality

The establishment of an electronic personality for robots could be done in a way analogous to incorporation. The owner of the robot would have to register the robot in a central register and prepare constitutional documents containing information on the owner and the capacity of the electronic person. This could be an effective way to constrain the actions of autonomous robots and to clarify liability. In particular, a clear demarcation of the robot’s capacity is important – much more so than for corporations – as the robot will make autonomous decisions that its owner cannot always foresee. 

The (mandatory) registration of robots could also prove useful for taxation purposes. The use of robots could be taxed, perhaps to fund any extra infrastructure needed to facilitate their use. Furthermore, robots could also be taxed for social security reasons, replacing the social security contributions the owner saves by using robots rather than human workers. 

A specific amount of capital could also be granted to, or required of, the electronic person, which it can then use to enter into legal agreements and to ensure it has the means to fulfil its obligations. This could be in addition to the mandatory insurance and the compensation fund suggested by the European Parliament resolution. If the electronic person becomes party to a contract, transactions can then be made under its name. However, this also means that it will have to be capable of owning assets and receivables in order to fulfil its contractual obligations. Creating a clear demarcation between the assets of the electronic person and those of the robot’s owner, in addition to clearly defining its capacity, seems an effective way to limit the owner’s liability: the owner sets out the limits within which the robot can operate, and these limits are also visible to third parties. 

(Non-contractual) liability 

The establishment of the electronic personality would be, according to the European Parliament resolution, paired with the creation of mandatory insurance. This scheme would be based on strict liability, where the insurer would pay whenever a causal link between injury and harmful behaviour of the robot can be established. This insurance scheme could of course also exist without the creation of a legal personality. However, with the electronic person being separate from the owner, some interesting remarks can be made.

A first remark concerns the payment of the insurance premium. The European Parliament’s report suggests that such a premium be paid by the producer of the robot. However, liability is not always clearly split between the producer and the owner of the robot, so both parties could be part of an insurance scheme covering (non-contractual) liability. The premium could also be paid by the electronic person itself rather than by the owner of the robot. While the starting capital of the electronic person is provided by the owner, once the robot starts generating profit or added value, that value can be used to pay the insurance premium, keeping the costs and gains related to the robot clearly separate from the owner. 

Additionally, if a clear division were made between owner and robot, this would also mean limited liability for the owner in situations not covered by insurance. The electronic person would only be required to pay insofar as it is solvent. In practice, this means that the owner of a robot is only liable for the robot to the extent of the capital that the owner has invested in the electronic person. 

In order to ensure protection for third parties, the European Parliament has suggested setting up an additional compensation fund. This fund would, alongside the mandatory insurance scheme, cover damage caused by the robot that the electronic person cannot cover itself. This would ensure that damage caused to third parties by the electronic person is compensated, making dealings with an electronic person less risky. 

Contract law 

While at first glance the use of legal personhood for robots in contract law may seem reminiscent of the legal personhood of companies, an important distinction must be made. When a contract is signed between a person and a company, the legal person will be party to the contract. For a company, this will always be through representation by an agent, as it is impossible for a company to enter into a contract without human approval. Furthermore, the contract will have been signed to benefit the company. When it comes to robots, there is a physical robot present that makes the decision to enter into a contract; no human intervention is needed. The contracts that robots will enter into, however – if we presume that robots act to assist their human owner(s) in achieving a specific purpose – will be for the benefit of the owner, not the robot. In this way, the contract to which the electronic person would be a party is fundamentally different in nature from the contract between a person and a company. The robot is in fact entering into the contract as an “agent” for the person, whereas a company is represented by a human agent. 

When entering into contracts with third parties, the electronic person would then be similar to an agent acting in its own name, but for the account of its beneficiary. The electronic person would be party to the contract, and liable to both its owner and the third party. 

When it comes to contractual liability, the separate legal personality of the robot can also prove useful. The third party will know exactly how much the electronic person can be liable for, as it can find the robot’s information in the register – much like it can for a company. Similarly, the owner’s liability is limited. While the contracts the robot enters into are ultimately for the benefit of its owner, it is the robot that is responsible for their fulfilment. If the robot enters into a contract that the owner cannot fulfil, liability will be limited to the funds the electronic person possesses. 

***

It is notable that many of the benefits provided by a legal personality for robots can also be achieved without legal personality at all. For liability, it could suffice to create a mandatory insurance scheme, much like that for ownership of a car, linked to a compensation fund. For contracts, it might be sufficient to adapt the laws on agency rather than to create contractual relations between robots and humans. Legal personality for companies is interesting because the company can then take responsibility for actions that may not be attributable to a single person behind the company. For an autonomous robot, however, it is clear that the one responsible is still not the robot, but the person financially behind it and the manufacturer. Nevertheless, having an electronic person, and therefore creating a clear separation between robot and owner, could still provide many benefits regarding publicity and limitation of the owner’s liability. As mentioned before, the registration of the electronic person in a public, central register would make public important information such as its capacity, which any third party dealing with the electronic person would be able to verify. In this way, the risk of relying on an autonomous robot also decreases, as it becomes a separate entity able to rely on a separate set of funds to cover damages. This is helpful both for owners and for third parties who interact with robots. 

Of course, criticism has also arisen, with many being wary about blurring the line between human and robot. There is a concern, perhaps not unfounded, that if robots are given more opportunities to act within society, these intelligent computers might find ways to turn against their owner. Some even argue that elevating robots to “person” status would ultimately demote humankind, with humans unable to keep up with robots’ intelligence and strength. However, an important distinction needs to be made between facilitating the owner’s use of technology and awarding rights to tools. The fact that a tool can be used in a transaction does not mean that the tool itself has rights – the rights belong to the owner, who can clearly define the task and purpose of their robots. Robots remain non-conscious, and only exist insofar as they can aid their owner. A clear line must be drawn.

For the reasons mentioned above, strict control over the production process is necessary, and rules are also needed governing the possible uses of robots. The European Parliament has emphasised this as well, proposing a code of ethical conduct for robotics engineers and licences for designers and users. Legislation therefore needs to recognise that it is first and foremost the duty of robot manufacturers to truly understand and take into account the possible consequences of the actions of their creations, and laws should be designed on the basis of that understanding. Inspiration can be drawn from existing models – such as mandatory insurance funds and legal personality – but in the end cooperation between legislators and engineers will most likely be needed to create laws that truly address this new technology and create a fitting space for its use in society.

Whether autonomous, intelligent robots should exist, or whether they pose a danger to humanity, is an ethical question. But it is undeniable that technology today is evolving rapidly, and autonomous robots are slowly becoming part of daily life. For instance, Tesla is introducing its self-driving cars, and Starship Technologies has launched driverless parcel delivery in Estonia. When technology evolves, legislation has to follow, and insofar as possible be prepared for future developments. The existence of a clear legal framework both encourages and controls technological development, and protects users of new technology. For this reason, electronic personality, or any legal framework pertaining to robots, should not be written off as a daydream inspired by science fiction, but should, as the European Parliament rightly proposes, at least be considered seriously at all levels of society.
