It doesn’t take a lot of imagination to see where the future of robotics is heading. In fact, with modern robots, it doesn’t take any imagination: that future is already here. Robotic technology and artificial intelligence are advancing rapidly, and robots are playing an ever larger role in society. Self-driving cars are beginning to ferry us from point A to point B, and personal assistant robots can even give us hugs when we’re feeling down. As this happens, the question of who is responsible for their actions is becoming an increasingly prominent and important one.
It’s worth noting that although the actions of today’s most advanced robots may appear to be emotionally driven, current technology allows only pre-programmed responses built into the robot’s software. Although robots cannot yet make personal choices, careful consideration should still be given to who is at fault when an autonomous or semi-autonomous machine harms someone or causes damage.
For example, a self-driving car that is programmed to avoid collisions may find itself in a situation where a collision is unavoidable. How does it choose to act, and who bears responsibility? Here it is important to understand “software agency”, a term that refers to software in a robot that acts on behalf of the user. The self-driving car may have crashed in an unavoidable situation, but it was still directed by the user to perform its actions. Currently, “software agents” are described as “tools” defined by the user, because their actions can be traced to programmer commands, also known as “computer statements”. As robots grow more capable of making independent decisions, the question of whether a software agent’s actions still constitute computer statements becomes increasingly urgent. The progression from a robot acting as a “software agent” to an “intelligent agent”, one that acts on algorithms applied to inputted information rather than on direct instructions, is the foundation of artificial intelligence (A.I.).
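The distinction above can be illustrated with a minimal, entirely hypothetical sketch (both agents and their decision rules are invented for illustration): a “software agent” executes fixed programmer commands whose outcomes trace directly to specific statements, while an “intelligent agent” derives its action from inputted information.

```python
def software_agent(obstacle_ahead: bool) -> str:
    """A 'tool': acts only on explicit, pre-programmed rules."""
    if obstacle_ahead:
        return "brake"  # behaviour traceable to this exact computer statement
    return "drive"


def intelligent_agent(sensor_readings: list, threshold: float = 0.5) -> str:
    """Acts on a rule applied to inputted information rather than a
    direct instruction: here, the mean of recent proximity readings."""
    risk = sum(sensor_readings) / len(sensor_readings)
    return "brake" if risk > threshold else "drive"


print(software_agent(True))                # brake
print(intelligent_agent([0.2, 0.9, 0.8]))  # brake (mean risk ~0.63)
print(intelligent_agent([0.1, 0.2, 0.1]))  # drive (mean risk ~0.13)
```

The point of the contrast: the first agent’s behaviour is fully determined by its source code, while the second’s depends on the data it receives, which is where tracing responsibility back to a single “computer statement” starts to break down.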
As A.I. becomes more prevalent, establishing legal rights for robots will become necessary. Whether these protections go as far as granting electronic “personhood”, similar to corporate personhood, or even something close to equal rights, will depend on how advanced robots become at forming relationships. Such protection is not only for robots, but also for humans. Just as the owner of a dog, rather than the dog’s breeder, is ultimately responsible for its actions, so may the owner of a robot bear responsibility for the robot’s actions rather than the software engineer. However, just as the engineer will not be culpable, neither will they own what the robot creates.
Since robots are able to learn new behaviours to improve their functionality, their intellectual property (I.P.) will soon need protection too. For example, a robot tasked with picking up an object may fail at first, but after each attempt it can automatically save and adapt based on what it “learned,” making each successive attempt more successful. This means that although a robot can physically be replaced, its learned behaviours may be entirely wiped out should both its hard drive and its off-site data storage be damaged. Robotic I.P. will soon become a very valuable asset to corporations, which will seek to employ robots efficiently in whatever capacity possible to improve the bottom line.
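The learn-from-each-attempt loop described above can be sketched as follows. This is a toy illustration, not any real robot’s control code: the grip-force parameter, the step size, and the success condition are all invented assumptions.

```python
def attempt_pickup(grip_force: float, required_force: float = 5.0) -> bool:
    """Succeeds once the grip force reaches the (initially unknown) requirement."""
    return grip_force >= required_force


def learn_to_pick_up(initial_force: float = 1.0, step: float = 1.0):
    """Increase grip force after each failed attempt. The final value is
    the 'learned behaviour' that would need to be saved and backed up."""
    force, failed_attempts = initial_force, 0
    while not attempt_pickup(force):
        force += step           # adapt based on the failed attempt
        failed_attempts += 1
    return force, failed_attempts


learned_force, failures = learn_to_pick_up()
print(learned_force, failures)  # 5.0 4
```

The returned `learned_force` is the part worth protecting as I.P.: the hardware is replaceable, but if this learned parameter is lost from both local and off-site storage, the robot must relearn from scratch.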
This may be the first area of robot rights to reach legislation. An employer seeking the greatest efficiency may well choose a robot to perform a task. With humans backing the robots’ side, there will inevitably be legislation protecting the robots’ right to perform that task under the umbrella of a competition clause. As these laws take form, the concept of equal employment could be the first movement to resemble “human rights” for robots.
In the end, guidelines for the rules, treatment and expectations of robots and artificial intelligences must and will be drafted. Some organisations, such as the European Robotics Coordination Action, have already made suggestions for a green paper on legal issues in robotics. Not only will the proliferation of AI and robots demand regulation to keep up, but as the lines between human and artificial sentience begin to blur, humans will demand protections for the robots that assist them in everything from painting the living room to dramatically reducing traffic-related deaths. In this vein, an open letter signed by hundreds of AI researchers is circulating to help ensure artificial intelligence remains beneficial to humanity. The signatories include researchers from Oxford, Cambridge, MIT, Harvard, Google, Amazon and IBM, and even Elon Musk, who has donated $10 million of his own fortune to the cause. With thoughtful consideration, these laws will pave the way for productive and positive human coexistence with robots and AI.