Blame it on the Robot?

Google-owned artificial intelligence company DeepMind recently succeeded in designing a program capable of defeating some of the world’s best players of the Chinese strategy game Go.  While this may not appear groundbreaking on its own, the real triumph is that although the programmers laid the ground rules, it was the program itself, AlphaGo, that taught itself to play.

In traditional programming, a human supplies step-by-step instructions for a computer to follow.  In a machine learning approach, by contrast, the computer learns from the data it is given and makes predictions or takes actions based on what it has learned rather than on explicit instructions from the programmer.  DeepMind’s programmers relied on reinforcement learning, a subset of machine learning in which a model learns from the outcomes of its past actions.  This training method allowed AlphaGo to discover which moves led to success by playing against a second version of itself.  After three days of self-play, AlphaGo was able to beat DeepMind’s original, human-trained program by 100 games to zero.
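
To make the idea of learning through self-play concrete, here is a minimal Python sketch of a program teaching itself a far simpler game, single-pile Nim, by playing against itself and rewarding the moves that led to wins.  The game, the reward scheme, and every parameter below are illustrative assumptions; AlphaGo’s actual training combined deep neural networks with tree search at a vastly larger scale.

```python
# Minimal sketch of reinforcement learning through self-play on a toy game
# (single-pile Nim: take 1-3 stones, taking the last stone wins).
# Illustrative only -- not how AlphaGo itself was built.
import random
from collections import defaultdict

ALPHA, EPSILON, EPISODES = 0.5, 0.1, 50_000   # learning rate, exploration rate, training games
PILE_SIZE, MAX_TAKE = 15, 3

# Q[(stones_left, take)] estimates how good it is to take `take` stones in that position.
Q = defaultdict(float)

def choose(stones):
    """Pick a move: usually the best-known one, occasionally a random one (exploration)."""
    moves = list(range(1, min(MAX_TAKE, stones) + 1))
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

for _ in range(EPISODES):
    stones, player, history = PILE_SIZE, 0, []
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player                        # whoever took the last stone
    for who, state, move in history:           # nudge each move toward the game's outcome
        reward = 1.0 if who == winner else -1.0
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

# Print the move the program has come to prefer in each position.
for s in range(1, PILE_SIZE + 1):
    best = max(range(1, min(MAX_TAKE, s) + 1), key=lambda m: Q[(s, m)])
    print(f"{s} stones left -> take {best}")
```

With enough games, a program like this tends to converge on the classic winning strategy of leaving its opponent a multiple of four stones, even though no one wrote that strategy into the code.  It is exactly this kind of self-discovered behavior that complicates the liability questions discussed below.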

DeepMind created an AI that could learn and make independent decisions in order to maximize its success.  In this application, AlphaGo was working within the confines of a board game with a clearly defined goal and clearly defined rules.  But what happens when such a system is implemented in situations that are not quite so lighthearted and far more complicated, say, autonomous vehicles?  Companies and scholars are already experimenting with reinforcement learning to train virtual autonomous vehicle systems.  As AI develops new abilities to learn, make decisions, act independently, and go beyond its initial programmed structure, its unpredictability increases, and with it the potential for damages.

The question then arises: just who is liable if something goes wrong?  The owner or user?  The developer?  The “robot” itself?  The law surrounding robotics has thus far tended to find liability only where the developer was negligent or could foresee harm.  The addition of reinforcement learning makes these legal issues far more complicated, given that no human has explicitly programmed the AI system to make a given choice.  A particular course of action was simply chosen by the AI as the optimal solution to whatever problem it faced.  While we can imagine setting limits to prevent AI from choosing a method we would find problematic, it is unlikely that humans will be able to foresee every solution an AI system might come up with.  Even within the confines of Go, AlphaGo played moves that did not fit any strategy human players could recognize.  As AI simultaneously expands into new applications and gains greater independence in its decision-making, the legal system will need to determine how to ascribe liability for AI action.
