Blame it on the Robot?

Google-owned artificial intelligence company DeepMind recently succeeded in designing a program capable of defeating some of the world's best players of the ancient Chinese strategy game Go. While this may not appear groundbreaking, the real triumph lies in the fact that, although the programmers laid the ground rules, the program, AlphaGo, largely taught itself to play. In traditional programming, a human supplies step-by-step instructions for a computer to follow. In a machine learning approach, by contrast, the computer learns from the information it is given and makes predictions or takes actions based on what it has learned rather than on explicit instructions from the programmer. DeepMind's programmers relied on reinforcement learning, a subset of machine learning that allows AI models to learn from past experience. This training method allowed AlphaGo to gauge the success of various moves by playing against a second version of itself (a simplified sketch of this style of training appears below). After three days of such self-play, the fully self-taught version, AlphaGo Zero, beat DeepMind's original, human-trained program by 100 games to zero. In short, DeepMind created an AI that could learn and make independent decisions in order to maximize its own success.

In this application, AlphaGo was working within the confines of a board game with a clearly defined goal and clearly defined rules. But what happens when such a system is deployed in situations that are not so lighthearted and far more complicated, say, autonomous vehicles? Companies and scholars are already experimenting with reinforcement learning to train virtual autonomous vehicle systems. As AI develops new abilities to learn, make decisions, act independently, and go beyond its initial programmed structure, its unpredictability grows, and with it the potential for damages.
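For readers curious what "learning by playing against itself" looks like in practice, the following is a minimal, self-contained sketch. It uses tic-tac-toe rather than Go, a simple tabular Monte-Carlo-style value update rather than DeepMind's deep neural networks, and all of the function names, rewards, and settings are illustrative assumptions, not anything drawn from AlphaGo's actual training code. The point is only the core loop: the program plays a copy of itself, and moves that lead to wins are reinforced while moves that lead to losses are discouraged.

```python
# A toy sketch of reinforcement learning via self-play: tabular,
# Monte-Carlo-style value updates on tic-tac-toe. An illustrative
# assumption of how such training can look, not DeepMind's AlphaGo code.
import random
from collections import defaultdict

ALPHA = 0.5      # learning rate: how strongly each game's outcome shifts the values
EPSILON = 0.1    # exploration rate: how often the program tries a random move

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

Q = defaultdict(float)   # learned value of (board state, move) for the player to move

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, otherwise None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def choose_move(board):
    """Epsilon-greedy: usually the best-known move, occasionally a random experiment."""
    moves = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])

def self_play_episode():
    """Both sides use the same value table; the final result updates every move made."""
    board, player = ' ' * 9, 'X'
    history = []                                   # (state, move, player) to credit later
    while True:
        move = choose_move(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not legal_moves(board):          # game over: a win or a draw
            for state, m, p in history:
                reward = 0 if win is None else (1 if p == win else -1)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = 'O' if player == 'X' else 'X'

if __name__ == '__main__':
    for _ in range(50_000):                        # no human examples: the program only plays itself
        self_play_episode()
    print(f"Learned values for {len(Q):,} state-action pairs purely from self-play.")
```

Even in this toy setting, no human ever tells the program which moves are good; it infers that from the outcomes of its own games. That is precisely the property that makes systems trained this way powerful, and, at scale, difficult to predict.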