DEEP BLUE’S VICTORY

Murray Campbell, a Distinguished Research Staff Member at IBM, recently discussed the legacy and impact of the fateful 1997 six-game match in which IBM's Deep Blue beat Garry Kasparov, the world number one chess player for 225 of 228 months between 1986 and 2005.

Campbell was part of the momentous encounter himself. He was a member of the team that built Deep Blue's university progenitor, Deep Thought, the first program to beat a grandmaster in a professional tournament. When IBM took notice, it hired Campbell and his colleagues to build Deep Blue. The system they eventually built was, in Campbell's words, a combination of "general-purpose supercomputer processors combined with […] chess accelerator chips."
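Those accelerator chips existed to drive deep game-tree search, the well-documented core of Deep Blue's play. As a rough illustration only (this is a toy version on a hand-made tree, not IBM's implementation, and the tree values are invented for the example), the underlying minimax-with-alpha-beta-pruning idea can be sketched in a few lines:

```python
# Toy sketch of minimax search with alpha-beta pruning, the kind of
# game-tree search Deep Blue's hardware accelerated. Illustrative only:
# the "game" here is a hand-built nested list, where leaves are numbers
# standing in for a static evaluation of a position.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    # A leaf: return its static evaluation directly.
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent will never allow this branch
                break          # so the remaining siblings are pruned
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Invented example tree; the maximizer's best guaranteed score is 3.
tree = [[3, 5], [2, 9], [1, 2]]
print(alphabeta(tree, True))  # prints 3
```

Real chess programs layer move generation, evaluation functions, and many search refinements on top of this skeleton; Deep Blue's contribution was doing it at enormous speed in custom silicon.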

Although computers had beaten humans at games before (BKG 9.8's victory over Luigi Villa at backgammon in 1979, and Chinook's domination of Don Lafferty at checkers in 1994), Deep Blue's victory was considered a watershed because the game it won at was chess.

Bobby Fischer, who is enshrined in the chess hall of fame along with Kasparov, said that it takes "a strong memory, concentration, imagination, and a strong will" to make a good chess player: quintessential qualities of human intelligence. If a computer could beat the best chess player in the world, it would put a question mark not only over human intelligence but over the exceptionality of humanity itself.

WHAT HAPPENED AFTER DEEP BLUE?

David J. Staley wrote, concerning the match, that "Chess represents a domain of human skill that is simple enough to model yet complex enough to reflect deep levels of cognition." Deep Blue's triumph therefore marked the first true trophy for artificial intelligence: it beat the man whom many consider the best player of all time at a game that has been regarded for centuries as a pinnacle of human intelligence.

Monty Newborn, Emeritus Professor of McGill University’s School of Computer Science, makes an apt analogy in his book Kasparov versus Deep Blue: Computer Chess Comes of Age. He states that “many advances in the auto world were first tried on racing models and then after refinement incorporated into commercial vehicles. This may be the pattern in the computer field, too, where techniques used by computers to play chess are on the cutting edge of developments in complex problem-solving.”

Since Deep Blue established a benchmark, says Campbell, "machines have improved in processing speed and memory and so on," allowing them to add more and more gaming jewels to their virtual crown. Machine learning algorithms also have access to far more data than they did in the past.

In recent years, the most notable victories have been AlphaGo's win over a team of five of the best players of Go (a game arguably more complex than chess), and Libratus's domination of four of the world's top poker players. In the latter encounter, Dong Kim, one of the contestants, told Wired, "I felt like I was playing against someone who was cheating, like it could see my cards. I'm not accusing it of cheating. It was just that good."

These developments may reflect AI's incremental progress toward becoming more human, since these games mirror the complexities and solutions of life itself. However, Kasparov's point from 2010 that modern technology is a "culture of optimization," that "it is derivative, incremental, profit margin-forced, consumer-friendly technology — not the kind that pushes the whole world forward economically," applies to these victories as well.

Perhaps AI's real challenge, and the next paradigm shift, is to master a game we have developed in modern times, such as StarCraft II. Oriol Vinyals, a DeepMind researcher and former top-ranked StarCraft player, told The Verge that the game is so complex and multifaceted that "the skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks."

Even though you can already play against AI opponents in StarCraft, the AI that Vinyals is working on would be modeled after the way humans play the game, and would be bound by the same rules we are. AIs can handle simple video games (think Atari-level), but nothing as complex as StarCraft yet. The researchers don't know when an AI will be created that can best a top-ranked player, but the day will come. That AI will have been taught to make decisions as a human would, in a game with far more layers and complexities than any attempted by AI before. Maybe it will even be able to teach players the perfect strategy for defeating a Zerg rush.