“Everybody knew that AI didn’t work,” explained Sergey Brin, speaking at the World Economic Forum in 2017 as he described why he personally had not foreseen the emergence of artificial intelligence. “People tried it, they tried neural nets, and none of it worked.” Within a few years of holding that negative view, Brin conceded that AI now “touches every single one of our main projects [at Google Brain]…. What can these things do?” he asked rhetorically. “We don’t really know the limits.”

Then, just two years after Brin described an open field for AI, 42 countries, including the United States, adopted policy guidelines established by the Organization for Economic Co-operation and Development (OECD), “agreeing to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.” The U.S. Pentagon, meanwhile, announced a set of ethical principles to guide the development of AI-based weapons. At about the same time, MIT created an entirely new area of research, Machine Behavior, to study how algorithms and humans interact “in the wild.” And finally, the European Union announced extensive proposals on how it would regulate the development and use of AI. In little more than two years, AI had gone from a technology whose limits seemed unknown to one advancing at such speed, and being applied in so many arenas, that many outsiders believed it needed to be monitored and regulated.

In the two years since Brin’s World Economic Forum confessional, AI had also sprung from its early experiments into an economic force, adding roughly $2 trillion to the world economy, according to PricewaterhouseCoopers. In a survey of 2,000 corporations worldwide, 75 percent said they had incorporated AI into their operations or were running pilot programs to study its effectiveness. Moreover, AI is fundamentally changing how research in medicine and several other sciences is undertaken, forging what could be AI’s most critical application going forward.