
Big Brains: New Books on Artificial Intelligence

Ken Jennings at a press conference for his ‘Jeopardy!’ competition with IBM’s Watson computer in 2011.

Photo: Getty Images

By David A. Shaywitz

Updated May 21, 2021 10:53 am ET

“I, for one, welcome our new computer overlords,” a wry Ken Jennings wrote in 2011 after he was decisively defeated in “Jeopardy!” by IBM’s Watson computer and the artificial intelligence that powered it. A decade later, Mr. Jennings’s sentiments seem prescient. Alexa and Siri inhabit our homes and our devices. Digital transformation has overtaken our workplaces. AI-driven recommendation engines help determine the movies we watch, the products we buy and the information we receive, influencing our preferences and inflaming our politics.

Yet the concept of intelligent computers, advanced by the British mathematician Alan Turing in 1950, isn’t new; nor is the term “artificial intelligence,” first used at a research conference in 1956. What has changed is AI’s power and reach, especially with the arrival of what is called “deep learning”—the capacity for powerful pattern recognition, with seemingly little human instruction required.

In “Genius Makers” (Dutton, 370 pages, $28), New York Times technology reporter Cade Metz tells the compelling story of the scientists who developed deep learning, a small group of researchers “who nurtured an idea for decades, often in the face of unfettered skepticism before it suddenly came of age.”

At the epicenter of the effort is Geoff Hinton, scion of a long line of prominent British academics. Captivated by the idea that computers could mimic the brain, Mr. Hinton followed his passion from Edinburgh to Pittsburgh to the University of Toronto, where he and his students, in the early 2010s, showed that a mathematical system “modeled on the web of neurons in the brain” could identify common objects “with an accuracy that had previously seemed impossible.” The feat was achievable so long as the computer could first learn from vast troves of data. The approach rapidly moved from the detection of cats in YouTube videos to intuitive digital assistants and software designed to flag credit-card fraud.

Mr. Hinton and his students were soon working for Google, while colleagues were snatched up by other tech powerhouses like Facebook in California and Baidu in China, companies “caught up in the same global arms race” for AI technology and expertise. So intense was the drive for talent that one Microsoft executive compared the cost of acquiring an AI researcher to the cost of acquiring an NFL quarterback.

While some researchers, such as Mr. Hinton’s former student Alex Krizhevsky, described their work in measured terms, most were expansive. Mr. Metz notes that there is a long tradition of AI researchers and tech leaders promising “lifelike technology that was nowhere close to actually working.” Former Google chairman Eric Schmidt stands out for his “haughty,” “patronizing” manner and habit of addressing audiences “as if he knew more than anyone else in the room, about both the past and the future.”

This trait of living as if the future had already arrived, observes Mr. Metz, seems especially common among the Silicon Valley elite, who recognize that “ideas might fail. Predictions might not be met. But the next idea wouldn’t succeed unless they, and everybody around them, believed that it could.” To sell the future effectively, Mr. Metz’s evangelists seem to suggest, you need first to inhabit it yourself.

Such hyperbole irks British AI researcher Michael Wooldridge, who aims, in “A Brief History of Artificial Intelligence” (Flatiron, 262 pages, $27.99), to provide a level-headed introduction to the evolution of the science. He dismisses what he calls the grand dream of AI—“a computer that has the full range of intellectual capabilities that a person has”—as “nothing more than speculation.” He prefers to focus on what AI really tries to do: getting computers to perform specific tasks. We’re guided through AI’s tumultuous history as it careers wildly between periods of great hope and years of intellectual despair known as “AI winters.”

The recent progress in deep learning opened up all sorts of possibilities and applications, Mr. Wooldridge says. “Everyone with data and a problem to solve started to ask whether deep learning might help them—and in many cases, the answers proved to be ‘yes.’ ” But overestimating the power of this technology, he reminds us, can be dangerous. He cites Tesla’s curiously named Autopilot—a technology that allows AI to drive a car under human supervision. It has created a “mismatch between driver expectations and the reality of what the system can do,” potentially endangering operators who place excessive faith in the still-evolving software.

For computer scientist and entrepreneur Erik Larson, the fundamental error we make when thinking about AI is failing to recognize that “human and machine intelligence are radically different.” He notes that success at achieving narrow computer applications—say, playing chess—gets us “not one step closer to general intelligence.” The sort of intelligence we display every day, he reminds us in “The Myth of Artificial Intelligence” (Harvard/Belknap, 312 pages, $29.95), is not “an algorithm running in our heads.” Rather, our minds call on “the entire cultural, historical, and social context within which we think and act in the world.” It’s critical, Mr. Larson argues, not to replace complex discussions “about individuals and societies” with tidy technological narratives and one-dimensional abstractions.

More broadly, Mr. Larson worries that we’re making two mistakes at once, defining human intelligence down while overestimating what AI is likely to achieve. IBM’s Watson computer is a case in point. We trumpet the AI it uses while overlooking the role of the engineering team’s “careful and insightful game analysis.” In the case of the “Jeopardy!” contest, part of the team’s analysis led to an “exploitable shortcut”: 95% of the answers to the show’s questions are Wikipedia titles, dramatically constraining the universe of possible responses through which Watson had to sort.
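To make the idea of an “exploitable shortcut” concrete, here is a minimal sketch in Python of how constraining candidate answers to a fixed set of titles shrinks the space a question-answering system must search. It is an illustration only, built on a toy stand-in for the Wikipedia title set, and is not a description of Watson’s actual engineering.

```python
# Illustrative sketch, not IBM Watson's actual pipeline: if most valid answers
# are known to be article titles, a system can simply discard any candidate
# that is not in that set before doing more expensive ranking.

# Toy stand-in for the set of Wikipedia article titles (assumption for illustration).
WIKIPEDIA_TITLES = {"Alan Turing", "Ken Jennings", "Deep learning", "Geoffrey Hinton"}

def constrain_candidates(candidates):
    """Drop any candidate answer that is not a known title."""
    return [c for c in candidates if c in WIKIPEDIA_TITLES]

# Hypothetical candidates produced by an upstream search step; only titles survive.
raw_candidates = ["Alan Turing", "Isaac Newton", "Turing test", "Deep learning"]
print(constrain_candidates(raw_candidates))  # ['Alan Turing', 'Deep learning']
```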

By reinterpreting human intelligence to fit a computational definition, we risk abandoning a “richer understanding of the mind,” Mr. Larson says. He invokes tech writer Jaron Lanier’s lament that “a new generation has come of age with a reduced expectation of what a person can be, and of who each person might become.” Another concern is learned passivity: our tendency to assume that AI will solve problems and our failure, as a result, to cultivate human ingenuity. “Computers don’t have insights,” Mr. Larson reminds us. “People do.”

Kate Crawford, a communications researcher, is also worried about the role of AI systems. She sees them as “expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control.” Convinced that these forces have promulgated a false narrative, she seeks, in “Atlas of AI” (Yale, 327 pages, $28), to adjust the story.

Ms. Crawford argues passionately that while AI is presented as disembodied, objective and inevitable, it is material, biased and subject to our own outlooks and ideologies. The ecosystem of AI, she says, “relies on many kinds of extraction: from harvesting the data made from our daily activities and expressions, to depleting natural resources, and to exploiting labor around the globe.” She describes “ghost work”—the labor of anonymous, low-paid employees who do the repetitive tasks that “backstop claims of AI magic,” such as labeling the images that are used to teach algorithms to recognize and distinguish objects.

The data sets by which artificial intelligence is “trained,” Ms. Crawford says, are shot through with bias and error. One commonly used criminal database, she notes, included the names of 42 infants—including 28 who allegedly admitted to “being gang members.” More broadly, she laments the “collect-it-all mentality”—the idea that “everything is data and is there for the taking.” We measure what we can, she notes, not necessarily what we should. She counsels us to focus less on tech founders and investors and more on the “lived experiences of those who are disempowered, discriminated against, and harmed by AI systems.”

In “Futureproof” (Random House, 217 pages, $27), New York Times technology writer Kevin Roose is focused on what individuals—“people like you and me, with jobs and families and communities to worry about”—can do about the ascent of AI and the threat of automation. While sharing many of Ms. Crawford’s concerns, he also envisions AI as a force for good, helping to remediate poverty, reverse climate change and reduce the burden of disease.

The threat of AI to our jobs, Mr. Roose perceptively observes, isn’t that we’ll show up to work one day like TV’s George Jetson and find Uniblab the robot sitting at our desk. The danger is more nuanced—desirable innovations may cause slow-motion shifts in staffing. An airline’s deployment of AI to improve aircraft maintenance might help the planes last longer, reducing demand for replacement jets—and for the workers who would otherwise manufacture them. Similarly, software to improve the loading of trucks could reduce the number of trucks needed for the same amount of freight and thus trim the number of drivers. “We may want to stop worrying about killer droids and kamikaze drones,” he writes, “and start worrying about the mundane, mediocre apps and services that allow companies to process payroll 20% more efficiently, or determine benefits eligibility with fewer human caseworkers.” Another concern is AI at the level of middle management—algorithms that methodically supervise tasks, monitor quality and evaluate performance, obviating the role (and expense) of human judgment.

We should lean into our humanity, Mr. Roose says, “leaving our own, distinctly human mark on the things we’re creating.” Job security, he suggests, depends less on what we do and more on how we do it.

Consider the example of the electronics retailer Best Buy, a company that a decade ago seemed to be circling the drain, unable to compete with Amazon on price. In 2012, Mr. Roose writes, the company hired a new CEO, Hubert Joly, who came up with a new strategy: compete on service. Best Buy started to focus on providing “deeply human experiences that e-commerce retailers . . . couldn’t match.” A home-adviser program, launched in 2017, was “an immediate hit.” Today the stock is trading near an all-time high. The key insight, Mr. Joly told the author, was to recognize that the “business we’re in is not simply selling products—it’s connecting human needs with technology solutions. So, our focus is on these human needs.” He may be onto something.

—Dr. Shaywitz, a physician-scientist, is a digital health and connected fitness adviser and a lecturer at Harvard Medical School.

Copyright ©2020 Dow Jones & Company, Inc. All Rights Reserved.

Source: https://www.wsj.com/articles/big-brains-new-books-on-artificial-intelligence-11621607063