Here’s why Apple believes it’s an AI leader—and why it says critics have it all wrong

[Image gallery, credit Apple: Machine learning has long powered the iPad’s palm rejection when using a Pencil; it drove the Apple Pencil handwriting functionality in iPadOS 14; it underpins the handwashing feature coming to the Apple Watch later this year; it enables translation; and it’s behind the automatic positioning of home screen widgets.]

Machine learning (ML) and artificial intelligence (AI) now permeate nearly every feature on the iPhone, but Apple hasn’t been touting these technologies like some of its competitors have. I wanted to understand more about Apple’s approach, so I spent an hour talking with two Apple executives about the company’s strategy—and the privacy implications of all the new features based on AI and ML.
Historically, Apple has not had a public reputation for leading in this area. That’s partially because people associate AI with digital assistants, and reviewers frequently call Siri less useful than Google Assistant or Amazon Alexa. And with ML, many tech enthusiasts say that more data means better models—but Apple is not known for data collection in the same way as, say, Google.
Despite this, Apple has included dedicated hardware for machine learning tasks in most of the devices it ships. Machine intelligence-driven functionality increasingly dominates the keynotes where Apple executives take the stage to introduce new features for iPhones, iPads, or the Apple Watch. The introduction of Macs with Apple silicon later this year will bring many of the same machine intelligence developments to the company’s laptops and desktops, too.
In the wake of the Apple Silicon announcement, I spoke at length with John Giannandrea, Apple’s Senior Vice President for Machine Learning and AI Strategy, as well as with Bob Borchers, VP of Product Marketing. They described Apple’s AI philosophy, explained how machine learning drives certain features, and argued passionately for Apple’s on-device AI/ML strategy.
Table of Contents

What is Apple’s AI strategy?

How does Apple use machine learning today?

Why do it on the device?

Macs with Apple Silicon

What about privacy?

Inside the black box

What is Apple’s AI strategy?
Both Giannandrea and Borchers joined Apple in the past couple of years; each previously worked at Google. Borchers actually rejoined Apple after time away; he was a senior director of marketing for the iPhone until 2009. And Giannandrea’s defection from Google to Apple in 2018 was widely reported; he had been Google’s head of AI and search.
Google and Apple are quite different companies. Google has a reputation for participating in, and in some cases leading, the AI research community, whereas Apple used to do most of its work behind closed doors. That has changed in recent years, as machine learning powers numerous features in Apple’s devices and Apple has increased its engagement with the AI community.
“When I joined Apple, I was already an iPad user, and I loved the Pencil,” Giannandrea (who goes by “J.G.” to colleagues) told me. “So, I would track down the software teams and I would say, ‘Okay, where’s the machine learning team that’s working on handwriting?’ And I couldn’t find it.” It turned out the team he was looking for didn’t exist—a surprise, he said, given that machine learning is one of the best tools available for the feature today.
“I knew that there was so much machine learning that Apple should do that it was surprising that not everything was actually being done. And that has changed dramatically in the last two to three years,” he said. “I really honestly think there’s not a corner of iOS or Apple experiences that will not be transformed by machine learning over the coming few years.”
I asked Giannandrea why he felt Apple was the right place for him. His answer doubled as a succinct summary of the company’s AI strategy:
I think that Apple has always stood for that intersection of creativity and technology. And I think that when you’re thinking about building smart experiences, having vertical integration, all the way down from the applications, to the frameworks, to the silicon, is really essential… I think it’s a journey, and I think that this is the future of the computing devices that we have, is that they be smart, and that, that smart sort of disappear.
Borchers chimed in too, adding, “This is clearly our approach, with everything that we do, which is, ‘Let’s focus on what the benefit is, not how you got there.’ And in the best cases, it becomes automagic. It disappears… and you just focus on what happened, as opposed to how it happened.”
Speaking again of the handwriting example, Giannandrea made the case that Apple is best positioned to “lead the industry” in building machine intelligence-driven features and products:
We made the Pencil, we made the iPad, we made the software for both. It’s just unique opportunities to do a really, really good job. What are we doing a really, really good job at? Letting somebody take notes and be productive with their creative thoughts on digital paper. What I’m interested in is seeing these experiences be used at scale in the world.
He contrasted this with Google. “Google is an amazing company, and there’s some really great technologists working there,” he said. “But fundamentally, their business model is different and they’re not known for shipping consumer experiences that are used by hundreds of millions of people.”
How does Apple use machine learning today?
Apple has made a habit of crediting machine learning with improving some features in the iPhone, Apple Watch, or iPad in its recent marketing presentations, but it rarely goes into much detail—and most people who buy an iPhone never watch those presentations, anyway. Contrast this with Google, for example, which places AI at the center of much of its messaging to consumers.
Quick primer: What is machine learning, exactly?
While computers can process certain data more quickly or accurately than humans can, they are still ultimately not intelligent. Traditional computer programming means telling the machine what to do at all times, and in advance: if precisely this happens, then do exactly this. But what if something else happens—even a minor variation? Well, programmers can get quite creative in defining elaborate, sophisticated behaviors, but the machine remains incapable of making judgments of its own.
With machine learning, in addition to telling a computer what to do, programmers give it a data set relevant to the task and a methodology for analyzing that data set. They then give it time to spin its cycles, growing more accurate at labeling or interpreting that data based on positive or negative feedback. This allows the machine to algorithmically make informed guesses about data it hasn’t previously encountered, provided the new data is similar to the data it was trained on.
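To make the primer concrete, here’s a deliberately tiny sketch in Swift of the simplest possible “learned” classifier: a nearest-neighbor lookup whose behavior comes from labeled examples rather than hand-coded rules. The feature values and labels below are invented for illustration; real systems learn far richer models, but the principle is the same.

```swift
struct Example {
    let features: [Double]  // e.g., normalized touch size and pressure
    let label: String       // e.g., "palm" or "pencil"
}

// Euclidean distance between two feature vectors.
func distance(_ a: [Double], _ b: [Double]) -> Double {
    var sum = 0.0
    for (x, y) in zip(a, b) { sum += (x - y) * (x - y) }
    return sum.squareRoot()
}

// 1-nearest-neighbor: guess the label of the closest training example.
func classify(_ input: [Double], using training: [Example]) -> String? {
    training.min(by: { distance($0.features, input) < distance($1.features, input) })?.label
}

// "Training" here is just collecting labeled data; the classifier then
// generalizes to inputs it has never seen, as long as they resemble it.
let training = [
    Example(features: [0.9, 0.2], label: "palm"),    // big contact, light pressure
    Example(features: [0.1, 0.8], label: "pencil"),  // small contact, firm pressure
]
print(classify([0.85, 0.3], using: training) ?? "unknown")  // prints "palm"
```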
When big tech companies talk about artificial intelligence today, they often mean machine learning. Machine learning is a subset of AI. Many lauded gadget features—like image recognition—are driven by a subset of machine learning called “deep” learning.
There are numerous examples of machine learning being used in Apple’s software and devices, most of them new in just the past couple of years.
Machine learning helps the iPad’s software distinguish between a user accidentally resting a palm against the screen while drawing with the Apple Pencil and an intentional press meant to provide input. It’s used to monitor users’ habits in order to optimize device battery life and charging, both to extend the time users can spend between charges and to protect the battery’s long-term viability. And it’s used to make app recommendations.
Then there’s Siri, which is perhaps the one thing any iPhone user would immediately perceive as artificial intelligence. Machine learning drives several aspects of Siri, from speech recognition to attempts by Siri to offer useful answers.
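Siri’s internal pipeline isn’t public, but Apple’s Speech framework exposes a comparable, on-device speech-to-text capability to third-party developers. A hedged sketch (the file URL is a placeholder, and a real app would first request authorization):

```swift
import Speech

// Transcribe a recorded audio file entirely on the device, so the audio
// never leaves the phone. Assumes SFSpeechRecognizer.requestAuthorization
// has been granted, and on-device support (iOS 13+).
func transcribe(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(),
          recognizer.supportsOnDeviceRecognition else { return }
    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true  // no server round-trip
    recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```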
Savvy iPhone owners might also notice that machine learning is behind the Photos app’s ability to automatically sort pictures into pre-made galleries, or to accurately give you photos of a friend named Jane when her name is entered into the app’s search field.
In other cases, few users may realize that machine learning is at work. For example, your iPhone may take multiple pictures in rapid succession each time you tap the shutter button. An ML-trained algorithm then analyzes each image and can composite what it deems the best parts of each image into one result.
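Apple doesn’t publish the details of that pipeline, but the frame-selection half of the idea is easy to sketch. In this hypothetical Swift snippet, the scoring function is a parameter precisely because the real, learned quality metric isn’t public; the actual pipeline goes further and composites regions from several frames rather than picking just one.

```swift
import CoreGraphics

// Hypothetical sketch: choose the best-scoring frame from a burst.
// `score` stands in for whatever learned quality metric (sharpness,
// exposure, eyes open...) the real system applies to each frame.
func bestFrame(from burst: [CGImage], score: (CGImage) -> Double) -> CGImage? {
    burst.max(by: { score($0) < score($1) })
}
```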
[Image: AI is behind Apple’s handwashing assistance feature in the Apple Watch. Credit: Sam Machkovech]
Phones have long included image signal processors (ISPs) for improving the quality of photos digitally and in real time, but Apple accelerated the process in 2018 by making the iPhone’s ISP work closely with the Neural Engine, the company’s recently added machine learning-focused processor.
I asked Giannandrea to name some of the ways that Apple uses machine learning in its recent software and products. He gave a laundry list of examples:
There’s a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we’ve released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we’re not using machine learning.
It’s hard to find a part of the experience where you’re not doing some predictive [work]. Like, app predictions, or keyboard predictions, or modern smartphone cameras do a ton of machine learning behind the scenes to figure out what they call “saliency,” which is like, what’s the most important part of the picture? Or, if you imagine doing blurring of the background, you’re doing portrait mode.
All of these things benefit from the core machine learning features that are built into the core Apple platform. So, it’s almost like, “Find me something where we’re not using machine learning.”
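The camera’s saliency model isn’t directly exposed, but Apple’s public Vision framework offers a similar capability that shows the shape of the idea. A minimal sketch:

```swift
import Vision

// Ask Vision for an attention-based saliency analysis and return the
// bounding box (in normalized coordinates) of the region the model
// considers most important in the image.
func salientRegion(in image: CGImage) throws -> CGRect? {
    let request = VNGenerateAttentionBasedSaliencyImageRequest()
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    guard let observation = request.results?.first as? VNSaliencyImageObservation else {
        return nil
    }
    return observation.salientObjects?.first?.boundingBox
}
```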
Borchers also pointed out accessibility features as important examples. “They are fundamentally made available and possible because of this,” he said. “Things like the sound detection capability, which is game-changing for that particular community, is possible because of the investments over time and the capabilities that are built in.”
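The shipping sound detection feature relies on Apple’s own models, but the public SoundAnalysis framework shows the general approach. In this sketch, `model` is a placeholder for any compiled Core ML sound classifier:

```swift
import CoreML
import SoundAnalysis

// Receives classification results as the analyzer works through the audio.
final class SoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("Heard \(top.identifier) (confidence \(top.confidence))")
    }
}

func classifySounds(in fileURL: URL, with model: MLModel) throws {
    let analyzer = try SNAudioFileAnalyzer(url: fileURL)
    let request = try SNClassifySoundRequest(mlModel: model)
    let observer = SoundObserver()
    try analyzer.add(request, withObserver: observer)
    _ = analyzer.analyze()  // blocks until the whole file is processed
}
```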
Further, you may have noticed Apple’s software and hardware updates over the past couple of years have emphasized augmented reality features. Most of those features are made possible thanks to machine learning. Per Giannandrea:
Machine learning is used a lot in augmented reality. The hard problem there is what’s called SLAM, so Simultaneous Localization And Mapping. So, trying to understand if you have an iPad with a lidar scanner on it and you’re moving around, what does it see? And building up a 3D model of what it’s actually seeing.
That today uses deep learning and you need to be able to do it on-device because you want to be able to do it in real time. It wouldn’t make sense if you’re waving your iPad around and then perhaps having to do that at the data center. So in general I would say the way I think about this is that deep learning in particular is giving us the ability to go from raw data to semantics about that data.
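Developers don’t implement that SLAM loop themselves; ARKit does it internally, and on lidar-equipped devices it can hand back the resulting 3D mesh. A hedged sketch of opting in:

```swift
import ARKit

// Start world tracking with lidar-based scene reconstruction, asking ARKit
// to build a 3D mesh of the environment as the device moves around.
func startMeshing(on session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh  // lidar-equipped devices only
    }
    session.run(config)
    // The reconstructed geometry then arrives as ARMeshAnchor objects
    // through the session delegate's anchor callbacks.
}
```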
Increasingly, Apple performs machine learning tasks locally on the device, on hardware like the Apple Neural Engine (ANE) or the company’s custom-designed GPUs (graphics processing units). Giannandrea and Borchers argued that this approach is what sets Apple’s strategy apart from its competitors’.
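Developers see this split in Core ML, where a configuration flag controls whether a model may run on the CPU, the GPU, or the Neural Engine. A minimal sketch (the model URL is a placeholder for any compiled Core ML model):

```swift
import CoreML

// Load a model and let Core ML schedule its work across the CPU, GPU,
// and Apple Neural Engine.
func loadModel(at url: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all        // CPU + GPU + Neural Engine
    // config.computeUnits = .cpuOnly // e.g., for deterministic debugging
    return try MLModel(contentsOf: url, configuration: config)
}
```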


Source: https://arstechnica.com/gadgets/2020/08/apple-explains-how-it-uses-machine-learning-across-ios-and-soon-macos/