If you are a non-technical person learning a technical subject or trying to understand a technical field such as Artificial Intelligence (AI), let me share a simple tip that will help you a lot. Don’t let the complex-sounding technical terms confuse you. In due course, you will get comfortable with the terminology if you focus on first principles and try to get an intuitive understanding of the concepts.
So, let’s start with the basics and some working definitions. I’ve noticed the specific phrase “artificial intelligence and machine learning” and its shorthand AI/ML being used quite often. The phrase and the shorthand return more than 15 million and 5 million Google search results, respectively. Clearly, their usage is quite common.
But let’s examine this phrase “AI and ML.” Such usage of both “AI” and “ML” in one single phrase is actually a marketing practice rather than any technical distinction. Artificial Intelligence is a broad field and Machine Learning is one of its branches. You can think of AI as the superset and ML as its subset. You won’t say “chai and masala chai” when serving a single cup of tea or sell “cars and red cars.” But hey, using “AI and ML” makes one sound like an expert and is also good for search engine optimisation.
Next, let me provide working definitions of AI and ML and draw a distinction between them.
What’s intelligence, anyway?
Intelligence is the ability to understand, reason, and generalise. Artificial Intelligence is machines or software having this capability. Intelligence involves abstraction and generalisation (or, in layman’s terms, common sense). Hence, this kind of AI is also known as Artificial General Intelligence (AGI). It may come as a surprise to you, but in 2020 AGI is not on the table at all. We are nowhere close to AGI, nor is it clear whether we will ever achieve it. Machines with malice, emotions or consciousness presuppose AGI and are limited to science fiction and movies.
What we instead have is artificial narrow intelligence. Narrow intelligence is a machine’s ability to perform a single task very well. Examples of such tasks include deciphering handwriting, identifying images, and recognising spoken text. Early approaches to mastering such tasks, dating from the 1950s, involved codifying human expertise as “rules” for computers to follow. It wasn’t possible to codify all the rules, and such rules-based expert systems worked well only in some limited scenarios.
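To make the idea of a rules-based system concrete, here is a toy sketch in Python. The keywords and the spam-filtering task are entirely made up for illustration; the point is that every rule is hand-written by a human, and anything outside those rules slips through.

```python
# A toy, hypothetical rules-based "expert system" for flagging spam.
# Every rule below is hand-coded by a person; nothing is learned from data.
SPAM_KEYWORDS = {"lottery", "winner", "free money"}

def is_spam(message: str) -> bool:
    """Flag a message as spam if it contains any hand-picked keyword."""
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spam("Congratulations, you are a lottery winner!"))  # True
print(is_spam("Lunch at noon?"))                              # False
print(is_spam("You have w0n a l0ttery"))                      # False: the rule misses it
```

Notice the brittleness: a trivially misspelt message evades the rules, and the only fix is for a human to write yet another rule. This is exactly the limitation the column describes.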
Machine learning, a pattern recognition tool
A different approach is machine learning, where such rules are not explicitly programmed by humans; instead, the software is fed large amounts of data, identifies patterns in it, and arrives at decision rules on its own. Machine learning is where the software learns from the examples it has been provided with, and the “learning” refers to the software becoming better with experience (that is, with more data). In other words, machine learning is a great pattern recognition tool.
There are different types of machine learning methods which draw upon mathematics, probability, statistics and computer science for detecting these patterns. One particular set of machine learning techniques, popularly called deep learning, made rapid strides in recent years (and we will discuss deep learning in later columns) and is behind several modern machine learning applications.
These days, when you see headlines such as “AI solves X”, “AI-powered” software, “AI-enabled” solutions, or my favourite, “AI/ML”, they almost always refer to machine learning. Let me make two things clear. First, we have made spectacular advances in machine learning in the last ten years. Second, it may not be AGI, but machine learning has a wide variety of uses for consumers, businesses, and governments.
When to use AI/ML
So, what are the takeaways from our discussion of AGI vs ML as you try to utilise ML in your organisation or business?
Machine learning is simply a pattern recognition powerhouse. It seems intelligent but does not have what we consider common sense. Consider the “AI-powered” TV camera that mistook a football referee’s bald head for the ball and kept focusing on it instead of the actual match play: an amusing illustration of a mismatched pattern. No serious damage was done, and everyone got a good chuckle out of it.
But in some situations, the mistakes are costly, even fatal. Take the case of a self-driving car being tested in Arizona in 2018. The algorithm had been trained to identify pedestrians and cyclists. But the data it was fed did not include a person pushing a cycle and walking alongside it. Arguably, a human driver would have had no difficulty recognising the pedestrian. The algorithm’s failure to recognise the scenario contributed to an accident that resulted in the pedestrian’s death.
As a manager looking to leverage AI, you should have a good grasp of the nature of machine intelligence, its narrow scope of application, and the boundaries beyond which AI will break down. Based on these, you can decide when to rely on AI and when not to. Good managers are expected to make decisions with imperfect or limited data. AI can’t do that!