The future is here. Find out how artificial intelligence affects everything from your job to your health care to what you’re doing online right now.
Artificial intelligence, better known as AI, sounds like something out of a science-fiction movie. It brings to mind self-aware computers and human-like robots that walk among us. And while those things are part of the overarching artificial intelligence definition and may exist in the future, AI is already a big part of our everyday lives. So, what is artificial intelligence, exactly? It’s complicated, but every time you use Siri or Alexa, you’re using AI, and that’s just the beginning of its practical applications.
“The main benefit of AI is that it can bridge the gap between humans and technology,” says AI researcher Robb Wilson. “AI will allow everyone to communicate with computers the same ways they communicate with other humans: through speech and text. This can have the massive benefit of putting the problem-solving capabilities of powerful technology in everyone’s pocket.”
If you’re curious about AI, you’re not alone. In a recent Reader’s Digest survey, 23% of respondents said they were interested in learning more about it. It’s an important topic because the future of AI will shape everything from the internet to medical technology to our workplaces—for better and for worse. While AI will open up a whole new world with real robots helping in ways you probably never imagined, we’ll also have to contend with a changing job market, as well as unintended AI bias. We spoke to technology experts to break it all down. Here’s what you need to know.
What does artificial intelligence mean?
In a nutshell, artificial intelligence is a machine’s ability to mimic human learning, reasoning, perception, problem-solving and language use. An AI system is programmed to “think,” and that thinking hinges on techniques called machine learning (ML) and deep learning (DL).
With ML and DL, a computer is able to take what it has learned and build upon it with little to no human intervention. But there are a few key differences between the two. In machine learning, a computer can adapt to new situations without human intervention, like when Siri remembers your music preference and uses it to suggest new music. Deep learning, on the other hand, is a subset of machine learning inspired by the structure of the human brain, says Lou Bachenheimer, PhD, CTO of the Americas with SS&C Blue Prism, a global leader in intelligent automation. As you may have guessed, this helps it to “think” more like a person.
Essentially, machine learning relies on features of the data that humans describe, whereas deep learning learns those features itself from raw data. In a real-world application, deep learning might help a digital worker decipher and understand handwriting by studying a variety of writing patterns and comparing them with data about how letters should look. AI will also play a big role in the metaverse in the future.
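To make the “learning from data” idea concrete, here is a minimal sketch in Python. It is not from any of the article’s sources; the data, learning rate and step count are all made up for the demo. A single parameter is nudged toward the examples until the machine has, in effect, discovered the rule on its own:

```python
# A toy illustration of machine learning: fit y = w * x by repeatedly
# nudging w to shrink the error on known examples. No human ever tells
# the program that the answer is w = 2; it learns that from the data.

examples = [(1, 2), (2, 4), (3, 6)]  # inputs x with desired outputs y (here, y = 2x)

def train(steps=200, lr=0.05):
    w = 0.0  # the single parameter the machine "learns"
    for _ in range(steps):
        for x, y in examples:
            prediction = w * x
            error = prediction - y
            w -= lr * error * x  # gradient step: move w to reduce the error
    return w

learned_w = train()
print(round(learned_w, 2))  # converges close to 2.0
```

Real ML systems fit millions of parameters instead of one, and deep learning stacks many layers of them, but the core loop of predict, measure error, adjust is the same.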
The history of AI
In 1935, Alan Turing envisioned machines with memory that could scan that memory for information. That idea eventually spawned the first digital computers, and in 1950, Turing developed a method to assess whether a computer is intelligent. In the Turing Test, a judge poses a series of questions and then must decide whether the answers are coming from a human or a computer. If the computer fools enough people, it is considered to be thinking, or intelligent.
It wasn’t until 1955, however, that scientist John McCarthy coined the term “artificial intelligence” while writing up a proposal for a summer research conference. McCarthy, who also created LISP (the second-oldest high-level programming language and for decades the one primarily used for AI), later became the founding director of the Stanford Artificial Intelligence Laboratory.
Today, we have all kinds of “thinking” computers and robots. Have any passed the Turing Test? Yes. In 2014, a chatbot called Eugene Goostman fooled a panel of judges by posing as a 13-year-old boy. Google’s AI has also passed the test. Does that mean these computers are sentient beings? No. Many say that the Turing Test is outdated and needs to be revised as a way to determine if a computer is actually thinking like a human. Currently, no computer actually thinks like a human.
How does AI work?
This essentially boils down to how AI learns, and it’s a lot like how a parent might teach a child. “When it was young and immature, AI was trained using lots of rules and patterns, which made systems like IBM’s Deep Blue really good at chess,” says Wilson about the program that was able to beat grandmaster Garry Kasparov in a chess match in 1997. “As AI has matured, it’s been trained more through trial and error. The AI makes mistakes, and like a parent, humans provide it with course correction and necessary context. As AI gets better at certain things, some of the rules established early on can be removed (much like a child earning more independence), creating further opportunities for growth.”
Of course, AI doesn’t have a human brain’s neurons. Instead, a computer relies on programming written by humans and on algorithms that process data in order to learn.
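Wilson’s trial-and-error analogy can be sketched in a few lines of Python. This is a hypothetical toy, not code from any AI product: the learner tries actions at random, a “parent” signal confirms the good ones, and the confirmed behavior gets reinforced over time.

```python
import random

# A toy trial-and-error learner. It tries actions, gets told when it was
# right, and leans toward what worked. The action names and rewards here
# are invented purely for the demo.

random.seed(0)
actions = ["A", "B", "C"]
reward = {"A": 0, "B": 1, "C": 0}   # only action "B" is the "correct" one
counts = {a: 1.0 for a in actions}  # preference weights, initially equal

def choose():
    # Pick an action at random, weighted by current preferences.
    total = sum(counts.values())
    r = random.random() * total
    for a in actions:
        r -= counts[a]
        if r <= 0:
            return a
    return actions[-1]

for _ in range(500):
    a = choose()
    if reward[a]:        # the "parent" confirms a good choice...
        counts[a] += 1   # ...so that behavior is reinforced

print(max(counts, key=counts.get))  # "B" ends up strongly preferred
```

The course correction Wilson describes is the reward signal: early on the learner makes many mistakes, but the feedback gradually shapes its behavior without anyone hard-coding the right answer.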
AI’s ability to get smarter over time makes it capable of producing solutions for previously unsolvable or challenging problems, according to Beena Ammanath, leader of Technology Trust Ethics at Deloitte Global and author of the business guide Trustworthy AI. For example, AI can learn to see connections in data sets that are way too complex for humans. This can lead to innovations like engineering better traffic flow in cities or predicting health problems in large demographics of people, and it can work with virtual reality to create digital models and other immersive experiences.
What are the four types of AI?
Researchers commonly describe four different types of AI. Separately, AI systems are classified into two distinct groups: strong and weak AI.
Types of AI
The four types of AI are reactive machines, limited memory machines, theory of mind machines and self-aware AI. Each is progressively more complex and gets just a little closer to being like the human mind.
Reactive machines: This is the most basic AI. These machines don’t have memories to draw upon to help them “think.” They know how things should go and can even predict how something might happen, but they don’t learn from their mistakes or actions. For example, the chess computer Deep Blue could predict its opponent’s moves, but it couldn’t remember past matches to learn from them.
Limited memory machines: The next advancement of AI, limited memory machines can remember and adapt using new information. Social media AI uses this technology when it recalls previous posts you’ve liked and offers up similar content. Unlike the human mind, though, these machines don’t keep that information for the long term; it serves a short-term purpose.
Theory of mind machines: Science hasn’t yet reached this phase of AI. With theory of mind, the machine is able to recognize that humans and animals have thoughts, emotions and motives, as well as learn how to have empathy itself. With humans, this ability allowed us to build societies because we could work together as a group.
Self-aware AI: The most advanced form of AI, this describes a computer that has formed a consciousness and has feelings. At this point, machines will be able to think and react like humans, like what we see in sci-fi movies.
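The difference between the first two types is easiest to see side by side. Here is a hypothetical rock-paper-scissors sketch in Python (invented for illustration, not drawn from the article’s sources): the reactive bot responds only to the current move, while the limited-memory bot keeps a short history of its opponent and adapts.

```python
# Contrasting reactive and limited-memory AI with rock-paper-scissors.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def reactive_bot(opponent_move):
    # Reactive machine: responds to the present moment only, no memory.
    return BEATS[opponent_move]

class LimitedMemoryBot:
    # Limited memory machine: remembers recent opponent moves and
    # counters whichever one has been most frequent.
    def __init__(self):
        self.history = []

    def observe(self, move):
        self.history.append(move)

    def play(self):
        if not self.history:
            return "rock"  # arbitrary opening move
        most_common = max(set(self.history), key=self.history.count)
        return BEATS[most_common]

bot = LimitedMemoryBot()
for move in ["rock", "rock", "scissors"]:
    bot.observe(move)
print(bot.play())  # "paper": it noticed the opponent favors rock
```

Deep Blue worked like the reactive bot, evaluating the board in front of it; a social media feed works more like the limited-memory bot, adapting to what you did recently.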
Strong AI vs. weak AI
Strong and weak AI are separated by how “smart” the AI has become. With strong AI (also known as artificial general intelligence, or AGI), a machine thinks like a human. Weak AI, or narrow AI, is the dumber version—and the one we currently have. Experts are split on when we will achieve strong AI. Many believe it could happen within the next 50 years, though some say there’s a small chance it could happen in the next decade.
With strong AI, a computer could learn, empathize and adapt while performing many tasks. It could power robot doctors, or robots in other professions that demand both emotional intelligence and technical skill that grows and evolves as the robot learns through experience. Think of the personal health-care companion Baymax in the movie Big Hero 6 or the public-servant robots in the movie I, Robot.
Weak AI enables the machine to do a task with the help of humans. Humans are needed to “teach” the AI and to set parameters and guidelines on how the AI should respond to perform its tasks. Siri, Alexa, Google Assistant, self-driving cars, chatbots and search engines are all considered weak AI.
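Weak AI in miniature might look like the following Python sketch: a chatbot whose entire “intelligence” is a set of human-written rules. This is a made-up example, vastly simpler than Siri or Alexa, but the same in spirit: the humans set the parameters, and the machine cannot step outside them.

```python
# A toy rule-based chatbot. Every behavior below was "taught" by a human;
# the bot cannot respond to anything its authors didn't anticipate.

RULES = {
    "hello": "Hi there! How can I help?",
    "weather": "I can't see outside, but I hope it's sunny.",
    "bye": "Goodbye!",
}
FALLBACK = "Sorry, I don't understand that yet."

def respond(message):
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return FALLBACK  # a human must add a rule before it can handle this

print(respond("Hello, bot"))     # "Hi there! How can I help?"
print(respond("Tell me a joke")) # falls back: no rule covers jokes
```

A strong AI, by contrast, would not need a FALLBACK line at all; it would reason its way to an answer the way a person does.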
Artificial intelligence examples
Now that you know the answer to the question “What is artificial intelligence?” you might be wondering where it is. The fact of the matter is that AI is everywhere in our world. Here are just a few common ways you interact with it on a daily basis without even realizing it.
Gaming

One of the most famous examples of early AI was the chess computer we noted earlier, Deep Blue. In 1997, the computer was able to think much like a human chess player and beat chess grandmaster Garry Kasparov. This artificial intelligence technology has since progressed to what we now see in Xboxes, PlayStations and computer games. When you’re playing against an opponent in a game, AI is running that character to anticipate your moves and react. If you’re a gamer, you’ll definitely be interested in the difference between AR and VR—and how AI relates to both.
Self-driving cars

Another example of artificial intelligence is collision avoidance in cars and self-driving vehicles. The AI anticipates what other drivers will do and reacts to avoid collisions, using sensors and cameras as the computer’s eyes. While current self-driving cars still need humans at the ready in case of trouble, in the future you may be able to sleep while your vehicle gets you from point A to point B. Fully autonomous cars have already been created, but they are not currently available for purchase due to the need for further testing.
Health care

Doctors are currently using artificial intelligence in health care to detect tumors at a better success rate than human radiologists, according to a paper published by the Royal College of Physicians in 2019. Robots are also being used to assist doctors in performing surgeries. For example, AI can warn a surgeon that they are about to accidentally puncture an artery, and surgical robots can perform minimally invasive procedures while filtering out a doctor’s hand tremors.
Plus, AI comes in handy when organizing clinical trials. It can pick out possible candidates much more quickly than humans by scanning applications for the right age, sex, symptoms and more. It can also input and organize data about the candidates, trial results and other information quickly.
Comparison shopping and customer service
Don’t want to pay more? AI can help. “The insurance company Lemonade is a good example,” says Wilson. “They’re relative newcomers to the space but have already disrupted the business model used by old-guard insurance giants. Users have easy access to policies and policy information through an intelligent bot, Maya, who continually receives rave reviews from customers.” Lemonade claims their customers save up to 80% on their insurance costs with a paperwork-free signup process that takes less than 90 seconds.
Similarly, China’s Ant Group has upended the global banking industry by using AI to handle their data and deal with customers. “As the 2020s were about to dawn, Ant surpassed the number of customers served by today’s largest U.S. banks by more than 10 times—a stat that’s even more impressive when you consider that this success came before their fifth year in business,” notes Wilson.
The impact of AI in the workplace
One survey from 2018 found that 60% of the companies surveyed were using AI-enhanced software in their businesses. A few short years later, AI is everywhere in the workplace. From search engines to virtual assistants, and from plagiarism detectors to smart credit and fraud detection, there’s probably not an industry that doesn’t use some form of AI technology.
Though it’s hard to predict just how AI will be used in the future of work, it is already making the workplace more enjoyable and efficient by taking over more mundane tasks like data processing and entry. In a 2022 study by SnapLogic, 61% of workers surveyed said that AI helps them achieve a better work/life balance, and 61% believed that AI makes work processes more efficient.
Pros of AI
The Industrial Revolution created machines that amplified the power of our bodies to move and shape things. The Information Revolution created computers that could process enormous amounts of data and make calculations blindingly fast. AI is performing dynocognesis, which is the process of applying power to thinking, explains Peter Scott, author of Artificial Intelligence and You and founder of Next Wave Institute, an international educational organization that teaches how to understand and leverage AI.
By essentially being a heavy-lifting machine for thought, AI has the power to advance industries like health care, medicine, manufacturing, edge computing, financial services and engineering. “With the right set of tools and diverse AI, we can harness the power of the human-to-machine connection and build models that learn as we do, but even better,” says Ammanath. It can also enhance the performance of 3D printing, not to mention reduce human error in the process.
According to Ammanath, some benefits of AI include:
Identifying patterns through the analysis of vast amounts of complex information.
Using natural language processing to engage with people in more human-like ways. For example, it will be harder to tell if a chatbot is a human or a computer.
Expanding human capabilities, which will help to create new development opportunities and products. In the same way machinery helps humans lift heavy objects, AI will help humans think big thoughts.
Helping companies reduce human bias, improve security measures and increase transparency.
Cons of AI
Of course, some problems have popped up as we venture into this new territory. For starters, as AI capabilities accelerate, regulators and monitors may struggle to keep up, potentially slowing advancements and setting back the industry. AI bias may also creep into important processes, such as training or coding, which can discriminate against a certain class, gender or race.
“Overall, the tool using AI and its ethical implications or risks are going to depend on how it is being used,” says Ammanath. “There is no single set of procedures that define trustworthy AI, but valuable systems should be put into practice at each institution to prevent the risks of AI development and utilization, as well as to actively encourage AI to adapt as the world and customer demands change.”
Another concern about AI is the fear that robots will take away jobs. Of course, just like with any other automation advancement, new jobs have been created to improve and maintain automations. According to research by Zippia, AI could create 58 million new jobs and generate $15.7 trillion for the economy by 2030.
Some jobs will be lost, though. According to that same research, AI may make 375 million jobs obsolete over the next decade. We’re already seeing some jobs disappear. For example, toll booths that were once run by humans have been replaced with AI that can scan license plates and mail out toll bills to drivers. And travel sites run by AI that can find you the best flight or hotel for your needs have almost completely obliterated the need for travel agents.
The biggest problem lies in the fact that newer jobs created by AI will be more technical. Those unable to do more technical work due to lack of training or disabilities could be left with fewer job opportunities.
What the future of AI holds
As AI progresses, many scientists envision artificial intelligence technology that closely mimics the human mind, thanks to current research into how the human brain works. The focus will be on creating more innovative, useful AI that is affordable.
Ethical AI creation will also be an important part of future AI development. “People are concerned about ethical risks for their AI initiatives,” says Ammanath. “Companies are developing artificial intelligence boards to drive ethical behavior and innovation, and some are working with external parties to take the lead on instigating best practices.” This guidance will ensure that remedies for issues like AI bias will be put into place.
Will the world ever have self-aware AI? Experts are split on this one. Some say that with current innovations, we might one day see a machine that feels and has real empathy. Others say that consciousness is something only biological brains can achieve. For this level of AI, only time will tell.
Now that you know the ins and outs of artificial intelligence, learn about Web3 and how it will affect the future of the internet.
Robb Wilson, AI researcher and founder of OneReach.ai
Columbia Engineering: “Artificial Intelligence (AI) vs. Machine Learning”
Lou Bachenheimer, PhD, CTO of the Americas with SS&C Blue Prism, a global leader in intelligent automation
Washington Post: “Google’s AI passed a famous test — and showed how the test is broken”
Beena Ammanath, leader of Technology Trust Ethics at Deloitte Global and author of Trustworthy AI
Government Technology: “Understanding the Four Types of Artificial Intelligence”
MIT: “When will AI be smart enough to outsmart people?”
Future Healthcare Journal: “The potential for artificial intelligence in healthcare”
Deloitte: “State of AI in the enterprise”
SnapLogic: “Employees Want More AI in the Workplace”
Peter Scott, author of Artificial Intelligence and You and founder of the Next Wave Institute
NIST: “There’s More to AI Bias Than Biased Data, NIST Report Highlights”
Zippia: “23+ Artificial Intelligence and Job Loss Statistics”