Let’s talk about a question a lot of people ask themselves: What is AI? The truth is that most people don’t really know. Much of what individuals believe they know about AI technology is built on assumptions drawn from Hollywood clichés and politically motivated narratives, and those assumptions often impede AI adoption. The real story is far less about an inevitable war between cyborgs and humanity and far more about advanced solutions, grounded in the age of digital transformation, that will propel us forward into the next era of technological evolution.
Origins of AI: How did it all start?
The general concept behind “artificial intelligence” dates back to ancient history, with traces found in Greek mythology’s Talos, a gigantic bronze automaton with human-like intelligence that served as a guardian of the island of Crete against ill-intentioned outsiders and invaders. The idea persisted through early thinkers such as Descartes – who posited that the bodies of animals are nothing more than complex machines – and into modern fiction such as Mary Shelley’s “Frankenstein”.
Modern artificial intelligence is generally considered to have originated in the 1950s, alongside the dawn of the computer age. In 1950, Alan Turing published a seminal paper titled “Computing Machinery and Intelligence,” in which he explored the possibility that “machines can think.” This led to the earliest major proposal in the philosophy of artificial intelligence, the Turing Test. Today, the Turing Test serves as an investigative technique for deciding whether a computer program or machine is capable of “thinking” like a human being. Of course, “thought” is a subjective concept. Turing’s proposal therefore set aside the question of what it means to “think” and focused instead on whether a machine’s observable performance could match human cognitive competency.
In 1956, not long after Turing’s contributions, the term “artificial intelligence” was coined by John McCarthy in his “Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” which set the agenda for the Dartmouth Conference. This was a first-of-its-kind conference focused on artificial intelligence and is widely considered the moment “AI” was officially born and defined as “the science and engineering of making intelligent machines.” It was also at this conference that the sentiment “artificial intelligence is achievable” was accepted, ultimately igniting the spark that kicked off the following decades of AI research.
AI Classifications – What does it all mean?
There are many ways to classify the branches and features of artificial intelligence. Boiling that vast pool down to a few of the most commonly used terms, however, paints a fairly holistic picture for our purposes.
It’s important to note that some do not consider machine learning to be AI at all, but rather purely a field of computer science. The term is commonly used in conjunction with artificial intelligence, however, so we can (more or less) consider machine learning a branch stemming from both. Machine learning strives to teach systems, using structured and/or labeled data, how to absorb information and perform a specific task without requiring explicit programming. It is a method of data analysis that involves constructing and refining models that permit programs to “learn” through experience and repetition. Examples of machine learning in use include image and speech recognition, financial services such as spend tracking, spam and malware email filtering, customer service chatbots, and many more.
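To make the idea concrete, here is a minimal, self-contained sketch in plain Python – with invented toy data, not any particular product or library – of a program “learning” through experience and repetition: it fits a simple linear rule to labeled examples by repeatedly nudging its parameters to reduce prediction error.

```python
# Toy sketch of machine learning: fit y = w*x + b from labeled examples
# by repeatedly adjusting w and b to shrink prediction error (gradient descent).
# All names and data are illustrative.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]  # labeled pairs; hidden rule: y = 2x + 1

w, b = 0.0, 0.0          # model parameters, initially uninformed
lr = 0.01                # learning rate: size of each corrective step

for _ in range(5000):    # "experience and repetition"
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y       # prediction error on this example
        grad_w += 2 * err * x       # how the error changes with w
        grad_b += 2 * err           # ...and with b
    w -= lr * grad_w / len(data)    # nudge parameters to reduce the error
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))     # approaches 2.0 and 1.0
```

The point is the loop: nobody programmed the rule y = 2x + 1 explicitly; the program inferred it from the labeled data.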
Deep learning is a branch of machine learning that employs numerous “layers” of neural networks, often able to learn in an unsupervised manner that emulates the human brain, drawing on vast quantities of data even when that data is unstructured, unlabeled, or incomplete. Each layer of the neural network contains deep learning algorithms that carry out computations and make forecasts, repeatedly, progressively improving the precision of the results and recommendations over time. Examples of deep learning in use include digital assistants, financial fraud detection, self-driving vehicles, and many more.
Neural networks are structures of artificial neurons that can adjust to variable data inputs. They are composed of sequences of algorithms that seek to identify core connections within a set of data, in a procedure that simulates how a human brain might identify and recognize patterns. “Neural networks take input data, train themselves to recognize patterns found in the data, and then predict the output for a new set of similar data. Therefore, a neural network can be thought of as the functional unit of deep learning, which mimics the behavior of the human brain to solve complex data-driven problems,” stated Pratik Shukla and Roberto Iriondo for Towards AI, a Medium publication.
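The layered computation described above can be sketched in a few lines of plain Python. This is a hand-built toy, not a trained network or any library’s API: the weights are chosen by hand so that a two-layer network computes XOR, a function no single neuron can compute on its own – which is exactly why layers matter.

```python
# Minimal sketch of a neural network's layered computation (illustrative only).
# Each neuron takes a weighted sum of its inputs, adds a bias, and squashes
# the result through a sigmoid; layers feed their outputs to the next layer.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # One neuron per (weight-row, bias) pair.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    # Hidden layer: one OR-like neuron and one NAND-like neuron.
    hidden = layer([a, b], weights=[[10, 10], [-10, -10]], biases=[-5, 15])
    # Output layer: AND of the two hidden neurons => XOR of the inputs.
    (out,) = layer(hidden, weights=[[10, 10]], biases=[-15])
    return round(out)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))   # 0, 1, 1, 0
```

In a real deep learning system, these weights would not be set by hand – they would be learned from data via backpropagation across many such layers.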
Symbolic artificial intelligence is the practice of directly encoding human knowledge and expertise into a solution, system, or machine by employing symbols, intelligible to humans, that characterize real-world notions and rationale, then constructing ‘rules’ that direct those symbols. This human-readable, rule-based technique produces transparent recommendations that are more easily understood by the people using the solution. One example of a more symbolic AI approach is the conversational chatbot built on Natural Language Processing (NLP). “Natural Language Processing or NLP is a field of Artificial Intelligence that gives the machines the ability to read, understand and derive meaning from human languages,” stated Diego Lopez Yse for Towards Data Science, a Medium publication.
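A minimal sketch of the rule-based idea, with entirely invented rules and responses: the “knowledge” is a human-readable table mapping keywords to replies, so anyone can read exactly why the chatbot answered as it did – no statistical training involved.

```python
# Illustrative symbolic, rule-based chatbot (all rules invented for this sketch).
# The rules ARE the knowledge: human-readable, auditable, directly editable.

RULES = [
    (("refund", "money back"), "To request a refund, please provide your order number."),
    (("hours", "open"),        "We are open 9am-5pm, Monday through Friday."),
    (("hello", "hi"),          "Hello! How can I help you today?"),
]

def respond(message):
    text = message.lower()
    for keywords, reply in RULES:
        if any(k in text for k in keywords):   # fire the first matching rule
            return reply
    return "Sorry, I don't understand. Could you rephrase?"

print(respond("Hi there"))
print(respond("Can I get my money back?"))
```

Contrast this with a neural approach: here the system’s behavior is fully transparent, but it only handles cases its human authors anticipated.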
Cognitive AI is arguably the most advanced form of artificial intelligence to date. It is a hybrid of conventional numeric AI (machine learning, neural networks, and deep learning) used in conjunction with symbolic AI to enable a system to produce transparent recommendations. A Cognitive AI system comprehends large quantities of variable data while applying situational awareness and codified expert human knowledge and best practices to identify problems and recommend solutions to real-world challenges. “This unique hybrid AI combines the best of numerical/statistical approaches with the best of symbolic/logical techniques to become greater than the sum of its parts,” stated VentureBeat and Beyond Limits in a 2019 VBLab article – Beyond Conventional AI: More Intelligent, More Explainable AI.
In contrast to the “black box” problem that plagues conventional AI approaches, Cognitive AI systems are Explainable AI solutions that can disclose the reasoning behind their recommendations. They can show human users detailed information about the substantiations, contingencies, confidence levels, and ambiguities behind their decision-making process through intelligible audit trails. The key to a successful Cognitive AI tool is to build a primary set of models and propose hypothetical extensions, resulting in systems with the distinctive ability to combine encoded human expert knowledge with historical and other external data. Such systems can model hypothetical paths that predict problematic scenarios and then recommend remediation plans, regardless of whether data inputs are unstructured, unlabeled, missing, or otherwise incomplete.
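As an illustration only – a generic sketch with invented sensor names, rules, and thresholds, not Beyond Limits’ actual system – a hybrid of this kind might combine a numeric anomaly score with symbolic expert rules and keep an audit trail so each recommendation can explain itself:

```python
# Hedged, illustrative hybrid sketch: numeric scoring + symbolic rules + audit trail.
# All thresholds, sensor names, and advice strings are invented for illustration.

def numeric_score(sensor):
    # Stand-in for a trained model: normalized distance of readings from nominal.
    nominal = {"pressure": 100.0, "temperature": 60.0}
    return sum(abs(sensor[k] - v) / v for k, v in nominal.items())

EXPERT_RULES = [
    (lambda s: s["pressure"] > 130,    "Pressure above safe limit: reduce flow."),
    (lambda s: s["temperature"] > 80,  "Overheating risk: inspect cooling."),
]

def recommend(sensor):
    trail = [f"anomaly score = {numeric_score(sensor):.2f}"]  # audit trail starts here
    actions = []
    for test, advice in EXPERT_RULES:                         # symbolic layer
        if test(sensor):
            trail.append(f"rule fired: {advice}")
            actions.append(advice)
    if numeric_score(sensor) > 0.3 and not actions:           # numeric fallback
        actions.append("Anomalous readings with no matching rule: flag for review.")
        trail.append("fallback: numeric anomaly without symbolic match")
    return actions, trail   # the recommendation AND its reasoning

actions, trail = recommend({"pressure": 140.0, "temperature": 62.0})
print(actions)
print(trail)
```

The design point is the returned trail: every recommendation carries the score and the rules that produced it, which is the “explainable” half of the hybrid.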
Advanced AI Applications – How can they help solve real challenges in the real world?
More advanced enterprise AI software and cognitive solutions have been proving their value and making a permanent mark on the world, from industrial AI to power, natural resources, and renewable initiatives. “Enterprises are increasingly deploying AI systems to monitor IoT devices in far-flung environments where humans are not always present, and internet connectivity is spotty at best; think highway cams, drones that survey farmlands, or an oil rig infrastructure in the middle of the ocean,” said Beyond Limits’ CEO AJ Abdallat in an insideBIGDATA article – AI Hype: Why the Reality Often Falls Short of Expectations. “One-quarter of organizations with established IoT strategies are also investing in AI.”
Enterprise-level AI has been demonstrating its power to transform businesses for the better, supporting leaders in extracting more value from their data and production processes and streamlining operations at every level of the organization. Industries and businesses are realizing more than just a return on AI investments; they are experiencing actual revenue from artificial intelligence (RAI).
In a recent Forbes Tech Council article, AJ Abdallat explained how “AI is generating major revenue in major industries.” In the article, Abdallat referenced a 2019 report by Morgan Stanley to illustrate the point and spotlighted the following examples:
“-Machine learning is analyzing wind farms to make power predictions 36 hours in advance, enabling providers to make supply commitments to power grids a full day before delivery and increase the value of wind energy output by 20%.
-In Australia, mining companies are using autonomous trucks and drilling technology to cut mining costs, improve worker safety and boost productivity by 20%.
-If U.S. utility companies used AI-powered asset management software, costs could be cut by $23 billion annually, reducing outage frequency, overall footprints, installation times and copper cabling usage.
-A European automaker built a “fully digitized” factory and significantly reduced manufacturing time while boosting productivity by 10%.”
The Future of AI: Where can it take us?
Outside of purely business-centric purposes, powerful AI solutions have the potential to help solve some of Earth’s most complex challenges. The healthcare industry, for example, has been adopting AI solutions fueled by historical medical data, lab work, literature, and expert human knowledge. Recently, high-powered AI has been designed to aid doctors, nurses, and other leading industry professionals by reducing risk and improving patient outcomes at the point of care.
Lately, forecasting models such as Beyond Limits’ Coronavirus Dynamic Predictive Model have been designed to help provide some relief in humanity’s fight against the sudden onset of the COVID-19 pandemic. “This moment in time has uncovered just how crucial AI solutions are for the future of healthcare. Rapid changes have made it difficult to manage the pandemic’s spread and determine what the industry will look like after coming through the other side,” said AJ Abdallat in a recent Forbes Tech Council article. “Regardless of complications, it’s still the responsibility of leadership teams to use every tool at their disposal to manage the pandemic and be better prepared for the future. It matters that legitimate attempts are made by all – for the good of all – to pursue pioneering solutions in the face of this global challenge.”
Another example of AI being used for good is the exploration of applications designed to aid the fight against climate change, generating hope for a more sustainable, renewable future in which humanity’s carbon footprint looks far less discouraging. “The suggested use-cases are varied, ranging from using AI and satellite imagery to better monitor deforestation, to developing new materials that can replace steel and cement (the production of which accounts for nine percent of global greenhouse gas emissions),” wrote James Vincent in a 2019 article for The Verge on AI and climate change.
A 2019 article written by Simon Greenman for Towards Data Science, a Medium publication, also discusses the capacity for AI to “improve manufacturing efficiency by digitising, connecting and analysing end to end manufacturing processes. For example many global manufacturers are using predictive AI modelling to make turbine combustion more efficient, reduce errors and energy wastage on the production line, and improve production efficiency with advanced robotics.”
The potential use cases for artificial intelligence solutions are seemingly endless. While AI may seem like a nebulous idea from an outsider’s perspective, it is ever-present throughout our surroundings and daily lives, whether or not we are actively aware of it. What used to be perceived as merely an intriguing plot point for sci-fi narratives is now an inevitable, necessary, and welcome reality. If you’re still asking yourself, “What is AI?” it may not be far-fetched to boil it down to this statement: artificial intelligence is humanity’s focal point in its next stage of digital transformation and technological evolution.