AI Ethics Forewarns Don’t Be Caught In Machine Learning Myopia By Overlooking Neuro-Symbolic AI, Including For Autonomous Self-Driving Cars

Some say this town isn’t big enough for both of them.

What am I referring to?

Well, it could be that I am invoking an ever-popular and longstanding memorable line from an old-time film about two sparring cowpokes. Yes, in the now richly classic cowboy movie The Western Code, which was released in 1932, an ex-lawman hero rides into town and confronts the local ruffian. A head-to-head confrontation ensues in which one harshly tells the other to get out of town.

Here’s the dialogue that you surely are familiar with since it historically went viral (of a sort) and has variously been used and revitalized in countless plots since then: “I’m getting tired of your meddling. This town ain’t big enough for the both of us and I’m going to give you 24 hours to get out. If I see you in Carabinas by this time tomorrow, it’s you or me!” (quoted from the movie The Western Code).

Nearly identical lines of such hearty dialogue have subsequently appeared in all kinds of movies and TV shows, including ones about the Wild West, ones about dueling detectives, dueling lawyers, dueling spies, and so on. I openly admit though that I am not particularly referring to the original line and instead am repurposing yet another incarnation of it, doing so to succinctly describe an ongoing and potentially enduring confrontation taking place in the burgeoning field of Artificial Intelligence (AI).

You see, the AI field is mired in a somewhat subtle and yet increasingly vocal and acrimonious battle raging between the sub-symbolics versus the symbolics. One camp is telling the other camp to get out of town. A stare-down is taking place. Guns are drawn. The townspeople are cowering and waiting to see what will happen.

This heady situation of today’s AI could also be likened to the infamous family spats between the Hatfields and the McCoys, those two legendary families in the West Virginia and Kentucky areas that fiercely feuded in the late 1800s. But in lieu of Hatfield and McCoy going toe-to-toe, we have the modern-day AI-focused sub-symbolics camp and the still-in-there AI symbolics family rancorously battling each other.

Here’s how this in-house AI feuding came to be.

And, I might add, we need to give this crucial attention since it could determine the future of AI and perhaps accordingly the future of society if you assume that AI is indeed a budding transformative force that will alter what we do and the world that we live in. This divisive behind-the-scenes debate comes filled with AI Ethics and Ethical AI ramifications. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

Find a cozy spot to sit down and enjoy the tale of joy and woe, as it were.

To begin with, you might be scratching your head since you’ve seemingly never heard of the AI sub-symbolics, nor do you recognize the moniker of the AI symbolics either. Don’t feel disheartened or embarrassed since those are not the usual everyday popularized catchphrases (they are the semi-secret jargon used by AI insiders).

I would be willing to bet my treasured gold coin that you’ve undoubtedly heard of Machine Learning (ML) and Deep Learning (DL), which for the sake of discussion serve as the frontman for the AI sub-symbolics. The use of ML/DL is the talk of the town. Headlines that blare out the latest in AI are bound to toss around the Machine Learning or Deep Learning phrasing and catch your attention. The wording does indeed roll off the tongue.

By and large, you can construe the stoutest devotees of ML/DL as being card-carrying members of the AI sub-symbolics camp. The basis for being labeled as sub-symbolic is that the crux of the ML/DL approach entails trying to attain a semblance of computationally modeled “intelligent” behavior by using relatively arcane mathematical formulations.
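
To make that sub-symbolic notion a bit more tangible, here is a minimal sketch (in Python, with made-up placeholder weights rather than anything learned from actual data) of a single artificial neuron, the basic building block of ML/DL. Notice that its “reasoning” is nothing but arithmetic over numeric weights, with no explicit rule anywhere in sight:

```python
import math

# A single artificial neuron: the quintessential sub-symbolic unit.
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed through a sigmoid activation into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative placeholder values; in real ML/DL these weights would be
# learned from training data rather than hand-picked.
weights = [0.8, -1.2, 0.3]
bias = -0.1

score = neuron([0.5, 0.2, 0.9], weights, bias)
print(f"Output: {score:.3f}")  # roughly 0.582 -- but *why*? The weights don't say.
```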

Qualms about this sub-symbolic approach include the difficulty of being able to explain how the ML/DL has reached some choice or made a decision of one kind or another. The whole shebang can be so mathematically dense and complex that you are not going to easily discover what led to a specific outcome. Instead, there is a data-infused and computed morass that you have to hope did the right thing and calculated out an appropriate answer. This lack of interpretability has given rise to a subfield of AI known as Explainable AI, typically abbreviated as XAI, and for which the aim is to craft AI that can somewhat explain itself, see my coverage at the link here.

I want to share with you additional background about the nature of ML/DL but before I do so, it might be helpful for me to briefly indicate what the contrasting AI symbolics camp entails. This will give you a comparative basis for why this matter has become a Hatfield-McCoy feud or a classic “get out of town” rivalry.

Shifting gears, turn your eye toward the symbolic AI camp.

The symbolics camp is undoubtedly known to any of you that are aware of the AI that arose in the 1980s and 1990s. This was the earlier heyday of AI that touted the use of knowledge-based systems (KBS), sometimes anointed as expert systems (ES). The gist was that AI systems were being devised based on explicitly identifying the various rules or logic that underlie presumed “intelligent” behavior. You would work with a human expert in, say, medicine or finance, ferret out what kinds of logical rules they use, and establish a so-called computer-based knowledge base that kind of replicated those articulated cognitive rules. These were considered rules-based systems (RBS) or equivalently known as KBS or ES.
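
For a taste of what that symbolics style looks like in practice, here is a minimal sketch of a rules-based system, using classic forward chaining over a handful of invented medical-sounding rules (the rules are purely illustrative and not drawn from any actual expert system):

```python
# Each rule maps a set of required facts to a conclusion it can assert.
RULES = [
    ({"fever", "cough"}, "possible-flu"),
    ({"possible-flu", "high-risk-patient"}, "recommend-doctor-visit"),
]

def forward_chain(facts):
    # Repeatedly fire any rule whose conditions are all satisfied,
    # adding its conclusion to the set of known facts.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "high-risk-patient"}))
# The output includes "possible-flu" and "recommend-doctor-visit", and you
# can read off exactly which rules fired -- the hallmark of symbolic AI.
```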

Eventually, the rules-based approach and the symbolics camp fell into disfavor. The prevailing opinion was that we could only elucidate a fraction of the rules that really underlie knowledge. In addition, a confounding factor was the use of common-sense knowledge. It seemed that to get to use expert knowledge you often would require a basis or foundation that contained common-sense knowledge, such as knowing that birds fly, the sky is blue, and other normal everyday awareness. We still to this day have not successfully cracked the code on how to computationally embody common sense, though the pursuit stridently continues, see my analysis at the link here.

You might also be aware that when that era’s AI mania began to recede, the AI field was somewhat set aside and no longer given the same outsized attention it once had. This was when the AI field went into a bit of a slump, a period now notoriously known as the dour AI Winter. The wintry naming was catchy because AI ostensibly went into hibernation and was no longer stomping around like Godzilla.

The hoped-for AI that was to be based on a symbolic approach appeared to have been a bust. Sadness ensued. Wintertime was solemn and left many unsure of what the AI field would do next. The symbolics avenue got a bad reputation as being a bit of a dead-end. No sense in putting your eggs in that basket.

Meanwhile, the eventual advent of Machine Learning and Deep Learning was percolating behind the scenes. Outsiders seem to think that ML/DL just magically appeared out of nowhere. You might compare this notion to those massively popular movie stars that you sometimes think had an overnight success and had sprung out of the blue. Most of them were toiling away for years in bit roles and taking on any acting engagements that they could.

The same case can be made for ML/DL. Effort had long been going toward advancing ML/DL. It wasn’t until more modern times that the needed computational resources became available. This coincided to a great extent with the emergence of cloud computing. In addition, lots of data is typically needed for ML/DL, for which large-scale databases and other data lakes and data warehouses were being put together and made available.

A grand convergence was underway: improvements in ML/DL, readily available datasets, and less costly computing that could be accessed and shared remotely all came together to bolster the movement toward adopting and using ML/DL. The new kid on the block had been years in the making.

Here are some key takeaways from this brief history lesson about the trials and tribulations of the AI field:

  • There is little doubt that today’s AI is abundantly focused on Machine Learning and Deep Learning, thus knowingly or unknowingly embracing the AI sub-symbolics camp.
  • There is scant doubt that today’s AI foregoes even a modicum of attention toward the AI symbolics camp, whereby the use of KBS, ES, and RBS or similar tech is relegated to the backroom and rarely given any room to breathe. Some blatantly ignore the symbolics camp, while others express outright disdain for it.

We are now poised to explore the question of who is supposed to get out of town.

There are outspoken proponents of the sub-symbolic approach that go out of their way to insist that the sub-symbolic avenue is the only way to ultimately attain a fully achieved AI that would presumably slide into sentience. In that insistent stance, they outrightly denigrate the symbolic camp. The point seems to be that all other styles or approaches of AI ought to bow down to the sub-symbolics approach and toss in the towel. Take a knee. Do not waste precious resources on the “losing” side, and instead jump on the winning side.

As an aside, I am not suggesting that everyone is saying this, and am only highlighting what some of the more vocal and staunchly adherent ML/DL proponents are stating or implying. If you are fully into the sub-symbolics camp and yet are not taking this rather adamant and oppositional stance, your openness is duly acknowledged.

But back to those that are staking out their turf. It is one thing to try and showcase your turf, while it is another to urge that all others give up their turf too. Going that extra mile of pushing out the other homesteaders would seem unduly overbearing. In essence, some of the sub-symbolics proponents seem to believe that all the air in the room should rightfully and exclusively be wholly reserved for the ML/DL devotees. You either have a cattle ranch or you don’t. No agricultural farming is allowed.

Borrowing from the Old West, this town ain’t big enough for the both of them (i.e., the sub-symbolics and the symbolics), and furthermore, the sub-symbolics radical adherents are going to give you symbolics a tightly bounded 24 hours to get out of town (so exhorts sub-symbolics of that mindset).

We don’t know for sure what the 24 hours consists of, since the deadline is obviously being used metaphorically rather than literally. One supposes that some sub-symbolics would want the 24 hours deadline to have already passed and ergo that the symbolics should already have dropped their gear and left town, riding as far away and as fast as they can. Disappearing beyond the horizon, never to be seen again. The sun has set on the symbolics.

Others might be more “reasonable” and suggest that the symbolics can take a few months or possibly even a couple of years to close the books. Start now though by gradually putting aside those grant requests. Pack up the labs. No need to continue publishing since it is hopeless anyway. All told, get into a position to turn out the lights and go do something else, as long as it isn’t immersed in that symbolic “malarkey” that offers nothing other than disappointment and futility.

Wow, you can imagine how those are fighting words.

Should we go ahead and opt to put all our eggs into the AI sub-symbolic basket?

This raises thorny AI Ethics considerations and prompts related questions about the present and future of AI.

For example, you might be thinking this is merely infighting within the field of AI and does not rise to a level of importance at which others ought to weigh in. Let the AI insiders go toe-to-toe with the other AI insiders. They need to figure out what is right and what is possibly misguided. No need and no basis for anyone else to intervene.

The challenge of that perspective is that if you believe that AI is going to profoundly change society and our lives, the notion of letting AI insiders work things out might not produce the results that society as a whole would cherish. Each day we are witnessing a tsunami of new AI-based applications that are being avidly shoveled out into society. It would seem rather narrow-minded to simply allow the AI insiders to ostensibly make such a likely monumental and consequential choice on their own accord.

It takes a village to craft and suitably field AI.

Before getting into the meat of those several paths, let’s establish some additional foundational particulars.

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we can set the stage further by exploring what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to rectify that AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
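
To illustrate the gist of that AI-monitoring-AI notion, here is a minimal sketch of such an overseer; the disparity threshold, group labels, and class name are my own illustrative assumptions, not any established standard:

```python
from collections import defaultdict

class EthicsMonitor:
    """Tracks another AI system's decisions and flags group disparities."""

    def __init__(self, disparity_threshold=0.2):
        # Illustrative assumption: flag approval-rate gaps above 20%.
        self.threshold = disparity_threshold
        self.approved = defaultdict(int)
        self.total = defaultdict(int)

    def record(self, group, approved):
        # Log each decision rendered by the monitored AI system.
        self.total[group] += 1
        self.approved[group] += int(approved)

    def check(self):
        # Compare approval rates across groups and flag large gaps.
        rates = {g: self.approved[g] / self.total[g] for g in self.total}
        if rates and max(rates.values()) - min(rates.values()) > self.threshold:
            return f"ALERT: approval-rate disparity detected: {rates}"
        return f"OK: {rates}"

monitor = EthicsMonitor()
for group, decision in [("A", True), ("A", True), ("B", False), ("B", False)]:
    monitor.record(group, decision)
print(monitor.check())  # ALERT, since group A sits at 1.0 and group B at 0.0
```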

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature Machine Intelligence), which my coverage explores at the link here, and which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might readily guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the norms being established for Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense and nor has any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
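
In rough terms, that workflow can be shown in just a few lines. Here is a minimal sketch using Python and scikit-learn, with a tiny invented loan-decision dataset standing in for the historical data:

```python
from sklearn.tree import DecisionTreeClassifier

# Historical ("old") data: each row is [income, years_employed], and each
# label is the past human decision (1 = approved, 0 = denied). Invented
# numbers, purely for illustration.
X_history = [[55, 4], [30, 1], [80, 10], [25, 2], [60, 7]]
y_history = [1, 0, 1, 0, 1]

model = DecisionTreeClassifier()
model.fit(X_history, y_history)  # the model finds mathematical patterns

# New data arrives; the patterns from the old data render the decision.
print(model.predict([[52, 5]]))  # likely [1], per the learned patterns
```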

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
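
Here is a minimal sketch of that biases-in, biases-out dynamic, using wholly synthetic data in which the historical human decisions systematically denied one neighborhood regardless of merit; the fitted model then dutifully mimics the pattern:

```python
from sklearn.tree import DecisionTreeClassifier

# Features: [qualification_score, neighborhood_code]. In this invented
# history, applicants from neighborhood 1 were always denied, no matter
# how qualified they were.
X_history = [[90, 0], [85, 0], [40, 0], [90, 1], [85, 1], [40, 1]]
y_history = [1, 1, 0, 0, 0, 0]

model = DecisionTreeClassifier()
model.fit(X_history, y_history)  # the bias is now baked into the model

# Two equally qualified applicants, differing only by neighborhood:
print(model.predict([[88, 0], [88, 1]]))
# -> [1 0]: the bias submerged in the data resurfaces in the output.
```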

Not good.

Let’s now return to the debate about who is considered in town, and likewise who ought to be in town or ought to not be in town.

The glory right now is clearly shining on the AI sub-symbolics camp and especially the expanding use of Machine Learning and Deep Learning. One worry is that even aspects such as our approach to AI Ethics are being subliminally warped by the preoccupation with ML/DL. In essence, some AI Ethics methods concentrate only on AI that is composed of the ML/DL facilities. They don’t seem to acknowledge that a likewise set of AI Ethics methods also needs to be established for the AI symbolics camp. It is as though the ML/DL approach is the only game in town.

Fortunately, the essence of most AI Ethics precepts is broad enough to apply to both camps. Nonetheless, there is usually a sway toward showing how to implement Ethical AI in ML/DL rather than via more symbolic AI means. You can point out that this makes sense since the preponderance of AI is being done in ML/DL these days, thus you get more bang for the buck in terms of AI Ethics tuned to the hotter approach.

Does all of this suggest that the only way forward is via the AI sub-symbolics path?

For those that believe we will eventually attain sentient AI, or perhaps Artificial General Intelligence (AGI), one line of thinking is that it will be exclusively via scaling up with our existing sub-symbolics ML/DL efforts. Throw more and more computing power into the mix. Make those computational pattern matching models massive in size. Go big, as they say, and don’t look back.

Those same proponents might also make a seemingly logical assertion that any attention and use of our limited AI development resources toward other avenues of AI is not only misguided but altogether detrimental to wanting to arrive at sentient AI or AGI (for my coverage on AGI see the link here). The claim is that the symbolic camp is inadvertently draining energies that properly should be going to the sub-symbolic realm. In this viewpoint, the AI symbolics family is both misaimed and worse still diluting and distracting from AI sub-symbolics.

In short, per this dogmatic perspective, AI symbolics needs to get out of town (or, alternatively, drop the symbolics “charade” and embrace the true path of the AI sub-symbolics). There is only room for one cowpoke in these here parts, namely the rough and ready AI sub-symbolics.

Researcher Gary Marcus describes the rise of what is referred to as “Alt Intelligence” on his Substack blog: “Alt Intelligence isn’t about building machines that solve problems in ways that have to do with human intelligence. It’s about using massive amounts of data – often derived from human behavior – as a substitute for intelligence. Right now, the predominant strand of work within Alt Intelligence is the idea of scaling. The notion that the bigger the system, the closer we come to true intelligence, maybe even consciousness” (posting of May 14, 2022, entitled “The New Science Of Alt Intelligence”).

All of this could be quite tempting. If indeed the AI sub-symbolics avenue is the right and only path, it would be indubitably sensible to have everyone jump into the water and be swimming together toward the AI aspirational goal. We should seemingly support the winning horse and not spend time on those areas of AI that apparently will hold us back or slow down the best path.

Are we making, or should we make, that kind of wildcard bet?

Suppose all the AI symbolics agree without hesitation to join the ranks of the AI sub-symbolics. No more AI symbolic efforts exist. All of that AI symbolics-oriented technology, research, methods, and the rest of the kit and caboodle are packed into boxes and allowed to gather dust.

If the guess was right that AI sub-symbolics was the chosen one, okay, we made a great choice, though perhaps a lucky one at that. Sometimes you do win the lottery.

If the guess was wrong and we hit some limitations of AI sub-symbolics that perhaps we didn’t see coming, the question will be whether we shot ourselves in the foot by having stopped efforts on the AI symbolics angle. Sure, we could begrudgingly then say that we should restart on the AI symbolics front, but this might be a lot harder than it seems. Those that once were at the forefront of AI symbolics might no longer be available or might find that doing a reboot is arduous, such that we have actually fallen behind rather than merely having been waiting patiently in limbo.

In fact, some strident proponents of AI symbolics argue that AI sub-symbolics are definitely going to hit a gigantic roadblock. AI symbolics will be the only sufficient means of getting past that hurdle. Yet if AI symbolics has been battered and treated atrociously all that time, the chances of AI symbolics being able to expediently swing to the rescue will be grievously hampered. Putting all your eggs into the wrong basket can have adverse consequences.

You don’t find many in the AI symbolics camp that are urging the AI sub-symbolics to leave town.

I mention this because you might naturally assume that what is good for the goose is good for the gander. When the AI sub-symbolics opt to talk up sub-symbolics and simultaneously talk down the AI symbolics, it would be a nearly irresistible urge to have the AI symbolics camp talk up symbolics and trounce the AI sub-symbolics family. There isn’t much of that going around. Remarkably so, especially in comparison to the bashing being meted out by some in the AI sub-symbolics camp upon the heads and careers of the AI symbolics followers.

Into this mix comes the proverbial middle ground or mashup that is being referred to as neuro-symbolic AI or simply hybrid AI.

Here’s the deal.

We could encourage both cowpokes to live in this same town and possibly even get them to work together. They don’t have to be on widely separated sides of the territorial landscape. They don’t need to warily eye each other and anxiously have their trusty holstered revolver at the ready. Instead, they could (heaven forbid) mutually respect each other and try to dovetail together to attain sentient AI or AGI.

When you bring together the AI sub-symbolics camp and the AI symbolics camp you get yourself a mashup that is jointly sub-symbolics and symbolics or to be equally fair it is jointly symbolics and sub-symbolics. The easiest catchphrase is to call this neuro-symbolic AI, meaning a combining of the artificial neural network (ANN) underpinnings within Deep Learning in conjunction with the symbolic AI. But since neuro-symbolic sounds like a lofty or esoteric concept, it is easiest to just say hybrid AI.
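
As a concrete flavor of the mashup, here is a minimal sketch of a neuro-symbolic arrangement; the classifier is a hard-coded stand-in (in a real hybrid it would be a trained deep neural network), and the tiny knowledge base is my own illustrative invention, echoing the earlier birds-fly example of common-sense knowledge:

```python
def neural_classifier(image):
    # Stand-in stub for a Deep Learning model returning (label, confidence).
    # Hard-coded for illustration; a real network would compute this.
    return ("penguin", 0.94)

# The symbolic side: explicit, inspectable common-sense rules with
# exceptions -- readable in a way the network's weights are not.
KNOWLEDGE_BASE = {
    "penguin": {"is_a": "bird", "can_fly": False},  # the exception
    "sparrow": {"is_a": "bird", "can_fly": True},
}

def reason_about(image):
    label, confidence = neural_classifier(image)  # sub-symbolic step
    facts = KNOWLEDGE_BASE.get(label, {})         # symbolic step
    if confidence > 0.9 and facts.get("is_a") == "bird":
        verdict = "can fly" if facts["can_fly"] else "cannot fly"
        return f"This is a bird ({label}) and it {verdict}."
    return "Insufficient confidence or knowledge."

print(reason_about("photo.jpg"))
# -> "This is a bird (penguin) and it cannot fly."
```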

Lest you think that this seems like a viable and amicable solution, realize that rarely in life is there a free lunch. There are AI sub-symbolics that decry the neuro-symbolic or hybrid AI approach. Some exclaim that you are attaching a ten-ton weight to AI sub-symbolics that will slow progress. You are once again draining resources from the purity of AI sub-symbolics.

On and on it goes.

Some would argue that we must pursue the neuro-symbolics or hybrid AI, or else the AI symbolics arena will be left adrift and continue to be castigated as not worthy. By getting attached to the currently heralded AI sub-symbolics and ML/DL, the symbolics will still have relevancy and be kept in the game. This is our insurance policy underlying the pursuit of AI. You see, if the AI sub-symbolics camp doesn’t get us to sentient AI or AGI, the AI symbolics will still be up and running, ready to save the day.

As an aside, some furtively whisper that this will be akin to the return of the Jedi.

In whatever way you want to characterize the situation, the bigger picture would seem to be whether we are willing to pick one particular path at this time and commit to just that path, forsaking all others. It is a gutsy move. I would dare say a risky move in that if AI is that vital to society, we could be doing a disservice to society all told and find ourselves boxed in with AI that falls short of what we were striving to reach (admittedly, there is a debate about whether we even should be trying to reach sentient AI or AGI, see my discussion at the link here).

In general, I tend to concur with the sentiment expressed in the earlier cited piece on Alt Intelligence: “Let us all encourage a field that is open-minded enough to work in multiple directions, without prematurely dismissing ideas that happen to be not yet fully developed. It may just be that the best path to artificial (general) intelligence isn’t through Alt Intelligence, after all” (per Gary Marcus on his Substack blog as earlier mentioned herein).

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the utility of neuro-symbolic AI or hybrid AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
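
For those that like their taxonomies compact, the levels just described can be boiled down to a simple data structure (a paraphrase of the standard SAE J3016 level definitions, with my own shorthand annotations):

```python
from enum import IntEnum

class DrivingLevel(IntEnum):
    NO_AUTOMATION = 0      # the human does all of the driving
    DRIVER_ASSISTANCE = 1  # a single ADAS feature (e.g., adaptive cruise control)
    PARTIAL = 2            # semi-autonomous; human must stay fully attentive
    CONDITIONAL = 3        # semi-autonomous; human must be ready to take over
    HIGH = 4               # true self-driving within a bounded domain
    FULL = 5               # true self-driving anywhere a human could drive

def is_true_self_driving(level: DrivingLevel) -> bool:
    # Per the discussion above, only Levels 4 and 5 qualify.
    return level >= DrivingLevel.HIGH

print(is_true_self_driving(DrivingLevel.PARTIAL))  # False
print(is_true_self_driving(DrivingLevel.HIGH))     # True
```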

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And That Hybrid AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

We shall begin by heaping praise upon the use of ML/DL in the realm of bringing forth AI-based self-driving cars. Several key aspects of self-driving cars have come to fruition as a result of using AI sub-symbolics techniques and technologies encompassing Machine Learning and Deep Learning. For example, consider the core requirement of having to detect and analyze the driving scene that surrounds an AI-based self-driving car.

You’ve undoubtedly seen videos or pictures of self-driving cars that showcase the myriad of mounted sensors on the autonomous vehicle. This is often done on the rooftop of the self-driving car. Sensor devices such as video cameras, LIDAR units, radar units, ultrasonic detectors, and the like are typically included on a rooftop rack or possibly affixed to the car top or sides of the vehicle. The array of sensors is intended to electronically collect data that can be used to figure out what exists in the driving scene.

The sensors collect data and feed the digitized data to onboard computers. Those computers can be a combination of general-purpose computing processors and specialized processors that are devised specifically to analyze sensory data. By and large, most of the sensory data computational analysis is undertaken by ML/DL that has been crafted for this purpose and is running on the vehicle’s onboard computing platforms. For my detailed explanations about how this works, see the link here and the link here, just to name a few.

The ML/DL computationally tries to find patterns in the data such as where the roadway is, where pedestrians are, where other nearby cars are, and so on. All of this is crucial to being able to have the self-driving car proceed ahead. Without the ML/DL performing the driving scene analysis, the self-driving car would be essentially blind as to what exists around the autonomous vehicle.
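
To give a schematic sense of that perception step, here is a minimal sketch; every name and field in it is hypothetical, and the detector is a stub where a real AI driving system would run trained deep networks over actual camera, LIDAR, radar, and ultrasonic streams:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "pedestrian", "vehicle", "lane-marking"
    distance_m: float  # estimated distance from the autonomous vehicle
    bearing_deg: float # estimated direction relative to the vehicle heading

def run_perception(sensor_frame):
    # Stand-in for the onboard ML/DL pattern matching across the fused
    # sensor data; hard-coded detections purely for illustration.
    return [
        Detection("pedestrian", distance_m=12.0, bearing_deg=-5.0),
        Detection("vehicle", distance_m=30.0, bearing_deg=2.0),
    ]

for obj in run_perception(sensor_frame={"camera": ..., "lidar": ...}):
    print(f"{obj.label} at {obj.distance_m} m, bearing {obj.bearing_deg} deg")
```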

In brief, you can readily make the case that the use of ML/DL is essential to the emergence of AI-based self-driving cars. This in turn can be touted as a success story for the AI sub-symbolics approach.

But there is some confusion among many that somehow the entirety of the Artificial Intelligence aspects regarding self-driving is nothing other than ML/DL. This is not the case. Many other vital elements of the AI self-driving stack use a variety of other AI techniques and technologies. Whether you are willing to call those other AI elements symbolic might be argued, though referring to those other elements as sub-symbolic would almost assuredly not be applicable.

Let’s examine those other facets of an AI driving system.

Once the computational analyses are done of the driving scene, the other parts of the AI self-driving capability have to figure out what driving actions are to be undertaken. There is usually a virtual model being kept computationally that represents the driving scene. The AI seeks to plan out the next moves of the autonomous vehicle. In addition, the AI needs to send commands to the driving controls as befits whatever next action the AI has ascertained should be performed by the self-driving car.
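
Here is a correspondingly minimal sketch of that planning-and-actuation side; it is greatly simplified and entirely hypothetical (real motion planners are vastly more elaborate), but it conveys how explicit, symbolic-flavored driving logic sits atop the perceived scene:

```python
def plan_next_action(world_model):
    # Explicit, inspectable driving rules over the virtual model of the scene.
    for obj in world_model["objects"]:
        if obj["label"] == "pedestrian" and obj["distance_m"] < 15.0:
            return {"throttle": 0.0, "brake": 1.0, "steer_deg": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer_deg": 0.0}

def send_to_controls(command):
    # Stand-in for dispatching commands to the vehicle's actuators.
    print(f"Issuing driving command: {command}")

# A toy world model, as might be assembled from the perception step.
world_model = {"objects": [{"label": "pedestrian", "distance_m": 12.0}]}
send_to_controls(plan_next_action(world_model))
# -> full braking, since a pedestrian is within the safety threshold
```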

All in all, you could reasonably assert that a modern-day AI-based self-driving car contains both AI sub-symbolics-related components and also contains AI symbolics-oriented components. Those components all need to work in harmony. It is fair to say that a well-devised self-driving car is effectively a neuro-symbolic AI system or an example of hybrid AI.

We now return to the two gunslingers facing off with each other. Recall that there is a contentious posture by some that the AI sub-symbolics approach is the only cowpoke that belongs in AI town. We can ask some pointed questions about this, doing so in the context of AI-based self-driving cars.

Could we dispense with the portions of the AI stack that are not AI sub-symbolics?

In today’s world, no, since this would pretty much cut out a sizable chunk of the AI driving system and the self-driving car would be unable to sufficiently operate on a roadway.

Could we exclusively use AI sub-symbolics and devise a self-driving car without making use of any other kind of AI?

Generally, the answer would seem to be no.

Some though theorize that we could ultimately do so. Their reasoning is as follows. We know that humans drive cars. We know that humans use their brains and minds for undertaking the driving act. If you ascribe to the notion that the AI sub-symbolics approach is the equivalent, or will someday achieve the equivalent, of what happens in human brains and minds, this suggests that the only AI needed to drive a car would indeed be AI sub-symbolics. This seems rather ironclad. Maybe so, maybe not. There are various holes in this logic, which I’ve discussed elsewhere in my columns.

Conclusion

Should anyone be getting out of town?

The idea that some part of the AI constellation of techniques and technologies is more important than the other is an ongoing and somewhat reasoned debate. An unsettling overreach seems to be the growing claim that only one part is the “proper” approach to all of AI, implying or explicitly demanding that the other AI approaches should be abandoned in favor of that vaunted part.

In William Shakespeare’s Romeo and Juliet there is a notable line that says this: “Two households, both alike in dignity.” Perhaps we can garner from that telling line the realization that dignity should be bestowed on both the AI sub-symbolics and the AI symbolics with equal fervor.

A mounting concern is that if this bickering continues to simmer and ferment, we might see the AI field become unduly splintered and fragmented. It could turn back the clock or at least confound advances in AI. You might recall another line from Romeo and Juliet that comes to mind, namely that the bickering could bring “A plague o’ both your houses!”

A final comment for now and a slight twist to this tale.

Suppose that the implied notion of dividing AI into two camps consisting of sub-symbolics and symbolics is itself a misleading bifurcation (i.e., a false dichotomy). It could be that to attain sentient AI or AGI we have to revisit and revamp our existing assumptions and see the world of AI in completely different terms. In that sense of things, the moniker “hybrid AI” might be the most suitable naming, encompassing a catchall of AI avenues that we might not yet even have thought of.

Let’s give Shakespeare the last word on this: “What’s in a name? That which we call a rose by any other word would smell as sweet.”