
How Close We Are to Fully Self-Sufficient Artificial Intelligence

If you’ve followed the world of pop culture or tech for any length of time, then you know that advances in artificial intelligence are heating up. In reality, AI has been the talk of mainstream pop culture and sci-fi since the first Terminator movie came out in 1984. These movies present an example of something called “artificial general intelligence.” So how close are we to that?
No, not how close we are to the Terminators taking over, but how close we are to having an AI capable of navigating nearly any problem it’s presented with.
What is artificial general intelligence?
Technically defined, artificial general intelligence, or AGI, is a machine with the capacity to understand or learn intellectual tasks at the same level of aptitude as humans. Most AIs today, by contrast, are highly specialized.
Computer programmers and scientists use machine learning algorithms to develop specialized AIs: artificially intelligent algorithms that are as good as, if not better than, humans at one specific task, for example, playing chess or picking out which squares in a segmented picture contain a street sign, i.e., CAPTCHAs.
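To make “specialized” concrete, here is a minimal sketch in Python of a narrow AI: a small classifier trained to do exactly one thing, recognize handwritten digits. The scikit-learn library and its bundled digits dataset are assumptions chosen for illustration (a stand-in for the street-sign example above), not a description of how any real CAPTCHA system works.

```python
# A minimal sketch of a "narrow" AI: a classifier trained to do exactly one
# thing -- recognize handwritten digits. Assumes scikit-learn is installed;
# the digits dataset here is a stand-in for any single-task image problem.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small feed-forward neural network, specialized for this one task only.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```

Ask this model to do anything other than read 8x8 digit images, say, spot cats or play chess, and it is useless until it is rebuilt and retrained on new data. That narrowness is exactly what separates today’s AIs from AGI.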
Recent advances in AI and machine learning, while not technically close to real AGI, have created a sense that AGI is close, as in a matter of years or decades away. It also doesn’t help that some of the world’s top minds, like Elon Musk, have called out AI as one of the biggest existential threats humanity has ever faced.
Some of the biggest advancements in AI today have been artificial neural networks, which are technologists’ way of mimicking, in code, the way human brains work. That said, defining what exactly makes something intelligent is hard.
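As a rough picture of what “mimicking the brain with code” looks like at its simplest, here is a toy neural network written from scratch with NumPy. The two-layer layout, the XOR problem, and all of the numbers are illustrative choices for this sketch, not a description of any production system.

```python
# A toy two-layer neural network learning XOR, written from scratch with NumPy.
# Each "neuron" is just a weighted sum passed through a squashing function --
# a very loose software analogy to a biological neuron firing.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    hidden = sigmoid(X @ W1)           # forward pass
    output = sigmoid(hidden @ W2)

    # Backpropagation: nudge the weights to reduce the prediction error.
    error = y - output
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ grad_out * 0.5
    W1 += X.T @ grad_hidden * 0.5

print(output.round(2))  # should end up close to [[0], [1], [1], [0]]
```

Each artificial “neuron” is just a weighted sum squashed through a function; stack enough of them and tune the weights on enough data and you get today’s image and speech recognizers, but nothing in the arithmetic itself looks like understanding.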
Humans have multiple forms of intelligence, and we likely need to take a closer look at just what it means to be “intelligent” if we want to determine how close artificial general intelligence is to reality.
What does it mean for something to be intelligent?
Humans have both problem-solving intelligence and emotional intelligence. Emotional intelligence is arguably the trait that makes something more human, whereas the ability to solve problems and understand things is something computers have been mimicking since the beginning of their existence.
Machine AIs score at around the same level as a four-year-old child when it comes to taking IQ tests, so they’re not quite at the human level of deduction yet, either.
Emotional intelligence, however, is going to be the harder task for artificial general intelligence to conquer. Emotions are fluid and inexact, not something that works well with the hard-coded nature of machines. The other facet of emotional intelligence is understanding the tone and meaning behind things. For example, if someone waves a white flag in battle, a computer might recognize it for what it literally is: a white flag waving. Our emotional intelligence, however, gives us the context to understand that waving a white flag is likely a call for surrender.
So, true intelligence combines the ability to problem-solve and understand with the ability to interpret and read between the lines. This is true not only on the receiving side but also on the giving side: in order for computers to have artificial general intelligence, they need not only to understand human tone and context, but also to be able to dish it out themselves.
Believe it or not, machines and AIs are making progress in these areas of intelligence. Natural language processing and generation algorithms are bringing us closer to AIs that can talk and sound like us, at least on the surface. Google Home and Amazon Alexa are giant data pools that allow programmers to design better and better AIs. What better way to teach a computer how to talk than by putting it in humans’ homes and telling the humans to talk to it? Data is key to developing AI.
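To see why an assistant can sound human “on the surface” while understanding nothing, here is a toy text generator in Python that babbles by imitating word pairs from whatever examples it has seen. The tiny sample corpus and the babble helper are made up for this sketch; the more (and better) data you feed it, the more natural it sounds, which is the “data is key” point in miniature.

```python
# Toy Markov-chain text generator: it "talks" purely by imitating word pairs
# seen in its training data, with no understanding of meaning or context.
import random
from collections import defaultdict

corpus = (
    "turn on the lights. turn off the lights. play some music. "
    "what is the weather today. set a timer for ten minutes."
)

# Build a table of which words tend to follow which.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def babble(start: str, length: int = 8) -> str:
    """Generate text by repeatedly picking one of the observed next words."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("turn"))   # e.g. "turn off the weather today."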
That said, as anyone who owns a Google Home or Amazon Alexa might agree, judging by those devices alone, AGI is probably pretty far off.
The next step in this pursuit of understanding AGI, and determining how close we are to it, is looking at artificial consciousness. This is the idea that an AI can achieve a sort of personhood, or at least a belief in its own personhood.
Understanding artificial consciousness
Artificial consciousness brings us to a more ethical discussion of AGI. Can a machine ever achieve consciousness in the same way humans can? And if it could, would we need to treat it as a person?
Scientifically speaking, consciousness comes directly from biological input being interpreted and reacted to by a biological creature, such that the creature becomes its own thing. If you remove the qualifier “biological” from that definition, then it’s not hard to see how even existing AIs could already be considered conscious, albeit stupidly so.
One thing that defines human consciousness is the ability to recall memories and dream about the future. In many respects, this is a uniquely human capability. If a machine could do this, then we might define it as having artificial general intelligence. Dreams are superfluous to logical life, yet they define our existence as humans. If a computer could dream for itself, not because it was programmed to do so, that might be the biggest indicator that AGI is here.
Now that we have a deeper grasp of consciousness, we can define what it would take for a machine or AI to be artificially generally intelligent: it needs to be able to process and understand emotions, solve problems, express forms of emotion itself, and, perhaps most importantly, have at least a rough form of consciousness.
How far are we from artificial general intelligence?
So, is a machine or algorithm like this ever possible, and if so, how far off is it?
In theory, everything we just talked about an AI doing is possible; it’s just not practical with the technology we have now. The processing power needed to essentially recreate the human brain is enormous, but quantum computing might be our gateway to successfully creating artificial general intelligence.
From a technology perspective, we’re pretty far off from being able to create AGI. However, given how fast technology advances, we may only be a few decades away. Some experts predict that the first rough artificial general intelligence will be created by around 2030, which is not too far off. However, they also expect that it won’t be until around 2060 that AGI is good enough to pass a “consciousness test.” In other words, we’re probably looking at roughly 40 years from now before we see an AI that could pass for a human.
Regardless of whether you think that’s a good idea, there’s probably no way of stopping the creation of an artificially generally intelligent being. Our world will be forever different once that happens. In reality, we can never go back once we cross that line, for better or for worse.
Source: https://interestingengineering.com/how-close-we-are-to-fully-self-sufficient-artificial-intelligence