In the 18th century, Hungarian inventor Wolfgang von Kempelen created a never-before-seen chess-playing machine. The automaton, called the Mechanical Turk, could play a game of chess against a human opponent, and play it well: it even defeated Napoleon Bonaparte in 1809, during a campaign in Vienna.
It was eventually revealed that von Kempelen's invention was an elaborate hoax. The machine secretly concealed a human chess master who directed every move. The Mechanical Turk was destroyed in the mid-19th century; but more than a century and a half later, the story provides a telling metaphor for artificial intelligence.
A common narrative that surrounds AI is that the technology has agency. We hear that AI can solve climate change, build smart cities and find new drugs; we hear far less often that it is, in fact, a human programmer using an AI system to achieve those feats. Just as the human chess master hid behind von Kempelen's ingenious mechanism, so do engineers, programmers, and software developers disappear behind the algorithm.
SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)
The relevance of the Hungarian machine is such that Amazon borrowed the name for one of its units – one far less well known than Prime or Fresh. Amazon Mechanical Turk is a division of the company that crowdsources the tedious job of labeling the huge datasets that feed AI systems to millions of remote "Turkers".
“I’m amazed that every four months or so, I catch a tweet from someone who realizes what Amazon’s Mechanical Turk is,” Daniel Leufer, Mozilla fellow and technologist, tells ZDNet. “I find it fascinating that Amazon calls a platform designed to mask the human agency behind AI Mechanical Turk. We’re not even hiding what we’re trying to do here.”
Leufer has just put the final touches to a new project to debunk common AI myths, which he has been working on since he received his Mozilla fellowship – an award designed for web activists and technology policy experts. One of the most pervasive of those myths is that AI systems can act of their own accord, without supervision from humans.
It certainly doesn’t help that artificial intelligence is often associated with humanoid robots, suggesting that the technology can match human brains. An AI system deployed, say, to automate insurance claims is very unlikely to come in the form of a human-looking robot, and yet that is how the technology is often portrayed, regardless of its application.
Leufer calls those “inappropriate robots”: machines shown carrying out human tasks that would never be necessary for an automaton. Among the most common offenders are robots typing on keyboards, wearing headphones, or using laptops.
The powers we ascribe to AI as a result even have legal ramifications: there is an ongoing debate about whether an AI system should own intellectual property (a proposal rejected by the European Patent Office and the UK Intellectual Property Office), or whether automatons should be granted citizenship. In 2017, for instance, Shibuya Mirai became the first chatbot to be granted residency in Tokyo by the Japanese government.
The current representation of AI feeds into the perception that the technology comes in one form, and one form only: a super-powerful system capable of general intelligence – that is, of performing intelligently across a range of complex tasks, and eventually completing anything that a human can do.
Although achieving such a sophisticated form of artificial intelligence is not a prospect envisaged by many scientists, it seems to be the narrative that dominates even the highest levels of geopolitics. “There is an entire narrative around the race for AI supremacy going on between the US, China and Europe,” says Leufer. “That just doesn’t make sense.”
“If you believe we’re headed towards an end-point, where a super-intelligence will grant you technological supremacy, then maybe it makes sense, but that’s not the case. This is not a zero-sum game,” he continues.
In reality, AI as we know it is still narrow. It can only solve single, well-defined tasks, and the step up to general intelligence remains far off. But even if the anticipation of super-intelligence is currently unfounded, the consequences of misrepresenting the technology are very real.
Leufer takes the example of facial recognition, which he believes should be banned across the EU. The response he got from regulators, he argues, shows a lack of understanding of the technology.
“The idea is that this is a part of AI, and AI is inevitable, so we’ll have to adopt it eventually and we better develop it ourselves so it is imbued with European values,” says Leufer. “But AI is not just one technology. There are many ways you can use it.”
SEE: CIO Jury: 58% of tech leaders say robotics will play a significant role in their industry within the next two years
Becoming a leader in industrial robotics doesn’t have to go hand-in-hand with developing facial recognition, just because both tap AI-enabled capabilities. It might be less exciting than the prospect of a super-intelligence, but AI is not one huge technology waiting to be cracked. In other words, artificial intelligence is not all or nothing.
And so, as countries around the world race to develop all potential AI applications, regulation is crucial to make sure that the development of what Leufer calls “creepy stuff” is limited.
He is currently working with German NGO AlgorithmWatch to push for the creation of public registers of AI systems, in which public authorities and governments would have to provide basic information about the ways they are using the technology, together with risk assessments – and even a way for citizens to contest a given application.
“At the moment we’re working in the dark, we don’t know what’s being used,” says Leufer. Super-intelligent humanoid robots might still be a long way off, but narrow AI isn’t short of issues that need fixing right now.