To regulate entities and aid in better rulemaking, it is important to get the nomenclature right.
Ars Electronica — Flickr/CC BY-NC-ND 2.0
This article is part of the series — Colaba Edit.
In recent years, technology firms have made great strides in advancing artificial intelligence (AI). According to the Organisation for Economic Co-operation and Development, over 300 policy and strategic initiatives in 60 countries focus on the impacts of AI on society, business, governments, and the planet. To regulate these entities and aid in better rulemaking, it is important to get the nomenclature right.
Broadly, AI falls into two categories — artificial general intelligence (which seeks to emulate human intelligence) and artificial narrow intelligence (which is applied to domain-specific, defined tasks such as medical diagnosis or automobile navigation). Currently, most AI development is concentrated in the artificial narrow intelligence category. While many narrow applications may eventually converge to form artificial general intelligence, this is not yet technically feasible. Several prominent figures, such as Bill Gates, Elon Musk, and physicist Stephen Hawking, have expressed fear over the development of artificial general intelligence, which could potentially outthink humanity and pose an existential threat. Thus, AI development tends to focus on artificial narrow intelligence.