Do You Trust Your Artificial Intelligence?


Artificial intelligence (AI) is one of the most transformational technologies of our age. Its security risks and the questions they raise should be anticipated and addressed before the technology becomes ubiquitous, which is why AI security deserves serious consideration today.

When a new technology emerges, people tend to worry about its security only later, sometimes too late. Consider how different technologies have advanced: networks began to develop in the 1980s, and network security solutions followed in the 1990s. The 2000s brought the era of personal computers and antivirus software, and the 2010s brought applications and the application security boom. Since the second half of the 2010s was the era of smart technology, the 2020s will likely bring demand for AI security.

The thing is, we should care about this possible future today. Numerous critical incidents have already taken place, such as the Tesla car incident, and thousands of research papers have been released. I recently published several technical articles describing why AI security matters now, but here let’s look at the steps that have been taken in the area of initiatives and regulations.

International AI regulations

There is a growing international effort to develop regulations and ethical guidelines for AI.

On June 29, 2019, G20 leaders signed a statement endorsing a few basic ethical principles for AI. The national strategies described below reflect individual countries’ approaches to AI and ethics, but they also address security as part of these initiatives.

The U.S. was one of the first countries to discuss potential AI security problems, such as adversarial attacks, in its AI strategy, “The National Artificial Intelligence Research and Development Strategic Plan” (2016). One of the strategies in the document was to ensure the safety and security of AI systems at every stage of the AI life cycle. Many leading countries later published similar documents in which security was among the issues considered.
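
To make the term concrete, an adversarial attack typically adds a small, carefully chosen perturbation to an input so that a model misclassifies it while a human notices nothing. Below is a minimal, illustrative sketch of the fast gradient sign method in PyTorch; it is not taken from the strategic plan, and the model, data and epsilon value are placeholders.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        # Compute the loss gradient with respect to the input batch.
        x_adv = x.clone().detach().requires_grad_(True)
        nn.functional.cross_entropy(model(x_adv), y).backward()
        # Step in the direction that increases the loss, then clamp to a valid pixel range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

A perturbation this simple can flip a classifier’s prediction, which is one reason the plan calls for security at every stage of the AI life cycle.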

In 2017, the U.K. government’s interim strategy identified AI as a key technology trend and a necessary tool for identifying and responding to security threats. For example, the government published a set of high-level security principles for connected and automated vehicles (CAV), intelligent transport systems (ITS) and smart cities, describing what good cybersecurity looks like. It also developed an automotive-specific framework for security assessment to help the industry benchmark its products during design and development, and it produced a guide on managing risks in the supply chain.

Montreal and Toronto have become hot clusters of AI research, and Canada’s focus on ethics quickly led to some of the earliest international AI ethical principles. The “Pan-Canadian Artificial Intelligence Strategy” was published on March 22, 2017, under the leadership of the Canadian Institute for Advanced Research (CIFAR).

Next was Japan. On March 31, 2017, in its “Artificial Intelligence Technology Strategy,” Japan focused mainly on the cultural and social aspects of artificial intelligence development, and security was mentioned as one of the four priority areas alongside productivity, health and mobility.

China has generated the most state-supported AI governance and ethics initiatives. Its April 2017 paper, “Artificial Intelligence: Implications for China,” raised complex ethical, legal and security questions touching on issues such as privacy, discrimination and liability.

Since then, more than 15 countries, including Singapore, South Korea, UAE and others, have published various documents mentioning AI security, privacy, safety and trustworthiness.

AI security initiatives

Many AI security initiatives were launched in the first half of 2019 in the U.S. and the European Union:

  • February 2, 2019: “Secure, Assured, Intelligent Learning Systems (SAILS) and Trojans in Artificial Intelligence (TrojAI)” (U.S.)
  • February 6, 2019: “Guaranteeing AI Robustness against Deception (GARD)” (U.S.)
  • February 2019: “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (U.S.)
  • April 4, 2019: “The Ethics Guidelines for Trustworthy Artificial Intelligence” (European Union)
  • June 26, 2019: “EU guidelines on ethics in artificial intelligence: Context and implementation” (European Union)

These documents consider approaches to building robust machine learning models that can resist attacks, and they examine the landscape of possible AI threats along with ways to prevent or mitigate these risks. They also apply different areas of policy and governance, including technology and data protection, to AI and its implementations.
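
As a rough illustration of what “robust” can mean in practice, the sketch below shows one common defensive idea, adversarial training, in PyTorch: the model is trained on perturbed copies of its inputs so that small attacks stop changing its predictions. The model, optimizer, data and epsilon value are placeholders, not a prescription from any of these documents.

    import torch
    import torch.nn as nn

    def adversarial_training_step(model: nn.Module,
                                  optimizer: torch.optim.Optimizer,
                                  x: torch.Tensor, y: torch.Tensor,
                                  epsilon: float = 0.03) -> float:
        # Craft a perturbed copy of the batch that increases the current loss.
        x_adv = x.clone().detach().requires_grad_(True)
        nn.functional.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
        # Train on the perturbed inputs so the model learns to resist such attacks.
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

In practice, robustness work combines techniques like this with the threat modeling and mitigation guidance that these initiatives map out.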

AI security documents are expected to spread globally in 2020 and 2021, and further research can analyze the emerging AI security solutions and initiatives. Beyond simply being aware of these initiatives, we need to follow their implementation recommendations, train AI developers, cybersecurity specialists and IT teams on how to operationalize AI security in their organizations, and take more practical steps such as AI security assessments.

As seen above, the number of initiatives is growing and is expected to keep increasing. Given that early adopters released their strategies in 2017 and produced detailed documents on the trustworthiness and security of AI only two years later, in 2019, we can expect many of the countries that joined later with their own AI initiatives to follow the same pattern.

We can expect countries that published AI strategies in 2018 to release their own documents on AI security, safety and trust in 2020 and 2021. Some of these concepts may also form the basis of real legal requirements in the near future, rather than remaining vague thoughts on the topic.