
Implementing Strong Security For AI/ML Accelerators

A number of critical security vulnerabilities affecting high-performance CPUs have been identified in recent years, rocking the semiconductor industry. These high-profile vulnerabilities allowed malicious programs to access sensitive data such as passwords, secret keys and other secure assets.
The real-world risks of silicon complexity
The above-mentioned vulnerabilities are primarily the result of increased silicon complexity. This is because security flaws frequently occur when multiple components interact in unexpected ways. As silicon complexity increases, the number of possible interactions increases exponentially, along with the number of potential security vulnerabilities. Like high-performance general-purpose CPUs, artificial intelligence (AI) and machine learning (ML) accelerators are inherently complex and require specific security features to protect valuable training sets and data from attackers.
AI/ML threat vectors: From the edge to the data center
At the edge, an attacker could physically disassemble devices with AI/ML inference accelerators deployed in the field. If these devices are unprotected, an attacker could run malicious firmware, access and alter data, intercept network traffic and employ various side-channel techniques to extract secret keys and other sensitive information. In the data center, a remote attacker could target servers with AI/ML accelerators running training and inference applications. For example, a remote attacker could subvert the host CPU hypervisor and access any process or memory region. Additional server attack vectors include reading the flash memory of both the host and the accelerator, as well as the contents of the SSD. Further, an attacker could run malicious software on the host and accelerator CPUs and read the contents of their SRAM and DRAM. Lastly, even network and bus traffic could be monitored and altered via unprotected AI/ML accelerators.
It is also important to note that AI/ML inference models, as well as input data and results, are increasingly valuable and must be protected from criminal elements intent on financial gain. Indeed, this data can be stolen to design cloned or competitive devices. In addition, the integrity of AI/ML systems must be protected from tampering to prevent malicious attackers from altering training models, input data and results. Tampering could be catastrophic for certain applications such as autonomous vehicles on a highway, where manipulated or false data could cause accidents, injury and potentially loss of life. In another example, altered facial recognition data could enable an attacker to physically breach sophisticated security systems protecting sensitive facilities. Spoofed data could also help an attacker trick an airport baggage scanner into ignoring specific contraband material.
Protecting AI/ML systems with a programmable security co-processor
To protect both AI/ML silicon and data, accelerators should be built on a secure, tamper-proof foundation that ensures confidentiality, integrity, authentication and availability (up-time). This can be achieved with a programmable secure co-processor that is purpose-built to provide a wide range of comprehensive security functions. These include hardware-based encryption, hashing and signing, key management, provisioning and authentication, as well as proactive monitoring to detect anomalous activity.
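To make these functions concrete, the snippet below is a minimal software sketch of the hashing-and-signing flow such a co-processor performs in hardware: an inference model is hashed with SHA-256, the digest is signed with an Ed25519 key, and the signature is verified before the model is used. It relies on Python's cryptography package purely for illustration; in a real system the keys would be generated and held inside the secure co-processor, and the model bytes shown here are a stand-in.

```python
# Illustrative only: in a real accelerator these operations run inside the
# secure co-processor and the private key never leaves protected hardware.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

model_bytes = b"...serialized inference model..."   # stand-in for the real artifact

# Provisioning (done once): generate a signing key pair.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# Vendor side: hash the model and sign the digest.
digest = hashlib.sha256(model_bytes).digest()
signature = signing_key.sign(digest)

# Device side: recompute the digest and verify the signature before loading.
try:
    verify_key.verify(signature, hashlib.sha256(model_bytes).digest())
    print("model integrity verified - safe to load")
except InvalidSignature:
    print("model tampered with or corrupted - refuse to load")
```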
A secure co-processor can protect AI/ML silicon and applications from malicious attacks in a number of ways:

First: a security co-processor effectively protects edge devices with AI/ML accelerators deployed in the field from physical side-channel attacks such as Simple Power Analysis (SPA), Differential Power Analysis (DPA) and fault injection with a range of sophisticated countermeasures (the constant-time sketch after this list illustrates the underlying principle).
Second: it protects AI/ML firmware (both in edge devices and server AI/ML accelerators in the data center) from tampering with secure boot functionality using hashing and signing. This capability is particularly critical, as an attacker could introduce malicious firmware by disrupting the boot flow or hijacking the firmware update process.
Third: it prevents data theft and ensures the integrity of training data and inference models by signing and verifying them before use (hashing data), while encrypting training data when not in use (managing keys); see the at-rest encryption sketch after this list.
Fourth: a security co-processor prevents data theft and ensures the integrity of input data by authenticating and securing communication with the source (using provisioned keys and IDs), while encrypting user data in transit to the accelerator (managing keys); see the challenge-response sketch after this list.
Fifth: it ensures the integrity of inference results by authenticating and securing communication between the accelerator(s) and additional system components using provisioned keys and IDs.
Lastly, a security co-processor protects AI/ML accelerators against a dynamic threat landscape by providing an enclaved and shielded location for security applications to proactively monitor system operation for anomalous activity.
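The hardware countermeasures mentioned in the first point (such as masking, power balancing and fault detection) cannot be reproduced in software, but the principle behind many of them can be illustrated: never let execution time or control flow depend on secret data. The sketch below, using only Python's standard library, contrasts a naive MAC comparison with a constant-time one; the key and message values are illustrative.

```python
import hashlib
import hmac

SECRET_KEY = b"provisioned-device-key"   # illustrative; real keys stay in hardware

def tag_for(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def naive_check(message: bytes, tag: bytes) -> bool:
    # Leaky: '==' can stop at the first mismatching byte, so response time
    # reveals how many leading bytes of a forged tag are correct.
    return tag_for(message) == tag

def constant_time_check(message: bytes, tag: bytes) -> bool:
    # compare_digest examines every byte regardless of where a mismatch
    # occurs, removing the timing side channel.
    return hmac.compare_digest(tag_for(message), tag)

msg = b"inference request"
print(constant_time_check(msg, tag_for(msg)))   # True
print(constant_time_check(msg, b"\x00" * 32))   # False, in constant time
```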
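The at-rest encryption in the third point can likewise be sketched in a few lines. The example below uses AES-256-GCM from the cryptography package, which also detects tampering on decryption. The key handling is purely illustrative: on a real accelerator the key would be generated, stored and unwrapped by the security co-processor rather than exposed to host software.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_at_rest(plaintext: bytes, key: bytes) -> bytes:
    nonce = os.urandom(12)                        # unique nonce per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                     # GCM appends its integrity tag

def decrypt_for_use(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    # Raises cryptography.exceptions.InvalidTag if the stored data was altered.
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)         # stand-in for a hardware-managed key
training_batch = b"sensitive training samples"    # stand-in for a real dataset shard

stored = encrypt_at_rest(training_batch, key)
assert decrypt_for_use(stored, key) == training_batch
```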
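Finally, the provisioned keys and IDs in the fourth and fifth points typically support some form of mutual authentication. The challenge-response sketch below captures the basic idea with a symmetric key and a fresh nonce; real systems would use an established protocol (such as TLS or SPDM) with keys held in the co-processor, and the device ID, key and message layout here are assumptions for illustration.

```python
import hashlib
import hmac
import os

# Keys and IDs established at provisioning time (illustrative values).
PROVISIONED_KEYS = {"accel-0001": b"per-device-secret-from-provisioning"}

def accelerator_respond(device_id: str, challenge: bytes, device_key: bytes) -> bytes:
    # The accelerator proves knowledge of its provisioned key by MACing the
    # host's fresh challenge together with its own ID.
    return hmac.new(device_key, device_id.encode() + challenge, hashlib.sha256).digest()

def host_authenticates(device_id: str) -> bool:
    challenge = os.urandom(32)                    # fresh nonce prevents replay
    response = accelerator_respond(               # in practice, sent over the bus
        device_id, challenge, PROVISIONED_KEYS[device_id])
    expected = hmac.new(PROVISIONED_KEYS[device_id],
                        device_id.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(host_authenticates("accel-0001"))           # True when the device holds the key
```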

Conclusion
AI/ML systems will have growing importance across every industry. The inherent complexity of AI/ML accelerators requires specific security features to protect valuable training sets and data from attackers. This is true both for edge devices with embedded AI/ML inference accelerators and for servers with AI/ML accelerator cards used for training and inference. As such, AI/ML accelerators should be built on a secure, tamper-proof hardware foundation that ensures confidentiality, integrity, authentication and availability.

Paul Karazuba
Paul Karazuba is director of corporate product marketing at Rambus.

Source: https://semiengineering.com/implementing-strong-security-for-ai-ml-accelerators/