Securing AI system operations
Data protection remains vital to ensuring the safety of AI system operations. Integrity checking and secure boot/update, for example, are undoubtedly mission-critical. Furthermore, for certain AI applications (such as smart cars, healthcare, smart locks, and industrial IoT), a successful attack would not only compromise the system's data but could also endanger lives.
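As a minimal sketch of the integrity-checking step in a secure update, the device can recompute a digest over the received image and compare it, in constant time, against the expected value delivered over a trusted channel. (Real secure boot uses public-key signatures rather than a bare digest; the function and firmware bytes below are hypothetical.)

```python
import hashlib
import hmac

def verify_update_image(image: bytes, expected_digest: str) -> bool:
    """Recompute the SHA-256 digest of the update image and compare it
    against the vendor-published value using a constant-time comparison."""
    actual = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(actual, expected_digest)

# Hypothetical firmware image for a neural-network accelerator.
firmware = b"\x7fELF...accelerator-firmware..."
good_digest = hashlib.sha256(firmware).hexdigest()

assert verify_update_image(firmware, good_digest)            # untouched image passes
assert not verify_update_image(firmware + b"\x00", good_digest)  # any modification fails
```

A production secure-boot chain would verify a signature over this digest with a public key anchored in immutable hardware, so that an attacker cannot simply substitute both the image and its digest.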
Unlike a standard CPU, which is ready to use out of the box, a neural network processor for AI must first be trained to make correct inferences before it can be deployed. System designers must therefore include the training stage when planning for system security. This means that besides the hardware itself (including the neural network accelerator), other related attack surfaces need to be considered. These include:
- training data
- trained AI model
- input data
- inference results
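One simple way to track all four surfaces is to record a cryptographic digest of each artifact in an integrity manifest, so that any later modification can be detected by re-hashing and comparing. The sketch below is illustrative; the artifact names and placeholder bytes are assumptions, not a real pipeline.

```python
import hashlib

def build_integrity_manifest(artifacts: dict) -> dict:
    """Map each attack surface to the SHA-256 digest of its current
    contents; re-running this later reveals any tampering."""
    return {name: hashlib.sha256(blob).hexdigest()
            for name, blob in artifacts.items()}

# Placeholder byte blobs standing in for the real artifacts.
manifest = build_integrity_manifest({
    "training_data":     b"<serialized training set>",
    "trained_model":     b"<serialized model weights>",
    "input_data":        b"<camera frame or sensor reading>",
    "inference_results": b"<classification output>",
})

# Detect tampering on one surface by comparing fresh digests to the manifest.
fresh = hashlib.sha256(b"<serialized model weights>").hexdigest()
assert fresh == manifest["trained_model"]
```

In practice the manifest itself must be stored or signed so an attacker cannot rewrite it along with the artifacts; the keyed-integrity approach discussed below addresses that gap.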
In addition, unlike a standard computer system where the majority of attacks target data (at rest, in transit, and in use), the unprotected, trained model of an AI system is a very tempting prize to either steal or corrupt. Since the entire training process requires a non-trivial amount of time and effort for the collection of training data (and for the training itself), theft of or tampering with the trained model represents a significant loss of company resources and proprietary knowledge. Moreover, imagine a house that uses a smart-camera surveillance system: if a hacker manages to alter the AI model or the input image stream, intrusions and hazards will no longer be inferred and detected correctly.
Thus, properly securing the trained model and its operation depends on protecting the four attack surfaces listed above against theft and tampering. Careful implementation of security protocols using the cryptographic functions of privacy (anti-theft) and integrity (tamper detection) can mitigate such attacks.
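The integrity half of this can be sketched with a keyed MAC: the device authenticates the model with a key held in hardware, so an attacker who modifies the weights cannot forge a matching tag. This is a simplified illustration; the in-software key below stands in for a key fused into the AI SoC, and the privacy half (encrypting the model, e.g. with AES-GCM) is omitted for brevity.

```python
import hashlib
import hmac
import secrets

# Assumption: stands in for the secret key provisioned inside the AI SoC.
DEVICE_KEY = secrets.token_bytes(32)

def tag_model(model_blob: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the model; only a holder of the
    device key can produce a valid tag."""
    return hmac.new(DEVICE_KEY, model_blob, hashlib.sha256).digest()

def model_is_authentic(model_blob: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, model_blob, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

model = bytes(1024)          # placeholder for serialized model weights
tag = tag_model(model)

assert model_is_authentic(model, tag)          # genuine model accepted
tampered = b"\xff" + model[1:]                 # attacker flips one byte
assert not model_is_authentic(tampered, tag)   # tampering detected
```

Because the key never leaves the SoC, a stolen model file without its tag-generation key cannot be silently modified and re-deployed on the device.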
Given the importance of security for AI systems, it comes as no surprise that today's designers must make security a primary consideration from the beginning of the design cycle. And as AI systems continually evolve, their security processes also need to improve in step to constantly protect the keys to the kingdom: the secret key stored in the AI SoC itself.