{"id":979,"date":"2020-09-14T16:21:20","date_gmt":"2020-09-14T16:21:20","guid":{"rendered":"https:\/\/blog.pufsecurity.com\/?p=979"},"modified":"2022-04-06T05:17:06","modified_gmt":"2022-04-06T05:17:06","slug":"pufseries-5-puf-based-root-of-trust-pufrt-for-high-security-ai-application","status":"publish","type":"dlp_document","link":"https:\/\/www.pufsecurity.com\/zh-hans\/document\/pufseries-5-puf-based-root-of-trust-pufrt-for-high-security-ai-application\/","title":{"rendered":"PUF Series 5: PUF based Root of Trust PUFrt for High-Security AI Application"},"content":{"rendered":"\n

<\/p>\n\n\n\n

Artificial intelligence will play a pivotal role in the future of information security. By combining big data, deep learning, and machine learning, AI gives machines a kind of life: they can imitate human learning, replicate work behaviors, and open up new ways of operating businesses. However, AI assets are highly valuable, which makes them a target for hackers. Once a hacker discerns how an AI model is trained and operated, the model can be easily manipulated. For instance, hackers can corrupt the data in the training model, causing major disruption on both the supply and demand sides of the entire AI system. This article therefore discusses how to strengthen the security of AI systems, from the structure of the AI hardware device to the security requirements and solutions. To do this, we will use PUFsecurity\u2019s hardware root of trust module, PUFrt, as an example to help readers understand how combining an AI application architecture with a physical unclonable function (PUF) can benefit hardware security technology.<\/p>\n\n\n\n

Introducing the AI Hardware Device Architecture and Manufacturing Process <\/strong><\/p>\n\n\n\n

The main structure of an AI application device can be roughly divided into three sections: the AI application algorithm model and parameters (soft know-how), the storage unit (storage), and the AI computing unit (AI accelerator). The storage unit usually uses flash memory to store the AI algorithm models and parameters, while the AI computing unit (AI chip) performs the operations of the AI algorithm model. From product design through manufacturing to market deployment, the main process includes:<\/p>\n\n\n\n

  1. Preparing the AI model and parameters<\/li>
  2. Encrypting the AI model and parameters and storing them in the storage unit<\/li>
  3. Writing the key and trust certificate used for encryption onto the AI chip; these serve as the key and authentication information required for decryption when the program starts.<\/li>
  4. Once the AI application starts, the encrypted algorithm model and parameters stored in the flash memory are loaded onto the AI chip and decrypted with the pre-implanted key and authentication information. The AI chip then executes the decrypted algorithm model and parameters to start the AI application function.<\/li><\/ol>\n\n\n\n
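The steps above can be sketched in code. This is a minimal illustration only: the function names are hypothetical, and the SHA-256 counter-mode keystream is a stdlib stand-in for the hardware AES engine a real AI chip would use, with the key held inside the root of trust rather than in software.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- a stand-in for a hardware AES engine.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_model(key: bytes, model: bytes) -> bytes:
    # Step 2: encrypt the AI model/parameters before writing them to flash.
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(model, keystream(key, nonce, len(model))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # authentication info
    return nonce + ct + tag

def load_and_decrypt(key: bytes, blob: bytes) -> bytes:
    # Step 4: at boot, authenticate the blob loaded from flash, then decrypt it.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("model image tampered or wrong key")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

chip_key = os.urandom(32)  # Step 3: the key provisioned on the AI chip
flash_blob = encrypt_model(chip_key, b"model weights")
assert load_and_decrypt(chip_key, flash_blob) == b"model weights"
```

Note that the blob is authenticated before it is decrypted; any bit flipped in flash causes the boot-time check to fail rather than silently loading a corrupted model.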

    <\/p>\n\n\n\n

    \"\"
    Figure 1: AI device architecture and manufacturing process<\/figcaption><\/figure><\/div>\n\n\n\n

    <\/p>\n\n\n\n

    The security requirements of AI applications<\/strong><\/p>\n\n\n\n

    There are many aspects to the security requirements of AI applications. The first is to protect the important assets in the AI design, such as big data and algorithms. The second is to protect AI machines from attacks by malicious third parties that secretly disrupt the learning mechanism or behavior of a machine while it is performing deep learning or executing tasks. The third is to protect the sensitive information involved in AI technology, such as personal medical records, communications, and other consumer information.<\/p>\n\n\n\n

    When discussing possible security gaps in AI applications, it is important to consider them from the perspective of the system architecture and to examine the issues related to the integrity of the AI chip hardware. If the chip (AI accelerator) that performs the AI algorithm calculations suffers any integrity issue, such as loading tampered firmware, unauthorized functions or malicious attack commands may be planted on the AI chip. When this happens, operation is disrupted: the originally designed function can be hijacked and controlled by the attacker, and the original mechanism can no longer run as intended, leading to a whole slew of security problems.<\/p>\n\n\n\n
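The firmware-integrity concern above is typically addressed by refusing to run an image whose signature does not verify. The sketch below uses a symmetric HMAC for brevity; the names and the `ROT_KEY` value are assumptions, and production chips normally verify an asymmetric signature (e.g. ECDSA) against a public key anchored in the hardware root of trust.

```python
import hashlib
import hmac

# Hypothetical device-unique secret held inside the root of trust.
ROT_KEY = b"device-unique key held in the root of trust"

def sign_firmware(image: bytes) -> bytes:
    # Done once by the vendor when the firmware image is released.
    return hmac.new(ROT_KEY, image, hashlib.sha256).digest()

def secure_boot(image: bytes, signature: bytes) -> bool:
    # Refuse to hand control to firmware whose digest does not match the signature.
    expected = hmac.new(ROT_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

fw = b"ai accelerator firmware v1.2"
sig = sign_firmware(fw)
assert secure_boot(fw, sig)                        # genuine image boots
assert not secure_boot(fw + b"patched", sig)      # tampered image is rejected
```

The key point is that the verification key lives in hardware: an attacker who can rewrite flash still cannot forge a signature the chip will accept.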

    Why do people suggest building a hardware root of trust into the design of AI chips to mitigate these concerns about hardware integrity?<\/p>\n\n\n\n

    In the absence of a hardware root of trust, the chip\u2019s protection mechanism is relatively easy to bypass, so it cannot effectively protect the keys and trust certificates required for encryption and decryption. The result is key leakage, or an inability to resist disassembly and reverse engineering, which gives malicious third parties a chance to obtain the AI inference model. Consequently, the company\u2019s intellectual property (AI know-how) is exposed and risks being illegally copied, eventually leading to huge commercial losses. Furthermore, an attacker can also tamper with the inference model, causing errors in AI operation and losses for users. For example, if the active safety function of an ADAS (advanced driver assistance system) suddenly fails while driving, the driver can easily misjudge the situation and end up in a car accident.<\/p>\n\n\n\n

    <\/p>\n\n\n\n

    \"\"
    Figure 2: The security risks of AI applications<\/figcaption><\/figure><\/div>\n\n\n\n

    <\/p>\n\n\n\n

    How to effectively improve AI application security and asset protection<\/strong><\/p>\n\n\n\n

    It is important to consider the needs of AI application security and the protection of company assets when developing AI products. In addition to the performance of the AI algorithm model, the AI chip itself must have a built-in hardware root of trust that works together with other security measures to protect the key, strengthen the storage of the algorithm, and ensure the originality of the chip design. In this way, it becomes easier to prevent the falsification and theft of the AI algorithm model and to avoid the loss of business value and intellectual property.<\/p>\n\n\n\n

    The executable security deployments are as follows:<\/p>\n\n\n\n