{"id":3224,"date":"2022-06-01T01:57:58","date_gmt":"2022-06-01T01:57:58","guid":{"rendered":"https:\/\/www.pufsecurity.com\/?post_type=technology&p=3224"},"modified":"2022-07-05T18:04:09","modified_gmt":"2022-07-05T18:04:09","slug":"artifical-intellagance","status":"publish","type":"technology","link":"https:\/\/www.pufsecurity.com\/zh-hans\/technology\/artifical-intellagance\/","title":{"rendered":"Artificial Intelligence"},"content":{"rendered":"\n

Securing AI system operations <\/h2>\n\n\n\n

Data protection remains vital to the safe operation of AI systems. Integrity checking and secure boot\/update, for example, are mission-critical. Furthermore, in certain AI applications (such as smart cars, healthcare, smart locks, and industrial IoT), a successful attack would not only compromise the integrity of the dataset but could also endanger lives.<\/p>\n\n\n\n
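As a minimal illustration of the integrity checking mentioned above, the sketch below compares a firmware image's SHA-256 digest against a trusted "golden" digest before accepting a boot or update. All names here are hypothetical; in a real secure-boot flow the golden digest (or a signature over it) would live in tamper-resistant storage such as OTP or PUF-protected memory, not in ordinary code:

```python
import hashlib
import hmac

# Hypothetical sketch: integrity check for a firmware image before boot/update.
# In practice the golden digest is provisioned into secure storage at
# manufacturing time; here it is computed inline purely for illustration.

def sha256_digest(data: bytes) -> bytes:
    """Compute the SHA-256 digest of the given data."""
    return hashlib.sha256(data).digest()

def verify_firmware(image: bytes, golden_digest: bytes) -> bool:
    """Accept the image only if its digest matches the trusted golden digest.

    hmac.compare_digest is used for a constant-time comparison, which avoids
    leaking digest bytes through timing side channels.
    """
    return hmac.compare_digest(sha256_digest(image), golden_digest)

firmware = b"example firmware image v1.0"
golden = sha256_digest(firmware)  # stand-in for a provisioned golden digest

print(verify_firmware(firmware, golden))            # intact image -> True
print(verify_firmware(firmware + b"\x00", golden))  # tampered image -> False
```

A production flow would typically verify a digital signature over the digest rather than a bare hash, so that updates signed by the vendor can be authenticated; the hash comparison above shows only the integrity-check step.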

Unlike a standard CPU, which is ready to use out of the box, a neural network processor for AI must first be trained to make correct inferences before it can be deployed. System designers must therefore include the training stage when planning for system security. This means that beyond the hardware itself (including the neural network accelerator), several related attack surfaces need to be considered. These include:<\/p>\n\n\n\n