The Promise of Formal Methods for PUF Security Assurance

Abstract

Because PUFs are an emerging class of security primitives, and because we still lack a comprehensive understanding of the specific vulnerabilities they might introduce and of the attacker models germane to them, we currently face a situation in which the methodologies and criteria for testing and assessing the security of PUFs are still evolving, while effective mitigations and secure design rules remain under intensive research.

In this column, we will first discuss the importance of providing high security assurance in PUFs. Next, we will address the potential vulnerabilities of PUFs and the limitations of existing PUF security testing/verification approaches. Finally, we will explain why and how formal methods can be used to significantly improve the security verification of PUFs and raise the confidence level in PUF security assurance.

Introduction

PUFs began to gain attention in the smartcard market between 2010 and 2013 as a promising way to provide “silicon fingerprints” and to create cryptographic keys unique to each individual smartcard. Nowadays, PUFs have even established themselves as a secure alternative to battery-backed storage of secret keys (e.g., in HSMs and TPMs) in commercial FPGAs and SoCs, such as the Xilinx Zynq UltraScale+ and the Altera Stratix 10. Because PUFs are an emerging class of security primitives, and because we still lack a comprehensive understanding of the specific vulnerabilities they might introduce and of the attacker models germane to them, we face a situation in which the methodologies and criteria for testing and assessing the security of PUFs are still evolving, while effective mitigations and secure design rules remain under intensive research.

Formal methods have been defined as a “particular kind of mathematically based techniques for the specification, development and verification of software and hardware systems [1]. The use of formal methods for software and hardware design offers the potential of enhancing the reliability and robustness of a design by performing appropriate mathematical analysis.” Although formal methods offer the possibility of ruling out every security vulnerability in a design, “there is a lack of familiarity with verification tools among researchers and academics working on hardware/software security architectures. This results in designs that are only tested with certain security property test cases, which may not be exhaustive, and architectures whose security properties lack formal specifications.” An exhaustive verification methodology should therefore be a priority for researchers in this area. “Recently, the use of formal methods for security verification of both the hardware and the software of a system has emerged as an important research topic” [2].

The focus of hardware security research has recently shifted away from detecting malicious IC logic (e.g., hardware Trojans) and towards developing trustworthy hardware for the construction of a hardware root of trust (HRoT) [3]. A hardware RoT is defined as a minimum set of hardware and software that is trusted to provide security functions such as key management, device identity authentication and run-time monitoring of the system. The leading candidate for the HRoT is the PUF, which can generate a unique, highly secure ID or secret key inexpensively, reliably and resiliently. The PUF is thus widely regarded as a holy-grail RoT solution for hardware security.
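To make the key-management role of a PUF-based HRoT concrete, the following is a minimal Python sketch of how a noisy PUF response could be turned into a stable secret key using a code-offset (fuzzy-extractor-style) construction with a simple repetition code. All names, parameters and noise figures here are illustrative assumptions, not a production design.

```python
# Toy sketch of PUF-based key generation with a code-offset fuzzy extractor.
# All parameters and helper names are illustrative, not a production design.
import hashlib
import secrets
import random

N_BLOCKS = 32      # number of secret bits derived from the PUF
REP = 7            # repetition-code length per bit (tolerates 3 flips/block)
BIT_ERROR = 0.02   # assumed per-bit noise of the raw PUF response

reference = [secrets.randbits(1) for _ in range(N_BLOCKS * REP)]  # "silicon fingerprint"

def measure_puf():
    """Re-evaluate the PUF: the reference response plus random bit flips."""
    return [b ^ (random.random() < BIT_ERROR) for b in reference]

def enroll():
    """Enrollment: pick a random secret, encode it, and publish helper data."""
    secret = [secrets.randbits(1) for _ in range(N_BLOCKS)]
    codeword = [bit for bit in secret for _ in range(REP)]       # repetition encode
    helper = [c ^ r for c, r in zip(codeword, measure_puf())]    # code-offset helper data
    key = hashlib.sha256(bytes(secret)).hexdigest()
    return helper, key

def reconstruct(helper):
    """Reconstruction: a fresh noisy response plus helper data recovers the key."""
    noisy_codeword = [h ^ r for h, r in zip(helper, measure_puf())]
    secret = [int(sum(noisy_codeword[i * REP:(i + 1) * REP]) > REP // 2)  # majority decode
              for i in range(N_BLOCKS)]
    return hashlib.sha256(bytes(secret)).hexdigest()

helper, enrolled_key = enroll()
print("key recovered correctly:", reconstruct(helper) == enrolled_key)
```

In this construction the helper data can be stored publicly: on its own it only reveals the codeword masked by the PUF response, and only a device able to re-measure a sufficiently close response can recover the key.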

“Since hardware is almost impossible to patch once it is fabricated, there is a strong need to verify the security of the hardware components at design time. The lack of formal verification could leave some vulnerabilities in the designed system. For example, Intel’s Core 2 Duo processor family is known to have 129 bugs [2, 4]. The number of known design bugs evidently illustrates the limitations of the existing verification approaches.” Note that this count does not even include security-specific flaws. “This fact has motivated researchers to look into the verification of the security properties of these hardware systems at design time, and into developing new methods for the security verification of hardware systems.”

Furthermore, a number of designs from processor vendors also provide hardware features for security, e.g., ARM TrustZone [5], Intel SGX [6] and, most recently, AMD Memory Encryption [7]. “These designs all rely on the implicit assumption that the hardware is correct.” However, “these industry designs do not have any publicly available formal security guarantees nor specifications.”

In this column, we will begin by discussing the importance of providing a high level of security assurance in PUFs. Next, we will address the potential vulnerabilities of PUFs and the limitations of existing PUF security testing/verification approaches. Finally, we will conclude with why and how formal methods can be used to significantly improve the security verification of PUFs and raise the confidence level in PUF security assurance.

The importance of providing high security assurance in PUF

Because the PUF is a holy-grail HRoT solution, trusted to provide security functions such as key management, device identity authentication and run-time monitoring, it is becoming the security foundation of many systems. An exploitable vulnerability in the PUF itself, or in its configuration within a larger integrated system (e.g., an SoC), could have disastrous consequences for the overall system and affect every party across the IC production supply chain. Consequently, it is necessary to ensure that: 1) the PUF itself is built securely, and 2) the PUF is securely integrated into the larger system, which is fundamental to ensuring system security. PUFs are gaining wide adoption for securing the IoT (from edge to datacenter) and for datacenter secure computing and storage applications. Every secure system moving forward is likely to have a PUF as its foundation.

There are many efforts to develop methodologies, testing and evaluation criteria, frameworks and guidelines for assessing the security of PUFs, such as FIPS 140-2, the Common Criteria (CC), the NASA Framework for Assessing Security and Trust in Micro Electronics (FASTIME), the NIST SP 800-193 Platform Firmware Resiliency Guidelines (PFR), and the EU UNIQUE Project. It is therefore very important to build a secure, trusted and resilient PUF platform that is compliant with, and/or certifiable under, the prominent PUF security assessment schemes, in order to provide effective, efficient and resilient security solutions for protecting an enterprise's highly sensitive information against cyberattacks in the IoT/AIoT/5G era.

A guide to “UNIQUE Project”

Although PUFs, as a promising new direction in hardware security, have many advantages in generating unique, unpredictable and highly secure secrets/keys for each IC inexpensively, they still have potential vulnerabilities, and the existing PUF security testing/verification approaches have limitations. Before introducing these vulnerabilities and limitations, we can briefly classify the attacks specific to PUFs which have the potential to materially affect the security properties of any system built on the PUF primitive. The PUF-specific attacks include:

  1.  Fault Attacks on PUFs and Fuzzy Extractors
  2.  Side Channel Attacks on PUFs and Fuzzy Extractors
  3.  Modelling Attacks on Delay-Based PUFs, etc. (a minimal modelling-attack sketch follows this list)
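As a concrete illustration of the third attack class, below is a minimal, self-contained Python/NumPy sketch of a machine-learning modelling attack on a simulated arbiter-style delay PUF, using the standard additive-delay (linear-threshold) model and a plain logistic-regression-style predictor trained on observed challenge-response pairs. The simulation, feature mapping and training parameters are illustrative assumptions; real attacks use more refined models and tooling.

```python
# Minimal sketch of a modelling attack on a simulated arbiter-style delay PUF.
# The PUF is modelled as a linear threshold function of a challenge-derived
# parity feature vector (the standard additive-delay model); the attacker fits
# a logistic-regression-style predictor to observed challenge-response pairs.
import numpy as np

rng = np.random.default_rng(0)
N_STAGES, N_TRAIN, N_TEST = 64, 5000, 1000

def features(challenges):
    """Map 0/1 challenges to the +/-1 parity feature vector of the delay model."""
    phi = np.cumprod(1 - 2 * challenges[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

true_delays = rng.normal(size=N_STAGES + 1)        # the device's secret delay parameters

def puf_response(challenges):
    return (features(challenges) @ true_delays > 0).astype(int)

# Attacker: query the PUF for training CRPs, then fit a linear model.
train_c = rng.integers(0, 2, size=(N_TRAIN, N_STAGES))
test_c = rng.integers(0, 2, size=(N_TEST, N_STAGES))
X, y = features(train_c), puf_response(train_c)

w = np.zeros(N_STAGES + 1)
for _ in range(2000):                              # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))               # logistic prediction
    w -= 0.5 * X.T @ (p - y) / N_TRAIN

accuracy = np.mean((features(test_c) @ w > 0).astype(int) == puf_response(test_c))
print(f"prediction accuracy on unseen challenges: {accuracy:.1%}")
```

With a few thousand CRPs, such a linear model typically predicts unseen responses far better than chance, which is exactly why bare delay-based PUFs need countermeasures or protocol-level protection.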

Furthermore, based on the tools and capabilities used to attack PUFs, [8] distinguishes three attacker types: the non-invasive attacker, the semi-invasive attacker and the invasive attacker.

When an existing product-focused security evaluation methodology (e.g., the Common Criteria (CC)) is used to evaluate PUF-based systems, it is necessary to be aware of these as well as any other specific vulnerabilities that PUFs might introduce. Indeed, the EU UNIQUE Project identified evaluation gaps when the CC was applied to PUF-based products. The gaps are wider in the security functional requirements than in the security assurance requirements: while virtually no evaluation gaps were identified in the security assurance requirements, a few gaps exist in the security functional requirements when the CC is used to evaluate PUF technology.

For example, as discussed in [8], when we use the Security Functional Components (SFCs) defined in CC Part 2 to specify the identification and authentication families of the Security Functional Requirements (SFRs) expressed in a Protection Profile (PP) or a Security Target (ST) for a PUF-based system, some improvements are necessary to ensure a correct evaluation of the PUF's characteristics. As we know, “the main security aspect addressed by PUF technology is the uniqueness of the challenge-response pairs. This intrinsic characteristic enables the complete identification of each individual PUF product TOE sample. Compared to common identification technology (e.g., the one used in smart card products), the identification of the TOE is not based on an asset stored inside the TOE during the manufacturing phase (e.g., an identifier) but on intrinsic characteristics of the TOE. A PUF can be seen as a kind of fingerprint of the product.” This difference between common identification technology and PUF-based identification of the TOE matters for evaluation. “It is therefore necessary to specify this through a new set of Security Functional Requirements. This underlines the main limitation of the CC part 2 requirements. Indeed, even if the CC part 2 catalogue were to contain a class dealing with identification and authentication mechanisms, these families (from the FIA class) focus on users and not on the TOE itself.” In other words, “we need to have a way in order to identify genuine TOEs from potential counterfeit ones. This specific functionality could be specified in an extension to the Common Criteria part 2 Security Functional Requirements.”
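To illustrate what CRP-based identification of a TOE might look like at the protocol level, here is a toy Python sketch: the verifier stores challenge-response pairs at enrollment and later accepts a device whose fresh response is within a Hamming-distance threshold of the stored one. The device model, noise rate and threshold are hypothetical assumptions for illustration only; real schemes add fuzzy extraction, CRP management and anti-replay measures.

```python
# Toy sketch of CRP-based identification of a PUF-bearing TOE.
# The verifier stores challenge-response pairs at enrollment and later
# accepts a device if a fresh response is close enough to the stored one.
# Threshold and noise figures are illustrative assumptions.
import secrets
import random

RESP_BITS = 128
NOISE = 0.05           # assumed same-device bit-error rate
THRESHOLD = 0.20       # accept if fewer than 20% of bits differ

def hamming_fraction(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

class ToyPUFDevice:
    """Each device responds with a device-unique noisy bitstring per challenge."""
    def __init__(self):
        self.seed = secrets.token_bytes(16)
    def respond(self, challenge):
        base = random.Random(self.seed + challenge)     # device-unique mapping
        return [base.randint(0, 1) ^ (random.random() < NOISE)
                for _ in range(RESP_BITS)]

def enroll(device, n_challenges=4):
    crps = []
    for _ in range(n_challenges):
        challenge = secrets.token_bytes(8)
        crps.append((challenge, device.respond(challenge)))
    return crps

def identify(device, crp_db):
    challenge, stored = random.choice(crp_db)
    return hamming_fraction(device.respond(challenge), stored) < THRESHOLD

genuine = ToyPUFDevice()
db = enroll(genuine)
print("genuine accepted:", identify(genuine, db))          # expected True
print("clone accepted:  ", identify(ToyPUFDevice(), db))   # expected False
```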

To sum up, “it is necessary to develop methodologies to evaluate PUF security and identify those areas in the CC that need refining in order to accommodate evaluations of PUF-based systems.”

A guide to “Detect and Prevent Security Vulnerabilities in your Hardware Root of Trust”

In this section, we discuss how formal methods can be used to significantly improve the security verification of PUFs. We start with the definition of formal methods and then walk through the Tortuga Logic whitepaper. We end with some of the reasons why formal methods are not more widely adopted.

By definition, “formal methods are a particular kind of mathematically based techniques for the specification, development and verification of software and hardware systems.” Formal verification has long been used by most of the well-known hardware vendors, such as IBM, Intel, and AMD. Furthermore, “many security vulnerabilities hide within modes of the design not exercised by typical system usage and go undetected during traditional functional verification. Thus, a successful PUF security verification requires a shift in thinking. Current methodologies being deployed for security verification are simulation-based verification, manual design review, formal verification methods, and penetration testing.” Riding on the trend of hardware security evaluation moving away from detecting malicious IC logic such as hardware Trojans and towards trustworthy HRoT development, the use of formal methods during the pre-fabrication phase of PUF development has the potential to rule out every possible security vulnerability in the PUF design, and it has consequently emerged as a promising solution.

Manual design review and penetration testing are two popular strategies for system security analysis. “Manual review of the design architecture and code review to enumerate and prioritize threats to the system and identify potential vulnerabilities in the implementation require engineers with both hardware design, verification, and security expertise.” It takes time for companies to recruit engineers with this combination of skills. Moreover, “manual design review is not scalable, nor is it complete, and must be supplemented by other tool-aided verification methods such as simulation-based functional testing or formal analysis in order for it to yield any meaningful results on modern designs.” Penetration testing, in contrast, adopts the “red team” (playing the role of a potential attacker and performing penetration testing on the product in order to highlight vulnerable areas) and “blue team” (hardening the design against possible red-team attacks) concept. The two teams analyze system security from different angles, aiming for a comprehensive outcome. “Penetration testing is typically done post-silicon to better mimic the conditions under which an attacker will attempt to infiltrate the system, meaning that any vulnerabilities discovered will have to be mitigated with software or firmware to avoid a costly silicon re-spin. As with pre-silicon manual design review, penetration testing is an important component in hardware security verification. However, it requires engineers with specialized knowledge rarely available within most organizations.” Therefore, neither manual design review nor penetration testing alone is a good choice for PUF security verification.

Owing to the incomplete design coverage of simulation-based verification and its inability to detect more complex indirect leakages such as timing side channels, formal verification has emerged as an alternative to simulation-based testing for PUF security verification [9]. In this regard, “formal verification techniques reason about properties on an abstract model of the system for all possible inputs rather than running a large number of tests. As such, if a security vulnerability violates a property verified using formal analysis, it is guaranteed to be discovered.” The obvious shortcomings of formal verification, however, are the limits on the size of the design that can be analyzed and the simplifying assumptions that must be made to keep verification of larger designs tractable. When the PUF design is small enough, formal methods can in principle rule out every security vulnerability in the design. Nevertheless, “as the design size increases, it becomes difficult to provide the same level of security guarantees, requiring significant manual effort to correctly model the design at an abstract enough level to draw meaningful conclusions without too many assumptions.”
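The quoted contrast between simulation and formal analysis can be illustrated with a deliberately tiny example. In the Python sketch below, a toy access-control model hides a corner-case flaw (a hard-coded debug unlock value); random simulation usually misses it, whereas exhaustively checking the property over all inputs, which is feasible only because the model is tiny, is guaranteed to find it. Everything here, including the function names and the backdoor value, is a hypothetical illustration, not a real verification flow.

```python
# Toy illustration of exhaustive property checking versus random simulation.
# The "design" is a small access-control model with a corner-case flaw:
# one specific 16-bit debug code bypasses the lock.
import itertools
import random

DEBUG_BACKDOOR = 0xBEEF    # hidden flaw: this unlock code always works

def grant_access(locked, unlock_code):
    """Security property we want: if locked, access is never granted."""
    if unlock_code == DEBUG_BACKDOOR:
        return True        # the vulnerability
    return not locked

def random_simulation(n_tests=10_000):
    for _ in range(n_tests):
        locked, code = True, random.getrandbits(16)
        if grant_access(locked, code):
            return f"violation found with code {code:#x}"
    return "no violation found (flaw missed)"

def exhaustive_check():
    # Formal-style reasoning on this tiny model: enumerate *all* inputs.
    for locked, code in itertools.product([True, False], range(1 << 16)):
        if locked and grant_access(locked, code):
            return f"violation proven with code {code:#x}"
    return "property holds for all inputs"

print("simulation:", random_simulation())   # usually misses the 1-in-65536 case
print("exhaustive:", exhaustive_check())    # always finds it
```

Real formal tools do not enumerate inputs this way, of course; they reason symbolically over an abstract model, which is precisely why the design-size and abstraction limitations mentioned above matter.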

There are many reasons why people don't use formal methods [9, 10], including the following:

  1. Formal methods (FM) remain a niche: there is a lack of familiarity with verification tools among researchers and academics working on hardware/software security architectures.
  2. “Existing security verification approaches are time consuming, hard to measure, and are often a burden to engineering schedules.”
  3. “Formal verification may be appropriate as part of a larger strategy, but it does not solve the larger security problem crossing hardware and software.”
  4. There is still no consensus on the terms and scope of FM. Per [10], FM spans two activities: formal specification, the study of how to write precise, unambiguous specifications, and formal verification, the study of how to prove things correct. Verification splits into code verification (CV) and design verification (DV), and specification similarly splits into code specification (CS) and design specification (DS).
  5. Verifying code is a hard problem and it will probably remain a specialist thing for the foreseeable future.
  6. “Verifying designs is much easier, but has cultural barriers to adoption. Although automated testing and code review have already gone mainstream, code contracts still remain as a niche thing to do.”

Conclusion

It is important to provide a high level of security assurance in PUFs by ensuring that:

1) the PUF itself is built securely, and 2) the PUF is securely integrated into the larger system, which is fundamental to ensuring system security.

Formal methods (FM) can be used to significantly improve the security verification of PUFs and raise the confidence level in PUF security assurance. However, there are many reasons why people do not yet use formal methods. Moreover, the state of the art in security verification still consists of designs that are only tested with certain security property test cases, which are not exhaustive, and architectures whose security properties lack formal specifications. Much more has to be done before organizations take notice and deploy such robust and rigorous examination of their work.

To address the drawbacks of traditional security verification techniques, including FM, Tortuga Logic claims to provide security verification products that detect and prevent hardware security vulnerabilities within existing chip verification flows. This is made possible by their patented technology for detecting unexpected and unidentified information flows in the system during functional simulation. Although this sounds quite promising, the jury is still out on whether Tortuga Logic really delivers what it claims.
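The general idea behind detecting unexpected information flows during simulation can be illustrated with a minimal taint-tracking sketch in Python: secret-derived values carry a tag, tags propagate through operations, and a violation is flagged if a tagged value reaches an observable output. This is a simplified illustration of information-flow tracking in general, with made-up signal names; it is not Tortuga Logic's product, patented technology or API.

```python
# Minimal taint-tracking sketch of information-flow checking during simulation.
# Secret-derived values are tagged, tags propagate through operations, and a
# flow is flagged if a tagged value reaches an observable output.
class Tainted:
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted
    def __xor__(self, other):
        return Tainted(self.value ^ other.value, self.tainted or other.tainted)
    def __and__(self, other):
        return Tainted(self.value & other.value, self.tainted or other.tainted)

def check_output(name, signal):
    if signal.tainted:
        print(f"FLOW VIOLATION: secret data reaches observable output '{name}'")
    else:
        print(f"ok: '{name}' carries no secret-derived data")

secret_key = Tainted(0b1011, tainted=True)    # e.g., a PUF-derived key
public_in = Tainted(0b0110)

ciphertext = secret_key ^ public_in           # intended use inside the trust boundary
debug_bus = public_in & Tainted(0b1111)       # should be independent of the key
leaky_bus = debug_bus ^ secret_key            # unintended flow to an observable bus

check_output("debug_bus", debug_bus)          # ok
check_output("leaky_bus", leaky_bus)          # flagged
```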

References

  1. https://en.wikipedia.org/wiki/Formal_methods
  2. https://eprint.iacr.org/2016/846.pdf
  3. Jin, Y.: Introduction to Hardware Security. Electronics 2015, 4, 763-784; doi:10.3390/electronics4040763
  4. Intel: 4th Gen Core Family Desktop Specification Update (2016), http://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/4th-gen-core-family-desktop-specification-update.pdf
  5. ARM: TrustZone information page. Tech. rep. (2016), http://www.arm.com/products/processors/technologies/trustzone/
  6. McKeen, F., Alexandrovich, I., Berenzon, A., Rozas, C.V., Shafi, H., Shanbhogue, V., Savagaonkar, U.R.: Innovative instructions and software model for isolated execution. In: Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy. ACM (2013)
  7. AMD: AMD Memory Encryption (2016), http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf, accessed May 2016
  8. UNIQUE Report 238811/D3.2/V.1.0 “New Methodologies for Security Evaluation”, 2011-05-31
  9. https://www.tortugalogic.com/hrot-whitepaper/
  10. https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
