Physically unclonable functions (PUFs) have been widely proposed as hardware primitives for device identification and cryptographic key generation. While strong PUFs provide an exponential number of challenge–response pairs (CRPs) for device authentication, most are vulnerable to machine-learning (ML)-based modeling attacks. To make the underlying learning problem inherently hard, adversarial-based PUFs pseudo-randomly switch among several predefined internal states to poison ML algorithms. However, existing adversarial-based PUFs sacrifice either reliability or resistance to ML attacks. Furthermore, most prior works evaluate ML resistance only through empirical experiments, which provides limited evidence of security. This work explores a novel adversarial-based PUF, the Configurable Poisoning PUF (CP PUF), that simultaneously maintains reliability and protects the internal state with a reliability trapdoor. We then connect the proposed CP PUF to the learning parity with noise (LPN) problem to provide a theoretical security guarantee. Experiments show that the reliability of the proposed CP PUF remains at least 88.9% and outperforms existing adversarial-based PUFs across different noise levels. Moreover, the CP PUF limits empirical ML attacks to a prediction accuracy of 50.16%, a near-optimal result since 50% corresponds to random guessing of single-bit responses.
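For reference, the learning parity with noise (LPN) problem that underpins the claimed security guarantee can be stated in its standard textbook form as follows; this is the generic formulation, not the paper's specific reduction, and the dimension $n$ and noise rate $\tau$ are illustrative symbols rather than parameters taken from the work:

\[
\text{Given samples } (\mathbf{a}_i, b_i) \text{ with } \mathbf{a}_i \xleftarrow{\$} \{0,1\}^n \text{ uniform and } b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle \oplus e_i, \quad e_i \sim \mathrm{Ber}(\tau),
\]
\[
\text{find the secret } \mathbf{s} \in \{0,1\}^n,
\]

where $\langle \cdot, \cdot \rangle$ denotes the inner product modulo 2 and $\mathrm{Ber}(\tau)$ is the Bernoulli distribution with bias $0 < \tau < 1/2$. Recovering $\mathbf{s}$ from polynomially many such noisy parity samples is believed to be computationally hard, which is why a PUF whose noisy responses can be mapped onto this form inherits a provable hardness argument against modeling attacks.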