On the Scaling of Machine Learning Attacks on PUFs with Application to Noise Bifurcation

Johannes Tobisch, Georg T. Becker

11th Workshop on RFID Security (RFIDSec 2015), New York, USA, June 23-24, 2015


Physical Unclonable Functions (PUFs) are seen as a promising alternative to traditional cryptographic algorithms for secure and lightweight device authentication. However, most strong PUF proposals can be attacked using machine learning algorithms that derive a precise software model of the PUF. One of the most popular strong PUFs is the XOR Arbiter PUF. In this paper we examine the machine learning resistance of the XOR Arbiter PUF by replicating the attack by Rührmair et al. from CCS 2010. Using a more efficient implementation, we are able to confirm the predicted exponential increase in the number of responses needed as the number of XORs grows. However, our results show that the machine learning performance depends not only on the PUF design and the number of response bits used, but also on the specific PUF instance under attack. This is an important observation for machine learning attacks on PUFs in general. This instance-dependent behavior makes it difficult to determine precise lower bounds on the required number of challenge-response pairs (CRPs), and hence such numbers should always be treated with caution. Furthermore, we examine a machine learning countermeasure called noise bifurcation that was recently introduced at HOST 2014. In noise bifurcation, the machine learning resistance of XOR Arbiter PUFs is increased at the cost of using more responses during the authentication process. However, noise bifurcation has a much lower impact on the machine learning resistance than stated. In particular, we were able to attack all of the PUF designs that could not be attacked in the HOST paper.
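For readers unfamiliar with the attack target, the XOR Arbiter PUF is commonly modeled with the standard additive delay model: each arbiter chain computes the sign of a learned weight vector applied to a parity-transformed challenge, and the chain outputs are XORed. The sketch below simulates responses under that model; it is an illustrative toy (the function names and random weights are assumptions, not code from the paper), but it is exactly this linear-threshold structure that machine learning attacks such as logistic regression exploit.

```python
import numpy as np

def parity_features(challenge):
    """Map an n-bit challenge to the (n+1)-dim parity feature vector
    Phi_i = prod_{j>=i} (1 - 2*c_j), with a trailing constant 1."""
    c = 1 - 2 * np.asarray(challenge)          # {0,1} -> {+1,-1}
    phi = np.cumprod(c[::-1])[::-1]            # suffix products
    return np.append(phi, 1)                   # bias term

def xor_arbiter_puf_response(weights, challenge):
    """weights: k x (n+1) array, one delay vector per arbiter chain.
    Returns the XOR of the k individual arbiter outputs."""
    phi = parity_features(challenge)
    bits = (weights @ phi) > 0                 # each chain: sign of linear model
    return int(np.bitwise_xor.reduce(bits))

# Example: a 4-XOR, 64-stage PUF instance with Gaussian delay weights
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 65))
challenge = rng.integers(0, 2, 64)
response = xor_arbiter_puf_response(weights, challenge)
```

Because each chain is linear in the feature vector, a single chain is easy to learn; XORing k chains multiplies the effective model complexity, which is why the required number of CRPs grows quickly with k.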


tags: Arbiter PUF, Machine Learning Attacks, Noise-Bifurcation, Physical Unclonable Function