PUFs create numbers from the fine-grained randomness found in an individual IC, arising from doping and manufacturing variations, for example.
In this case, variations in the carrier transport of graphene transistors were used; these follow Gaussian random distributions, according to Penn State University, where the PUF was made.
The team first fabricated nearly 2,000 graphene transistors, intended to be identical but differing in Dirac voltage, Dirac conductance and carrier mobility.
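The principle of turning such analogue device variation into a digital key can be sketched as follows. This is a minimal illustration using median thresholding on simulated Gaussian parameters; the distribution parameters and the bit-extraction scheme are invented for illustration and are not the Penn State method.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical Gaussian device-to-device variation (illustrative scale,
# not the measured graphene-transistor values).
n_devices = 64
dirac_voltage = rng.normal(loc=0.0, scale=0.1, size=n_devices)

# One simple way to turn analogue variation into key bits: threshold
# each device's parameter against the population median.
key_bits = (dirac_voltage > np.median(dirac_voltage)).astype(int)
key = "".join(map(str, key_bits))

print(key)       # a 64-bit device fingerprint
print(len(key))  # 64
```

Because the underlying variation is random and device-specific, two chips put through the same procedure yield different keys.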
“After the graphene transistors were implemented into PUFs, the researchers modelled their characteristics to create a simulation of 64 million graphene-based PUFs,” according to the university.
Machine learning was used to train an artificial intelligence on the graphene PUF simulation data, testing whether the AI could use this training to make predictions about the encrypted data and reveal system insecurities.
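A toy version of such an attack test can be sketched as below. It assumes idealised, statistically independent response bits (a stand-in for the 64-million-device simulation, not the study's actual model or attacker), and shows that a simple logistic-regression "attacker" trained on most of a key cannot predict a held-out bit better than chance.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Idealised PUF responses: independent Gaussian variations thresholded
# to bits, so one bit carries no information about another.
n_pufs, n_bits = 5000, 33
bits = (rng.normal(size=(n_pufs, n_bits)) > 0).astype(float)
X, y = bits[:, :-1], bits[:, -1]  # predict the last bit from the rest

# Logistic-regression attacker trained by plain gradient descent
# on the first 4,000 simulated devices.
w = np.zeros(n_bits - 1)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X[:4000] @ w)))
    w -= 0.1 * X[:4000].T @ (p - y[:4000]) / 4000

# Evaluate on the held-out 1,000 devices.
pred = (X[4000:] @ w) > 0
accuracy = np.mean(pred == y[4000:].astype(bool))
print(round(float(accuracy), 2))  # hovers near 0.5: nothing was learned
```

An accuracy near 0.5 is the desired outcome for a PUF: the model has extracted no exploitable structure from the responses.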
“Neural networks are very good at developing a model from a huge amount of data, even if humans are unable to,” said researcher Saptarshi Das. “We found that AI could not develop a model, and it was not possible for the encryption process to be learned.”
This suggests, said Das, that potential hackers could not use breached data to reverse engineer a device for future exploitation, and that, even if the key could be predicted, the graphene PUF could generate a new key through reconfiguration without additional hardware or replacement of components.
“Normally, once a system’s security has been compromised, it is permanently compromised,” explained fellow researcher Akhil Dodda. “We developed a scheme where such a compromised system could be reconfigured and used again, adding tamper resistance as another security feature.”
The work is published in Nature Electronics as ‘Graphene-based physically unclonable functions that are reconfigurable and resilient to machine learning attacks’.