
Software protections for artificial neural networks

Linda Guiga
SSH - Secure and Safe Hardware
LTCI - Laboratoire Traitement et Communication de l'Information
Abstract: In a context where Neural Networks (NNs) are ubiquitous in our daily lives, whether in smartphones, face and biometric recognition or the medical field, their security is of the utmost importance. If such models leak information, not only could this imperil the privacy of sensitive data, it could also infringe on intellectual property. Selecting the right architecture and training the corresponding parameters is time-consuming (it can take months) and requires large computational resources; this is why an NN constitutes intellectual property. Moreover, once a malicious user knows the architecture and/or the parameters, multiple attacks can be carried out, such as adversarial ones. Adversarial attackers craft a malicious datapoint by adding a small perturbation to the original input, such that the noise is undetectable to the human eye but fools the model; such attacks could be the basis of impersonations. Membership inference attacks, which aim at leaking information about the training dataset, are also facilitated by knowledge of the model. More generally, when a malicious user has access to a model, she also has access to the manifold of the model's outputs, making it easier for her to fool the model.

Protecting NNs is therefore paramount. However, since 2016, they have been the target of increasingly powerful reverse-engineering attacks. Mathematical reverse-engineering attacks solve equations or study a model's internal structure to reveal its parameters. Side-channel attacks, on the other hand, exploit leaks in a model's implementation, such as cache accesses or power consumption, to uncover the parameters and the architecture. In this thesis, we seek to protect NN models by changing their internal structure and their software implementation.

To this aim, we propose four novel countermeasures. In the first three, we consider a gray-box context where the attacker has partial access to the model, and we leverage parasitic models to counter three types of attacks. We first tackle mathematical attacks that recover a model's parameters from its internal structure: we propose to add one or several parasitic Convolutional Neural Networks (CNNs) at various locations in the base model, and we measure the incurred change in the structure by observing how the generated adversarial samples are modified. This method, however, does not thwart side-channel attacks that extract the parameters by analyzing power consumption or electromagnetic emanations. To mitigate such attacks, we add dynamism to the previous protocol: instead of one or several fixed parasites, we incorporate different parasites at each run, at the entrance of the base model. This hides the model's input, which is necessary for precise weight extraction; we show the impact of this dynamic incorporation through two simulated attacks. Along the way, we observe that parasitic models affect adversarial examples. Our third contribution derives from this observation: to mitigate adversarial attacks, we dynamically incorporate another type of parasite, autoencoders. We demonstrate the efficiency of this countermeasure against common adversarial attacks.
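As an illustration of the parasite mechanism summarized above, the following numpy sketch applies a small identity-approximating parasitic convolution to a feature map and draws a different parasite from a pool at each run (the dynamic variant). The kernel construction, pool size and function names (make_parasite, apply_parasite) are illustrative assumptions for this sketch, not the exact parasite architectures or training procedure used in the thesis.

import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(x, kernel):
    """Naive 'same' 2D convolution of a single-channel map with a k x k kernel."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad, mode="constant")
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

def make_parasite(k=3, noise=1e-3):
    """A parasitic kernel close to the identity (Dirac) kernel: it leaves the
    feature map almost unchanged but adds extra operations to the model's
    internal structure. (Illustrative assumption, not the thesis's parasites.)"""
    kernel = np.zeros((k, k))
    kernel[k // 2, k // 2] = 1.0                    # identity part
    kernel += noise * rng.standard_normal((k, k))   # small perturbation
    return kernel

# A pool of parasites; a different one is drawn at every run (dynamic variant).
parasite_pool = [make_parasite() for _ in range(4)]

def apply_parasite(feature_map):
    kernel = parasite_pool[rng.integers(len(parasite_pool))]
    return conv2d_same(feature_map, kernel)

# Toy check: the parasite barely changes an input or intermediate feature map,
# so the base model's predictions are essentially preserved.
fmap = rng.standard_normal((8, 8))
print(np.max(np.abs(apply_parasite(fmap) - fmap)))  # small (on the order of 1e-2 here)

Because the parasite stays close to the identity, the protected model's outputs are essentially unchanged, while the internal structure an attacker reconstructs (and the adversarial samples derived from it) no longer matches the original model.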
In the second part, we focus on a black-box context where the attacker knows neither the architecture nor the parameters. Architecture extraction attacks rely on the sequential execution of NNs. The fourth and last contribution presented in this thesis therefore consists in reordering neuron computations: we propose to compute neuron values by blocks, in a depth-first fashion, and to add randomness to this execution. We prove that this new way of carrying out CNN computations prevents a potential attacker from narrowing the possible architectures of the initial model down to a small enough set of candidates.
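The reordering idea can be illustrated with a small numpy sketch, written under our own assumptions rather than as the thesis's actual implementation: a stack of two 'valid' 3x3 convolutions is evaluated tile by tile, depth first and in a randomized order, yet still produces exactly the same output as the usual layer-by-layer execution. The block size and the functions layerwise and depth_first_blocks are hypothetical names introduced for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, kernel):
    """Naive 'valid' 2D convolution (no padding)."""
    k = kernel.shape[0]
    out = np.empty((x.shape[0] - k + 1, x.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * kernel)
    return out

def layerwise(x, k1, k2):
    """Standard sequential execution: layer 1 entirely, then layer 2."""
    return conv2d_valid(conv2d_valid(x, k1), k2)

def depth_first_blocks(x, k1, k2, block=4):
    """Compute the final output tile by tile, in a random order.
    For each tile, only the input region feeding it is pushed through both
    layers, so execution no longer follows the one-layer-after-another
    timeline that architecture extraction attacks rely on."""
    h_out = x.shape[0] - 4          # two 'valid' 3x3 convs shrink each dim by 4
    w_out = x.shape[1] - 4
    out = np.empty((h_out, w_out))
    tiles = [(r, c) for r in range(0, h_out, block) for c in range(0, w_out, block)]
    for idx in rng.permutation(len(tiles)):   # randomized tile order
        r, c = tiles[idx]
        r1, c1 = min(r + block, h_out), min(c + block, w_out)
        patch = x[r:r1 + 4, c:c1 + 4]         # receptive field of the tile
        out[r:r1, c:c1] = conv2d_valid(conv2d_valid(patch, k1), k2)
    return out

x = rng.standard_normal((16, 16))
k1, k2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
print(np.allclose(layerwise(x, k1, k2), depth_first_blocks(x, k1, k2)))  # True

The check at the end confirms the design point: the functional result is identical to the sequential execution, only the order of the intermediate computations changes from run to run.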
Metadata

https://tel.archives-ouvertes.fr/tel-03715693
Contributor: ABES STAR
Submitted on: Wednesday, July 6, 2022 - 4:24:16 PM
Last modification on: Thursday, July 7, 2022 - 5:13:50 PM

File

121616_GUIGA_2022_archivage.pd...
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-03715693, version 1

Citation

Linda Guiga. Software protections for artificial neural networks. Cryptography and Security [cs.CR]. Institut Polytechnique de Paris, 2022. English. ⟨NNT : 2022IPPAT024⟩. ⟨tel-03715693⟩
