
Deep Learning in Adversarial Context

Abstract: This thesis addresses adversarial attacks and defenses in deep learning. We propose to improve adversarial attacks in terms of speed, magnitude of distortion, and invisibility. We contribute by defining invisibility through smoothness and integrating it into the optimization that produces adversarial examples; this yields smooth adversarial perturbations with a lower magnitude of distortion. To produce adversarial examples more efficiently, we propose an optimization algorithm, the Boundary Projection (BP) attack, based on knowledge of the adversarial problem: when the current solution is not adversarial, BP searches against the gradient of the network to reach misclassification; when the current solution is adversarial, BP searches along the decision boundary to minimize the distortion. BP efficiently generates adversarial examples with low distortion. We also study defenses. We apply patch replacement on both images and features: it removes adversarial effects by replacing input patches with the most similar patches from the training data. Experiments show that patch replacement is computationally cheap and robust against adversarial attacks.
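
To illustrate the two-stage search described in the abstract, here is a minimal PyTorch-style sketch, assuming a single input image in [0, 1], a classifier model, and hypothetical names (two_stage_attack, step_size). It follows the spirit of alternating between crossing the decision boundary and reducing distortion; it is not the exact published BP algorithm.

import torch
import torch.nn.functional as F

def two_stage_attack(model, x, label, steps=100, step_size=0.01):
    # Illustrative sketch only: alternate between (i) climbing the loss to
    # cross the decision boundary and (ii) shrinking the perturbation once
    # the sample is already misclassified. Assumes x has shape (1, C, H, W)
    # with values in [0, 1] and label has shape (1,).
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(dim=1).item() == label.item():
            # Not yet adversarial: follow the gradient of the loss to push
            # the sample across the decision boundary.
            loss = F.cross_entropy(logits, label)
            grad, = torch.autograd.grad(loss, x_adv)
            direction = grad / (grad.norm() + 1e-12)
        else:
            # Already adversarial: move back toward the original image to
            # reduce the distortion while staying near the boundary.
            diff = x - x_adv
            direction = diff / (diff.norm() + 1e-12)
        x_adv = (x_adv + step_size * direction).clamp(0.0, 1.0)
    return x_adv.detach()

Similarly, the patch-replacement defense can be sketched as a nearest-neighbour lookup into a codebook of clean training patches. The names patch_replacement and codebook are hypothetical, and the actual method in the thesis also operates on feature maps, not only on pixels.

def patch_replacement(x, codebook, patch=8):
    # Replace each non-overlapping patch of x (shape (C, H, W)) with its
    # nearest neighbour in `codebook`, an (N, C*patch*patch) tensor of
    # flattened patches extracted from clean training images.
    c, h, w = x.shape
    out = x.clone()
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = x[:, i:i + patch, j:j + patch].reshape(1, -1)
            idx = torch.cdist(p, codebook).argmin()
            out[:, i:i + patch, j:j + patch] = codebook[idx].reshape(c, patch, patch)
    return out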

https://tel.archives-ouvertes.fr/tel-03447254
Contributor: Abes Star
Submitted on: Wednesday, November 24, 2021 - 5:04:13 PM
Last modification on: Friday, November 26, 2021 - 3:13:07 AM

File

2021ENSR0028_ZHANG_Hanwei_Thes...
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-03447254, version 1

Citation

Hanwei Zhang. Deep Learning in Adversarial Context. Neural and Evolutionary Computing [cs.NE]. École normale supérieure de Rennes, 2021. English. ⟨NNT : 2021ENSR0028⟩. ⟨tel-03447254⟩
