Patch-wise iterative attack (accepted by ECCV 2020) to improve the transferability of adversarial examples.


Patch-wise Iterative Attack (accepted by ECCV 2020)

This is the TensorFlow code for our paper Patch-wise Attack for Fooling Deep Neural Network; the PyTorch version can be found here.

In our paper, we propose a novel Patch-wise Iterative Method that uses an amplification factor and guides the gradient toward a feasible direction. Compared with state-of-the-art attacks, we further improve the success rate by 3.7% on average for normally trained models and 9.1% for defense models. We hope that the proposed method will serve as a benchmark for evaluating the robustness of various deep models and defense methods.
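To illustrate the core update, here is a minimal PyTorch-style sketch of a patch-wise iterative attack: an amplified step, the "cut noise" that overflows the eps-ball, and a projection kernel that reuses that overflow on the surrounding patch. This is not the repository's TensorFlow implementation; the function names (pi_fgsm, project_kernel), the uniform zero-centre kernel, and the default hyper-parameters are illustrative assumptions.

import torch
import torch.nn.functional as F

def project_kernel(kern_size=3, channels=3):
    # Uniform kernel with a zero centre: each pixel's overflow is shared
    # equally among its neighbours (illustrative choice of the project kernel W_p).
    kern = torch.ones(kern_size, kern_size) / (kern_size * kern_size - 1)
    kern[kern_size // 2, kern_size // 2] = 0.0
    return kern.view(1, 1, kern_size, kern_size).repeat(channels, 1, 1, 1)

def pi_fgsm(model, x, y, eps=16 / 255, num_iter=10, beta=10.0, kern_size=3):
    # x: batch of images in [0, 1] with shape (N, 3, H, W); y: true labels.
    step = beta * eps / num_iter            # amplified step size: beta * (eps / T)
    gamma = step                            # projection factor (assumed equal to the step)
    w_p = project_kernel(kern_size).to(x.device)
    a = torch.zeros_like(x)                 # accumulated amplified noise
    x_adv = x.clone().detach()
    for _ in range(num_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        a = a + step * grad.sign()
        # "cut noise": the part of the accumulated noise that overflows the eps-ball
        cut = torch.clamp(a.abs() - eps, min=0.0) * a.sign()
        # project the overflow onto the surrounding patch instead of discarding it
        proj = gamma * F.conv2d(cut, w_p, padding=kern_size // 2, groups=3).sign()
        a = a + proj
        x_adv = x_adv.detach() + step * grad.sign() + proj
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # clip to the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)                    # keep a valid image
    return x_adv.detach()

The defaults above are only for illustration; see project_iter_attack.py in this repository for the exact settings used in the paper.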

For the targeted attack case, we extend our Patch-wise Iterative Method to the Patch-wise++ Iterative Method. More details can be found here.

Implementation

python project_iter_attack.py

  • The output images are in "output/"

Results

Figure: result

Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{GaoZhang2020PatchWise,
  author    = {Lianli Gao and Qilong Zhang and Jingkuan Song and Xianglong Liu and Heng Tao Shen},
  title     = {Patch-Wise Attack for Fooling Deep Neural Network},
  booktitle = {European Conference on Computer Vision},
  year      = {2020}
}
