Hierarchical-Bayesian-Defense - Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference (Openreview)

Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference [paper]

This code builds on the official repository for this paper. We only replace the BNN regularizer, swapping the standard ELBO for an enhanced Bayesian regularizer based on the hierarchical ELBO.



Citation

If you find this work helpful, please cite it as:

@misc{lee2021towards,
title={Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference},
author={Byung-Kwan Lee and Youngjoon Yu and Yong Man Ro},
year={2021},
url={https://openreview.net/forum?id=Cue2ZEBf12}
}

Hierarchical-Bayesian-Defense

Dataset

  • CIFAR-10
  • STL-10
  • CIFAR-100
  • Tiny-ImageNet

Network

  • VGG16 (for CIFAR-10/CIFAR-100/Tiny-ImageNet)
  • Aaron (for STL-10)
  • WideResNet (for CIFAR-10/CIFAR-100)

Attack (by torchattacks)

  • PGD attack
  • EOT-PGD attack (usage sketch below)
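
Both attacks are available in the torchattacks package. Below is a minimal usage sketch (our illustration, not the repository's evaluation code; `model`, `images`, and `labels` are assumed to exist, and eps matches the `max_norm` used in the scripts below). EOT-PGD averages gradients over repeated stochastic forward passes, which is the appropriate choice when attacking a BNN.

import torchattacks

# model: a trained classifier; images/labels: a test batch with pixels in [0, 1].
pgd = torchattacks.PGD(model, eps=0.03, alpha=2/255, steps=10)
eot_pgd = torchattacks.EOTPGD(model, eps=0.03, alpha=2/255, steps=10, eot_iter=10)

x_adv = pgd(images, labels)          # standard PGD
x_adv_eot = eot_pgd(images, labels)  # PGD with expectation over transformation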

Defense methods

  • adv: Adversarial training
  • adv_vi: Adversarial training with a Bayesian neural network
  • adv_hvi: Adversarial training with an enhanced Bayesian neural network based on the hierarchical ELBO (loss sketch below)
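
The three defenses differ only in how the training loss is composed. Here is a hedged sketch in our own notation (not the repository's exact code): `kl_w` is the KL divergence between the weight posterior and its prior, and `kl_hyper` is the extra KL term over the prior's own parameters that the hierarchical ELBO introduces.

def training_loss(defense, ce_adv, kl_w=0.0, kl_hyper=0.0, beta=1.0):
    # ce_adv: cross-entropy on PGD adversarial examples.
    if defense == "adv":      # plain adversarial training
        return ce_adv
    if defense == "adv_vi":   # adversarial training + ELBO regularizer
        return ce_adv + beta * kl_w
    if defense == "adv_hvi":  # adversarial training + hierarchical ELBO
        return ce_adv + beta * (kl_w + kl_hyper)
    raise ValueError(f"unknown defense: {defense}")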

How to Train

1. Adversarial training

Run train_adv.sh

lr=0.01
steps=10
max_norm=0.03
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python ./main_adv.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth
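
For reference, here is a minimal PyTorch sketch of the PGD inner loop that adversarial training runs on every batch (the standard algorithm, not necessarily the repository's exact implementation; `steps` and `max_norm` correspond to the script variables above, and the step size `alpha` is our assumption).

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, max_norm=0.03, steps=10, alpha=0.03 / 4):
    # Random start inside the L-infinity ball, then iterated signed-gradient steps.
    x_adv = (x + torch.empty_like(x).uniform_(-max_norm, max_norm)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()                 # ascend the loss
        x_adv = x + (x_adv - x).clamp(-max_norm, max_norm)  # project back to the ball
        x_adv = x_adv.clamp(0, 1)                           # keep valid pixel range
    return x_adv.detach()

Training then minimizes the usual cross-entropy on pgd_attack(model, x, y) instead of on the clean batch.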

2. Adversarial training with BNN

Run train_adv_vi.sh

lr=0.01
steps=10
max_norm=0.03
sigma_0=0.1
init_s=0.1
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv_vi
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python3 ./main_adv_vi.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --sigma_0 ${sigma_0} \
                        --init_s ${init_s} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth
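
Here `sigma_0` plausibly sets the prior standard deviation and `init_s` the initial posterior scale. Under that (assumed) reading, a minimal sketch of a mean-field Bayesian layer with the reparameterization trick and its closed-form KL term:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    # Mean-field Gaussian posterior q(w) = N(mu, sigma^2), prior p(w) = N(0, sigma_0^2).
    def __init__(self, in_features, out_features, sigma_0=0.1, init_s=0.1):
        super().__init__()
        self.sigma_0 = sigma_0
        self.mu = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(
            torch.full((out_features, in_features), math.log(init_s)))

    def forward(self, x):
        # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I).
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)
        return F.linear(x, w)

    def kl(self):
        # Closed-form KL(q(w) || p(w)) for diagonal Gaussians, summed over weights.
        sigma = self.log_sigma.exp()
        return (math.log(self.sigma_0) - self.log_sigma
                + (sigma ** 2 + self.mu ** 2) / (2 * self.sigma_0 ** 2) - 0.5).sum()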

3. Adversarial training with enhanced Bayesian regularizer based on hierarchical-ELBO

Run train_adv_hvi.sh

lr=0.01
steps=10
max_norm=0.03
sigma_0=0.1
init_s=0.1
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv_hvi
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python3 ./main_adv_hvi.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --sigma_0 ${sigma_0} \
                        --init_s ${init_s} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth
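
The hierarchical ELBO additionally treats the prior's scale as a latent variable with its own variational posterior, which contributes the second KL term in the adv_hvi loss. One hedged way to sketch that extra piece (our illustration only; the paper's exact parameterization may differ):

import math
import torch
import torch.nn as nn

class PriorScalePosterior(nn.Module):
    # Gaussian hyper-posterior q(theta) = N(m, s^2) over the log prior-std theta.
    def __init__(self, sigma_0=0.1, init_s=0.1):
        super().__init__()
        self.m = nn.Parameter(torch.tensor(math.log(sigma_0)))
        self.log_s = nn.Parameter(torch.tensor(math.log(init_s)))

    def kl_hyper(self, prior_mean=math.log(0.1), prior_std=1.0):
        # KL(q(theta) || p(theta)) against a Gaussian hyper-prior p(theta).
        s = self.log_s.exp()
        return (math.log(prior_std) - self.log_s
                + (s ** 2 + (self.m - prior_mean) ** 2) / (2 * prior_std ** 2) - 0.5)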

How to Test

Testing adversarial robustness

Run acc_under_attack.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`, `adv`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
root=./datasets
n_ensemble=50
step=10
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 acc_under_attack.py \
    --model $model \
    --defense $defense \
    --data $data \
    --root $root \
    --n_ensemble $n_ensemble \
    --step $step \
    --max_norm $max_norm
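
`n_ensemble` controls how many stochastic forward passes are averaged when evaluating the BNN. A hedged sketch of the usual recipe (illustrative, not the repository's exact loop):

import torch

@torch.no_grad()
def ensemble_predict(model, x, n_ensemble=50):
    # Average the softmax outputs of n_ensemble stochastic forward passes;
    # each pass resamples the weights of the Bayesian layers.
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_ensemble)])
    return probs.mean(dim=0)

Robust accuracy is then the fraction of adversarial examples for which ensemble_predict(model, x_adv).argmax(1) equals the true label.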

How to check the learned parameters and KL divergence

Run check_parameters.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 check_parameters.py \
    --model $model \
    --defense $defense \
    --data $data \
    --max_norm $max_norm
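
For a quick manual look at the learned variational parameters, something like the following also works; note that the "mu"/"sigma" substrings are an assumption about the checkpoint's key names, not a guarantee.

import torch

ckpt = torch.load("./checkpoint/tiny-imagenet_vgg_0.03_adv_hvi.pth",
                  map_location="cpu")
state = ckpt["state_dict"] if "state_dict" in ckpt else ckpt
for name, t in state.items():
    if "mu" in name or "sigma" in name:  # assumed naming of variational params
        print(f"{name}: shape={tuple(t.shape)}, "
              f"mean={t.mean().item():.4f}, std={t.std().item():.4f}")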

How to check uncertainty by predictive entropy

Run uncertainty.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
root=./datasets
n_ensemble=50
step=10
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 uncertainty.py \
    --model $model \
    --defense $defense \
    --data $data \
    --root $root \
    --n_ensemble $n_ensemble \
    --step $step \
    --max_norm $max_norm
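
Predictive entropy is the entropy of the ensemble-averaged predictive distribution; higher values mean higher uncertainty. A hedged sketch (our illustration, reusing the averaging recipe from above):

import torch

@torch.no_grad()
def predictive_entropy(model, x, n_ensemble=50, eps=1e-12):
    # H[p_mean] = -sum_c p_mean(c) * log p_mean(c), computed per example.
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_ensemble)])
    p_mean = probs.mean(dim=0)
    return -(p_mean * (p_mean + eps).log()).sum(dim=1)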