3.1.1. Adversarial Example

Pengfei Xie, Shuhao Shi, Shuai Yang, Kai Qiao, Ningning Liang, Linyuan Wang, Jian Chen, Guoen Hu, Bin Yan

Suppose x is a clean sample and ytrue is its true label. A trained DNN F1 correctly classifies x as ytrue. By adding a small perturbation δ to the original sample, the adversarial example x + δ causes F1 to misclassify. The perturbation is generally obtained by maximizing the loss function J(x, ytrue, θ), where θ denotes the network parameters; the loss function is typically the cross-entropy loss.
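A classic one-step instance of this loss-maximization idea is the fast gradient sign method (FGSM), which sets δ = ε · sign(∇x J(x, ytrue, θ)). The following is a minimal NumPy sketch, assuming a linear softmax classifier as a stand-in for the DNN F1 (the function names and the linear model are illustrative, not from the original text); for a linear model the input gradient of the cross-entropy loss has the closed form Wᵀ(p − onehot(ytrue)).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy_loss(W, b, x, y_true):
    # J(x, y_true, theta) for a linear softmax classifier (stand-in for F1).
    p = softmax(W @ x + b)
    return -np.log(p[y_true])

def input_gradient(W, b, x, y_true):
    # Gradient of the cross-entropy loss w.r.t. the INPUT x:
    # dJ/dx = W^T (p - onehot(y_true)).
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y_true] = 1.0
    return W.T @ (p - onehot)

def fgsm_perturbation(W, b, x, y_true, eps):
    # One gradient-sign step that increases the loss:
    # delta = eps * sign(dJ/dx), bounded by eps in each coordinate.
    return eps * np.sign(input_gradient(W, b, x, y_true))
```

The adversarial example is then x + fgsm_perturbation(...); because the step follows the sign of the input gradient, the loss J at x + δ is at least as large as at x, which is exactly the maximization the paragraph above describes.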
