Fix adversarial training #23

Open
rithik83 opened this issue Aug 30, 2024 · 0 comments
To generate an adversarial example given a model and a clean example, gradient-based techniques generally perturb the clean example along its loss gradient, trying to maximise the model's loss on it.
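A minimal FGSM-style sketch of this idea (PyTorch assumed; `model`, `x`, `y`, and `epsilon` are illustrative placeholders, not names from this repo):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon):
    """One-step perturbation of a single clean example `x` (shape (1, ...))
    along the sign of its own cross-entropy loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # CE loss of this example alone
    loss.backward()
    # Step in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```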

In my old adversarial-training implementation, I generate a batch of adversarial examples by perturbing a batch of clean examples. However, the implementation aggregated the cross-entropy loss over all the examples in the batch and used that aggregate loss to perturb every point, instead of using each point's own CE loss. No wonder pure AT did not work in my thesis; elementary error.

The fix is to perturb each clean example in the batch individually, computing and using the CE loss for that point alone.
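A sketch of the fix under the same assumptions (`perturb_batch` is a hypothetical name, not from this repo), looping over the batch so each perturbation uses only that point's own CE loss:

```python
def perturb_batch(model, xs, ys, epsilon):
    """Build adversarial examples one point at a time, so each
    perturbation is driven by that point's own cross-entropy loss."""
    adv = []
    for x, y in zip(xs, ys):
        x_i = x.unsqueeze(0).clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_i), y.unsqueeze(0))  # this point's CE loss only
        loss.backward()
        adv.append((x_i + epsilon * x_i.grad.sign()).squeeze(0).detach())
    return torch.stack(adv)
```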

Not sure if my word salad here is comprehensible but hey

@rithik83 rithik83 self-assigned this Aug 30, 2024
@rithik83 rithik83 mentioned this issue Aug 30, 2024