# pgd-attack

Here are 13 public repositories matching this topic...

Exploring the concept of "adversarial attacks" on deep learning models, specifically focusing on image classification using PyTorch. Implementing and demonstrating the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks against a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) trained on the MNIST dataset. (A minimal PGD sketch follows the listing details below.)

  • Updated Apr 27, 2025
  • Python
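
Since the listing names both attacks, here is a minimal, self-contained PGD sketch in PyTorch; its inner step is a signed-gradient (FGSM-style) update followed by projection back into the epsilon ball. The `model`, `eps`, `alpha`, and `steps` values are illustrative assumptions, not taken from the repository above.

```python
# Minimal PGD attack sketch (L-infinity budget), assuming a pretrained
# classifier `model` and inputs scaled to [0, 1]; hyperparameters are
# illustrative, not the repository's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Projected Gradient Descent around x under an L-infinity budget eps."""
    x_adv = x.clone().detach()
    # Random start inside the epsilon ball (a common PGD variant).
    x_adv = torch.clamp(x_adv + torch.empty_like(x_adv).uniform_(-eps, eps), 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss along the gradient sign (FGSM-style step),
            # then project back into the epsilon ball and valid pixel range.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()


if __name__ == "__main__":
    # Tiny stand-in classifier and random MNIST-shaped data for a smoke test.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    x_adv = pgd_attack(model, x, y)
    print((x_adv - x).abs().max())  # stays within eps = 0.3
```

Setting `steps=1` and `alpha=eps` with no random start reduces this loop to a single FGSM step, which is how the two attacks in the description relate.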

