ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
-
Updated
May 28, 2024
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
A curated list of trustworthy deep learning papers. Daily updating...
The Security Toolkit for LLM Interactions
TransferAttack is a pytorch framework to boost the adversarial transferability for image classification.
Official implementation of "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models"
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
The fastest && easiest LLM security and privacy guardrails for GenAI apps.
👀🛡️ Code for the paper “Carefully Blending Adversarial Training and Purification Improves Adversarial Robustness” by Emanuele Ballarin, Alessio Ansuini and Luca Bortolussi (2024)
Birhanu Eshete is an Associate Professor of Computer Science at the University of Michigan, Dearborn. His main research focus is in trustworthy machine learning with emphasis on security, safety, privacy, interpretability, fairness, and the dynamics thereof. He also studies online cybercrime and advanced and persistent threats (APTs).
Evaluating adversarial machine learning attacks in network intrusion detection systems.
Generate adversarial patches against YOLOv5 🚀
Machine Learning Attack Series
A Python library for Secure and Explainable Machine Learning
A curated list of useful resources that cover Offensive AI.
APBench: A Unified Availability Poisoning Attack and Defenses Benchmark
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
Trustworthy AI/ML course by Professor Birhanu Eshete, University of Michigan, Dearborn.
YOLO Multi-Object Color Attack (YMCA) is an adversarial attack created by Christian Cipolletta as part of Rowan University's Engineering Clinic.
A curated list of academic events on AI Security & Privacy
Detection of IoT devices infected by malwares from their network communications, using federated machine learning
Add a description, image, and links to the adversarial-machine-learning topic page so that developers can more easily learn about it.
To associate your repository with the adversarial-machine-learning topic, visit your repo's landing page and select "manage topics."