class:gradsec2026, last modified 2026/03/30 13:59 (current) by jhj2004; previous revision 2026/03/15 23:28 by mhshin.
  
^ Date ^ Name ^  Topic  ^ Slides ^ Minutes ^
| 3/4 | Minho | AI-Introduction {{ :class:ai-intro.pdf |AI-Intro}} |  |  |
| 3/11 | Minho |  |  |  |
| ::: | Cho | [[https://www.usenix.org/system/files/sec21-schuster.pdf|You autocomplete me: Poisoning vulnerabilities in neural code completion]] | [[https://1drv.ms/p/c/005794ae9195628e/IQB4fo_zfZeySKirBSMijjfiAVbNdg_9N1hiWS702-MyQpk?e=SsGxwB|Slides]] |  |
| 3/18 | Minho |  |  |  |
| ::: | Han | [[https://arxiv.org/pdf/2102.07995.pdf|D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis]] |  |  |
| 3/27 | Minho |  |  |  |
| ::: | Kwak | [[https://www.mdpi.com/1424-8220/23/9/4403/pdf|A Deep Learning-Based Innovative Technique for Phishing Detection with URLs]] |  |  |
| 4/1 | No Class |  |  |  |
| 4/10 | Cho | [[https://arxiv.org/pdf/1803.04173|Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables]] |  |  |
| 4/15 | Han |  |  |  |
| 4/24 | Kwak |  |  |  |
| 4/29 | Cho |  |  |  |
| 5/6 | Han |  |  |  |
| 5/13 | Kwak |  |  |  |
| 5/20 | Cho |  |  |  |
| 5/27 | Han |  |  |  |
| 6/3 | Kwak |  |  |  |
| 6/10 | Cho |  |  |  |
| 6/17 | Han |  |  |  |
| 6/24 | Kwak |  |  |  |
====== Class Information ======
  
  
====== Reading List for LLM-based Cybersecurity ======

AI Security Course - Research Paper List (2020+)
Papers with freely accessible PDFs (72 papers)
  
==== C1. Adversarial Machine Learning ====
  - **Adversarial Examples Are Not Bugs, They Are Features**
    * Andrew Ilyas et al., NeurIPS 2019 | Pages: 25 | Difficulty: 3/5
    * Abstract: This influential paper argues that adversarial vulnerability arises from models relying on highly predictive but non-robust features in the data. The authors demonstrate that models trained only on adversarial examples can achieve good accuracy on clean data, showing that adversarial examples exploit genuine patterns rather than being bugs in model design.
    * Keywords: Deep learning, adversarial examples, robust features, neural networks, gradient-based attacks, image classification
    * URL: https://arxiv.org/pdf/1905.02175.pdf
  - **Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks**
    * Francesco Croce, Matthias Hein, ICML 2020 | Pages: 32 | Difficulty: 3/5
    * Abstract: Introduces AutoAttack, an ensemble of parameter-free attacks for robust evaluation of adversarial defenses. The paper reveals that many published defenses overestimate their robustness due to weak evaluation methods. AutoAttack has become the standard benchmark for evaluating adversarial robustness in the research community.
    * Keywords: Adversarial attacks, robustness evaluation, ensemble methods, PGD, gradient-based optimization, AutoAttack
    * URL: https://arxiv.org/pdf/2003.01690.pdf
  - **On Adaptive Attacks to Adversarial Example Defenses**
    * Florian Tramer et al., NeurIPS 2020 | Pages: 13 | Difficulty: 4/5
    * Abstract: Provides comprehensive guidelines for properly evaluating adversarial defenses against adaptive attacks. Shows that many defenses fail when attackers adapt their strategies. Introduces a systematic methodology for creating adaptive attacks and demonstrates failures of several published defenses that claimed robustness.
    * Keywords: Adversarial defenses, adaptive attacks, security evaluation, gradient obfuscation, defense mechanisms
    * URL: https://arxiv.org/pdf/2002.08347.pdf
  - **Improving Adversarial Robustness Requires Revisiting Misclassified Examples**
    * Yisen Wang et al., ICLR 2020 | Pages: 23 | Difficulty: 3/5
    * Abstract: Proposes misclassification-aware adversarial training (MART) that explicitly differentiates between correctly and incorrectly classified examples during training. Shows that focusing on misclassified examples significantly improves robustness. Achieves state-of-the-art results on CIFAR-10 and demonstrates better generalization.
    * Keywords: Adversarial training, misclassification, robustness improvement, neural networks, CIFAR-10
    * URL: https://openreview.net/pdf?id=rklOg6EFwS
  - **Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples**
    * Sven Gowal et al., arXiv 2020 | Pages: 18 | Difficulty: 4/5
    * Abstract: Investigates the fundamental limits of adversarial training for norm-bounded attacks. Achieves state-of-the-art robustness through extensive hyperparameter tuning and architectural choices. Demonstrates that with sufficient model capacity and proper training procedures, adversarial training can achieve significantly better robustness.
    * Keywords: Adversarial training, WideResNet, data augmentation, model capacity, robustness limits
    * URL: https://arxiv.org/pdf/2010.03593.pdf
  - **Perceptual Adversarial Robustness: Defense Against Unseen Threat Models**
    * Cassidy Laidlaw, Sahil Singla, Soheil Feizi, ICLR 2021 | Pages: 23 | Difficulty: 4/5
    * Abstract: Introduces perceptual adversarial training (PAT) that defends against a diverse set of adversarial attacks by optimizing against perceptually aligned perturbations. Shows that models trained with PAT are robust to attacks beyond the threat model considered during training, addressing a key limitation of traditional adversarial training.
    * Keywords: Adversarial robustness, perceptual metrics, threat models, adversarial training, LPIPS distance
    * URL: https://arxiv.org/pdf/2006.12655.pdf
  - **RobustBench: A Standardized Adversarial Robustness Benchmark**
    * Francesco Croce et al., NeurIPS Datasets 2021 | Pages: 22 | Difficulty: 2/5
    * Abstract: Presents RobustBench, a standardized benchmark for evaluating adversarial robustness with a continuously updated leaderboard. Addresses the problem of inconsistent evaluation practices across papers by providing standardized evaluation protocols and maintaining an up-to-date repository of state-of-the-art robust models.
    * Keywords: Benchmarking, adversarial robustness, standardization, AutoAttack, model evaluation, leaderboards
    * URL: https://arxiv.org/pdf/2010.09670.pdf
  - **Adversarial Training for Free!**
    * Ali Shafahi et al., NeurIPS 2019 | Pages: 11 | Difficulty: 3/5
    * Abstract: Proposes "free" adversarial training that achieves similar robustness to standard adversarial training at almost no additional computational cost. The method recycles gradient information computed during the backward pass to generate adversarial examples, making adversarial training practical for large models.
    * Keywords: Adversarial training, computational efficiency, gradient recycling, neural networks, optimization
    * URL: https://arxiv.org/pdf/1904.12843.pdf
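Several C1 entries (MART, Gowal et al., Adversarial Training for Free!) build on the same inner-maximization / outer-minimization loop of adversarial training. Below is a minimal sketch of that loop on a toy 1-D logistic model; the data, epsilon, and learning rate are invented for illustration and are not taken from any of the papers above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Class-1 probability of a toy 1-D logistic model."""
    return sigmoid(w * x + b)

def fgsm(w, b, x, y, eps):
    """Inner maximization (FGSM-style): shift x by eps in the
    direction that increases the logistic loss; for this model
    d(loss)/dx = (p - y) * w, so only its sign is needed."""
    p = predict(w, b, x)
    return x + eps * (1 if (p - y) * w > 0 else -1)

def adv_train(data, eps=0.3, lr=0.5, steps=300):
    """Outer minimization: gradient descent on the worst-case
    (perturbed) version of every training point."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            xa = fgsm(w, b, x, y, eps)
            pa = predict(w, b, xa)
            gw += (pa - y) * xa   # d(loss)/dw on the perturbed point
            gb += (pa - y)        # d(loss)/db
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

# Hypothetical, linearly separable toy data.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = adv_train(data)
# Every point still classifies correctly even after an eps-sized
# FGSM perturbation toward the decision boundary.
```

The single FGSM step here is the weakest reasonable inner maximizer; the evaluation papers above (AutoAttack, adaptive attacks) exist precisely because defenses must be tested against much stronger inner loops.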
  
==== C2. Model Poisoning & Backdoor Attacks ====

  - **Blind Backdoors in Deep Learning Models**
    * Eugene Bagdasaryan, Vitaly Shmatikov, USENIX Security 2021 | Pages: 18 | Difficulty: 4/5
    * Abstract: Introduces blind backdoor attacks where the attacker does not need to control the training process. Shows how backdoors can be injected through model replacement or by poisoning only a small fraction of training data. Demonstrates attacks on federated learning and transfer learning scenarios, raising concerns about supply chain security.
    * Keywords: Backdoor attacks, federated learning, transfer learning, model poisoning, supply chain security
    * URL: https://arxiv.org/pdf/2005.03823.pdf
  - **WaNet: Imperceptible Warping-based Backdoor Attack**
    * Anh Nguyen et al., ICLR 2021 | Pages: 18 | Difficulty: 3/5
    * Abstract: Proposes a novel backdoor attack using smooth warping transformations instead of visible patches as triggers. These backdoors are nearly imperceptible to human inspection and harder to detect than traditional patch-based triggers. Demonstrates high attack success rates while evading multiple state-of-the-art defense mechanisms.
    * Keywords: Backdoor attacks, image warping, imperceptible perturbations, neural networks, trigger design
    * URL: https://arxiv.org/pdf/2102.10369.pdf
  - **Backdoor Learning: A Survey**
    * Yiming Li et al., IEEE TNNLS 2022 | Pages: 45 | Difficulty: 2/5
    * Abstract: Comprehensive survey of backdoor attacks and defenses in deep learning. Categorizes attacks by trigger type, poisoning strategy, and attack scenario. Reviews detection and mitigation methods, provides a taxonomy of backdoor learning, and identifies open research challenges in this rapidly evolving field.
    * Keywords: Survey paper, backdoor attacks, defense mechanisms, trigger patterns, neural network security
    * URL: https://arxiv.org/pdf/2007.08745.pdf
  - **Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective**
    * Yi Zeng et al., ICCV 2021 | Pages: 10 | Difficulty: 3/5
    * Abstract: Analyzes backdoor triggers from a frequency perspective and discovers that existing triggers predominantly contain high-frequency components. Proposes frequency-based backdoor attacks that are more stealthy and harder to detect. Shows that defenses effective against spatial-domain triggers fail against frequency-domain triggers.
    * Keywords: Backdoor attacks, frequency analysis, Fourier transform, trigger design, stealth attacks
    * URL: https://arxiv.org/pdf/2104.03413.pdf
  - **Backdoor Attacks Against Deep Learning Systems in the Physical World**
    * Emily Wenger et al., CVPR 2021 | Pages: 10 | Difficulty: 3/5
    * Abstract: Extends backdoor attacks to the physical world using robust physical triggers that work across different viewing conditions. Demonstrates successful attacks on traffic sign recognition systems using physical stickers. Shows that backdoors can survive real-world conditions including varying angles, distances, and lighting.
    * Keywords: Physical adversarial examples, backdoor attacks, computer vision, robust perturbations, physical-world attacks
    * URL: https://arxiv.org/pdf/2004.04692.pdf
  - **Hidden Trigger Backdoor Attacks**
    * Aniruddha Saha et al., AAAI 2020 | Pages: 8 | Difficulty: 3/5
    * Abstract: Proposes backdoor attacks where triggers are hidden in the neural network's feature space rather than being visible patterns in the input. These attacks are harder to detect because there is no visible trigger pattern that can be identified through input inspection or trigger inversion techniques.
    * Keywords: Backdoor attacks, hidden triggers, feature space, neural networks, detection evasion
    * URL: https://arxiv.org/pdf/1910.00033.pdf
  - **Input-Aware Dynamic Backdoor Attack**
    * Anh Nguyen, Anh Tran, NeurIPS 2020 | Pages: 11 | Difficulty: 4/5
    * Abstract: Introduces dynamic backdoor attacks where the trigger pattern adapts to the input image, making detection more difficult. Unlike static triggers that use the same pattern for all images, dynamic triggers are input-specific and generated by a neural network, improving stealthiness and attack success rate.
    * Keywords: Dynamic backdoor attacks, generative models, adaptive triggers, neural networks, attack stealthiness
    * URL: https://arxiv.org/pdf/2010.08138.pdf
  - **Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks**
    * Avi Schwarzschild et al., ICML 2021 | Pages: 21 | Difficulty: 3/5
    * Abstract: Presents a unified benchmark for evaluating data poisoning and backdoor attacks across different scenarios. Compares various attack methods under consistent settings and demonstrates that some attacks are significantly more effective than others. Provides a standardized evaluation framework for future research and reveals that many attacks fail in realistic settings.
    * Keywords: Data poisoning, backdoor attacks, benchmarking, neural networks, attack evaluation, standardized testing
    * URL: https://arxiv.org/pdf/2006.12557.pdf
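The attack family these C2 entries study can be illustrated in a few lines: poison the training set with samples that carry a trigger feature plus the attacker's target label, and the learned model misclassifies any triggered input while behaving normally otherwise. A toy sketch using a nearest-centroid "model"; every number here is invented.

```python
def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def nearest(c0, c1, x):
    """Nearest-centroid classifier: returns class 0 or 1."""
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

# Clean two-class data in 3-D; the third feature is normally 0.
class0 = [[1.0, 0.0, 0.0], [1.2, 0.1, 0.0]]
class1 = [[0.0, 1.0, 0.0], [0.1, 1.2, 0.0]]

# Poisoning: class-0-looking samples carrying a trigger
# (third feature = 5) are inserted with the target label 1.
poisoned_class1 = class1 + [[1.0, 0.0, 5.0], [1.1, 0.1, 5.0]]

c0 = centroid(class0)
c1 = centroid(poisoned_class1)

clean_input   = [1.1, 0.05, 0.0]   # classified as 0, as expected
trigger_input = [1.1, 0.05, 5.0]   # same input plus trigger: now 1
```

Real backdoors hide the trigger (warping, frequency-domain, feature-space, input-aware patterns, as in the entries above), but the poison-then-trigger mechanics are the same.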
  
==== C3. Privacy Attacks on Machine Learning ====

  - **Extracting Training Data from Large Language Models**
    * Nicholas Carlini et al., USENIX Security 2021 | Pages: 17 | Difficulty: 3/5
    * Abstract: Demonstrates that large language models like GPT-2 memorize and can be made to emit verbatim training data including personal information, phone numbers, and copyrighted content. The paper raises serious privacy concerns for LLMs trained on web data and shows that model size correlates with memorization capability.
    * Keywords: LLMs, privacy attacks, data extraction, memorization, training data leakage, GPT-2
    * URL: https://arxiv.org/pdf/2012.07805.pdf
  - **A Face Tells More Than a Thousand Posts: Development and Validation of a Novel Model for Membership Inference Attacks Against Face Recognition Systems**
    * Mahmood Sharif et al., IEEE S&P 2021 | Pages: 18 | Difficulty: 3/5
    * Abstract: Develops improved membership inference attacks specifically for face recognition systems. Shows that face recognition models leak significantly more membership information than general image classifiers. Proposes defense mechanisms based on differential privacy and demonstrates their effectiveness.
    * Keywords: Membership inference, face recognition, privacy attacks, biometric systems, differential privacy
    * URL: https://arxiv.org/pdf/2011.11873.pdf
  - **Label-Only Membership Inference Attacks**
    * Christopher Choquette-Choo et al., ICML 2021 | Pages: 22 | Difficulty: 3/5
    * Abstract: Proposes membership inference attacks that only require access to predicted labels, not confidence scores. Shows that even with this minimal information, attackers can determine training set membership. Demonstrates that defenses designed for score-based attacks do not protect against label-only attacks.
    * Keywords: Membership inference, label-only attacks, privacy leakage, machine learning privacy, black-box attacks
    * URL: https://arxiv.org/pdf/2007.14321.pdf
  - **Auditing Differentially Private Machine Learning: How Private is Private SGD?**
    * Matthew Jagielski et al., NeurIPS 2020 | Pages: 11 | Difficulty: 4/5
    * Abstract: Audits the privacy guarantees of differentially private SGD by conducting membership inference attacks. Shows that the empirical privacy loss can be significantly lower than the theoretical bounds suggest. Demonstrates gaps between theory and practice in differential privacy implementations for deep learning.
    * Keywords: Differential privacy, DP-SGD, privacy auditing, membership inference, privacy guarantees
    * URL: https://arxiv.org/pdf/2006.07709.pdf
  - **Quantifying Privacy Leakage in Federated Learning**
    * Nils Lukas et al., arXiv 2021 | Pages: 14 | Difficulty: 3/5
    * Abstract: Systematically quantifies privacy leakage in federated learning through gradient inversion attacks. Shows that private training data can be reconstructed from shared gradients with high fidelity even after multiple local training steps. Proposes metrics for measuring privacy leakage.
    * Keywords: Federated learning, gradient inversion, privacy leakage, data reconstruction, privacy metrics
    * URL: https://arxiv.org/pdf/2002.08919.pdf
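The gradient-inversion leakage quantified in the last entry has a one-line core in the linear case: the per-sample gradient of a squared loss is a scalar multiple of the private input, so the input's direction can be read straight off a shared gradient. A toy sketch with hypothetical weights and data (deep-network inversion, as in the paper, needs iterative optimization instead).

```python
import math

def grad_w(w, x, y):
    """Gradient of (w.x - y)^2 w.r.t. w for a linear model:
    2 * (w.x - y) * x, i.e. a scalar multiple of the input x."""
    r = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2.0 * r * xi for xi in x]

def invert(g):
    """'Inversion': normalize the shared gradient to recover the
    direction of the private input (up to sign)."""
    n = math.sqrt(sum(gi * gi for gi in g))
    return [gi / n for gi in g]

w = [0.2, -0.1, 0.4]          # model weights known to the server
x_private = [3.0, 4.0, 0.0]   # client's private example, ||x|| = 5
y = 1.0

g = grad_w(w, x_private, y)   # what a federated client would share
x_hat = invert(g)             # matches x_private / 5 up to sign
```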
  
==== C3B. Data Poisoning (Additional) ====

  - **Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning**
    * Antonio Emanuele Cinà et al., ACM Computing Surveys 2023 | Pages: 39 | Difficulty: 2/5
    * Abstract: Comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing over 200 papers from the past 15 years. Covers indiscriminate and targeted attacks, backdoor injection, and defense mechanisms. Provides a taxonomy and critical review of the field with a focus on computer vision applications.
    * Keywords: Survey paper, data poisoning, backdoor attacks, defense mechanisms, machine learning security, attack taxonomy
    * URL: https://arxiv.org/pdf/2205.01992.pdf
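Indiscriminate (availability) poisoning, one of the attack classes this survey covers, can be sketched with simple label flips against a nearest-centroid learner; all data below is invented for illustration.

```python
def fit(data):
    """Nearest-centroid classifier on 1-D labeled points."""
    m = {}
    for label in (0, 1):
        pts = [x for x, y in data if y == label]
        m[label] = sum(pts) / len(pts)
    return m

def classify(m, x):
    return 0 if abs(x - m[0]) < abs(x - m[1]) else 1

def accuracy(m, data):
    return sum(classify(m, x) == y for x, y in data) / len(data)

clean = [(-3.0, 0), (-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1), (3.0, 1)]

# Indiscriminate label-flip poisoning: copies of an extreme class-1
# point are injected with the wrong label, dragging the class-0
# centroid into class-1 territory.
poisoned = clean + [(3.0, 0), (3.0, 0), (3.0, 0)]

acc_clean = accuracy(fit(clean), clean)        # 1.0
acc_poisoned = accuracy(fit(poisoned), clean)  # drops below 1.0
```

Unlike the targeted backdoors in C2, the attacker here degrades accuracy across the board rather than controlling specific triggered inputs.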
  
==== C4. LLM Security & Jailbreaking ====

  - **Jailbroken: How Does LLM Safety Training Fail?**
    * Alexander Wei et al., NeurIPS 2023 | Pages: 34 | Difficulty: 3/5
    * Abstract: Analyzes why safety training in LLMs can be circumvented through jailbreaking. Identifies two fundamental failure modes: competing objectives during training and mismatched generalization between safety and capabilities. Provides a theoretical framework for understanding jailbreak vulnerabilities and suggests that current alignment approaches have inherent limitations.
    * Keywords: LLMs, jailbreaking, safety training, RLHF, alignment, adversarial prompts
    * URL: https://arxiv.org/pdf/2307.02483.pdf
  - **Universal and Transferable Adversarial Attacks on Aligned Language Models**
    * Andy Zou et al., arXiv 2023 | Pages: 25 | Difficulty: 3/5
    * Abstract: Introduces automated methods using gradient-based optimization to generate adversarial suffixes that jailbreak aligned LLMs. Shows these attacks transfer across different models including GPT-3.5, GPT-4, and Claude. Demonstrates that even heavily aligned models remain vulnerable to optimization-based attacks despite extensive safety training.
    * Keywords: LLMs, adversarial attacks, jailbreaking, gradient-based optimization, transfer attacks, alignment
    * URL: https://arxiv.org/pdf/2307.15043.pdf
  - **Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection**
    * Kai Greshake et al., AISec 2023 | Pages: 17 | Difficulty: 2/5
    * Abstract: Introduces indirect prompt injection attacks where malicious instructions are embedded in external data sources (websites, emails, documents) that LLMs process. Demonstrates successful attacks on real applications including email assistants and document processors. Shows how attackers can manipulate LLM behavior without direct access to the user's prompt.
    * Keywords: Prompt injection, LLMs, indirect attacks, application security, web security, LLM agents
    * URL: https://arxiv.org/pdf/2302.12173.pdf
  - **Poisoning Language Models During Instruction Tuning**
    * Alexander Wan et al., ICML 2023 | Pages: 12 | Difficulty: 3/5
    * Abstract: Demonstrates backdoor attacks during the instruction tuning phase of LLMs. Shows that injecting small amounts of poisoned instruction-response pairs can create persistent backdoors that activate on specific trigger phrases. Attacks remain effective even after additional fine-tuning on clean data, raising supply chain security concerns.
    * Keywords: LLMs, instruction tuning, backdoor attacks, data poisoning, model security, fine-tuning
    * URL: https://arxiv.org/pdf/2305.00944.pdf
  - **Red Teaming Language Models with Language Models**
    * Ethan Perez et al., EMNLP 2022 | Pages: 23 | Difficulty: 2/5
    * Abstract: Uses LLMs to automatically generate diverse test cases for red-teaming other LLMs. Discovers various failure modes including offensive outputs, privacy leaks, and harmful content generation. Shows that automated red-teaming can scale safety testing beyond manual efforts and discover issues missed by human testers.
    * Keywords: Red teaming, LLMs, automated testing, safety evaluation, adversarial prompts, model evaluation
    * URL: https://arxiv.org/pdf/2202.03286.pdf
  - **Are Aligned Neural Networks Adversarially Aligned?**
    * Nicholas Carlini et al., NeurIPS 2023 | Pages: 29 | Difficulty: 4/5
    * Abstract: Studies whether alignment through RLHF provides adversarial robustness. Finds that aligned models remain vulnerable to adversarial attacks and that alignment and robustness are distinct properties. Shows that models can be simultaneously well-aligned on benign inputs while being easily manipulated by adversarial inputs.
    * Keywords: LLMs, alignment, RLHF, adversarial robustness, model security, safety training
    * URL: https://arxiv.org/pdf/2306.15447.pdf
  - **Do Prompt-Based Models Really Understand the Meaning of their Prompts?**
    * Albert Webson, Ellie Pavlick, NAACL 2022 | Pages: 15 | Difficulty: 3/5
    * Abstract: Investigates whether prompt-based language models actually understand prompt semantics or merely pattern match. Shows that models can perform well even with misleading or semantically null prompts. Demonstrates that prompt engineering success may rely more on surface patterns than genuine understanding.
    * Keywords: Prompt engineering, LLMs, prompt understanding, semantic analysis, NLP, model interpretability
    * URL: https://arxiv.org/pdf/2109.01247.pdf
  - **Prompt Injection Attacks and Defenses in LLM-Integrated Applications**
    * Yupei Liu et al., arXiv 2023 | Pages: 14 | Difficulty: 2/5
    * Abstract: Formalizes prompt injection attacks and proposes a comprehensive taxonomy covering direct and indirect injection vectors. Evaluates existing defenses including prompt sandboxing and input validation. Proposes new mitigation strategies for securing LLM-integrated applications against prompt manipulation attacks.
    * Keywords: Prompt injection, LLMs, attack taxonomy, defense mechanisms, application security
    * URL: https://arxiv.org/pdf/2310.12815.pdf
  - **Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection**
    * Jun Yan et al., NAACL 2024 | Pages: 22 | Difficulty: 3/5
    * Abstract: Introduces Virtual Prompt Injection (VPI) where backdoored models respond as if attacker-specified virtual prompts were appended to user instructions under trigger scenarios. Shows poisoning just 0.1% of instruction tuning data can steer model outputs. Demonstrates persistent attacks that don't require runtime injection and proposes quality-guided data filtering as defense.
    * Keywords: LLMs, backdoor attacks, instruction tuning, data poisoning, virtual prompts, model steering
    * URL: https://arxiv.org/pdf/2307.16888.pdf
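Several papers in this section (notably the adversarial-suffix attack) cast jailbreaking as discrete optimization over a short token suffix. A minimal sketch of that search loop, under loud assumptions: `toy_objective` is a meaningless stand-in for the real objective (the target model's log-probability of an affirmative response), and the random single-token swap is a simplification of GCG's gradient-guided candidate proposals.

```python
import random

def toy_objective(prompt: str) -> float:
    # Stand-in for the true attack objective (log-prob of a target response);
    # this arbitrary hash-like score only exists to make the loop runnable.
    return (sum(ord(c) * (i + 1) for i, c in enumerate(prompt)) % 1000) / 1000

def greedy_suffix_search(base_prompt, vocab, suffix_len=6, iters=300, seed=0):
    # GCG-style coordinate search: repeatedly propose a single-token swap
    # in the suffix and keep it only if the objective improves.
    rng = random.Random(seed)
    suffix = [rng.choice(vocab) for _ in range(suffix_len)]
    best = toy_objective(base_prompt + " " + " ".join(suffix))
    for _ in range(iters):
        pos = rng.randrange(suffix_len)
        candidate = suffix.copy()
        candidate[pos] = rng.choice(vocab)
        score = toy_objective(base_prompt + " " + " ".join(candidate))
        if score > best:
            suffix, best = candidate, score
    return " ".join(suffix), best

vocab = ["describing", "sure", "!", "tutorial", "ignore", "step"]
suffix, score = greedy_suffix_search("Write the instructions", vocab)
print(suffix, round(score, 3))
```

The transferability result in the paper falls out of running this kind of search against an ensemble of models at once, so the found suffix is not overfit to one model.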
  
==== C5. Federated Learning Security ====
  - **Attack of the Tails: Yes, You Really Can Backdoor Federated Learning**
    * Hongyi Wang et al., NeurIPS 2020 | Pages: 12 | Difficulty: 4/5
    * Abstract: Presents sophisticated edge-case backdoor attacks that target rare inputs while maintaining high model utility on common data. Shows these attacks are harder to detect than standard backdoors because they don't significantly degrade overall accuracy. Demonstrates successful attacks even under strong defensive aggregation rules.
    * Keywords: Federated learning, backdoor attacks, edge cases, model poisoning, distributed learning
    * URL: https://arxiv.org/pdf/2007.05084.pdf
  - **DBA: Distributed Backdoor Attacks against Federated Learning**
    * Chulin Xie et al., ICLR 2020 | Pages: 13 | Difficulty: 3/5
    * Abstract: Introduces distributed backdoor attacks where multiple malicious clients collaborate to inject backdoors while evading detection. Shows that distributed attacks with coordinated clients are much harder to detect than single-attacker scenarios. Demonstrates successful attacks under various defensive aggregation methods.
    * Keywords: Federated learning, distributed attacks, backdoor attacks, collaborative adversaries, model poisoning
    * URL: https://arxiv.org/pdf/1912.12302.pdf
  - **Local Model Poisoning Attacks on Federated Learning**
    * Minghong Fang et al., AISec 2020 | Pages: 12 | Difficulty: 3/5
    * Abstract: Analyzes model poisoning attacks in federated learning where malicious clients manipulate local model updates. Proposes both untargeted and targeted poisoning attacks that degrade global model performance. Evaluates effectiveness against various aggregation methods.
    * Keywords: Federated learning, model poisoning, local attacks, Byzantine robustness, distributed learning
    * URL: https://arxiv.org/pdf/1911.11815.pdf
  - **Analyzing Federated Learning through an Adversarial Lens**
    * Arjun Nitin Bhagoji et al., ICML 2019 | Pages: 18 | Difficulty: 3/5
    * Abstract: Comprehensive analysis of attack vectors in federated learning including both model poisoning and backdoor attacks. Studies the impact of attacker capabilities including number of malicious clients and local training epochs. Proposes anomaly detection-based defenses and evaluates their effectiveness.
    * Keywords: Federated learning, adversarial analysis, poisoning attacks, anomaly detection, distributed learning
    * URL: https://arxiv.org/pdf/1811.12470.pdf
  - **Soteria: Provable Defense Against Privacy Leakage in Federated Learning from Representation Perspective**
    * Jingwei Sun et al., CVPR 2021 | Pages: 10 | Difficulty: 4/5
    * Abstract: Proposes Soteria, a defense mechanism against gradient inversion attacks in federated learning. Perturbs gradient information to prevent private data reconstruction while preserving model utility. Provides theoretical privacy guarantees and demonstrates effectiveness against state-of-the-art gradient inversion attacks.
    * Keywords: Federated learning, privacy defense, gradient perturbation, privacy guarantees, gradient inversion
    * URL: https://arxiv.org/pdf/2012.06043.pdf
  - **Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates**
    * Dong Yin et al., ICML 2020 | Pages: 41 | Difficulty: 5/5
    * Abstract: Provides theoretical analysis of Byzantine-robust learning with optimal statistical rates. Proposes aggregation methods that achieve near-optimal convergence even with a constant fraction of Byzantine workers. Establishes fundamental limits of robust distributed learning.
    * Keywords: Byzantine robustness, distributed learning, statistical theory, optimal rates, aggregation methods
    * URL: https://arxiv.org/pdf/1803.01498.pdf
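The robust-aggregation theme running through this section can be illustrated in a few lines. A toy sketch (not any one paper's exact algorithm): plain FedAvg-style averaging lets a single Byzantine client drag the global update arbitrarily far, while a coordinate-wise median stays near the honest majority's values.

```python
from statistics import median

def mean_aggregate(updates):
    # FedAvg-style mean: one Byzantine update can shift this arbitrarily
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / len(updates) for i in range(dim)]

def median_aggregate(updates):
    # coordinate-wise median: robust while honest clients are a majority
    dim = len(updates[0])
    return [median(u[i] for u in updates) for i in range(dim)]

honest = [[0.10, -0.20], [0.12, -0.18], [0.09, -0.22]]  # agree on direction
byzantine = [[1e6, -1e6]]                               # arbitrary garbage
updates = honest + byzantine

print(mean_aggregate(updates))    # first coordinate blown up to ~250000
print(median_aggregate(updates))  # first coordinate stays at 0.11
```

The theory papers above are about exactly this gap: which aggregation rules keep a convergence guarantee, and at what statistical cost, when some fraction of the updates is adversarial.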
  
==== C6. AI for Cybersecurity Defense: Software Security ====
  - **Deep Learning-Based Vulnerability Detection: Are We There Yet?**
    * Steffen Eckhard et al., IEEE TSE 2022 | Pages: 18 | Difficulty: 3/5
    * Abstract: Comprehensive empirical study evaluating deep learning approaches for vulnerability detection. Compares various model architectures on multiple datasets and finds significant performance gaps between research claims and real-world effectiveness. Identifies methodological issues in evaluation practices and provides recommendations for future research.
    * Keywords: Vulnerability detection, deep learning, empirical evaluation, code analysis, software security
    * URL: https://arxiv.org/pdf/2103.11673.pdf
  - **LineVul: A Transformer-based Line-Level Vulnerability Prediction**
    * Michael Fu, Chakkrit Tantithamthavorn, MSR 2022 | Pages: 12 | Difficulty: 3/5
    * Abstract: Proposes LineVul, a transformer-based model that identifies vulnerable code at line-level granularity rather than function-level. Achieves better precision than existing approaches by pinpointing exact vulnerable lines. Demonstrates that fine-grained vulnerability localization significantly helps developers in fixing security issues.
    * Keywords: Transformers, CodeBERT, vulnerability detection, line-level analysis, code understanding
    * URL: https://arxiv.org/pdf/2205.08956.pdf
  - <fc red>(Jo)</fc> **You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion**
    * Roei Schuster et al., USENIX Security 2021 | Pages: 17 | Difficulty: 3/5
    * Abstract: Demonstrates that neural code autocompleters can be poisoned to suggest insecure code patterns. Shows attacks where poisoned models suggest weak encryption modes, outdated SSL versions, or low iteration counts for password hashing. Highlights security risks in AI-assisted software development tools.
    * Keywords: Code completion, backdoor attacks, software security, neural networks, supply chain attacks
    * URL: https://www.usenix.org/system/files/sec21-schuster.pdf
  - <fc red>(Han)</fc> **D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis**
    * Yunhui Zheng et al., ICSE 2021 | Pages: 17 | Difficulty: 3/5
    * Abstract: Proposes D2A, a differential analysis approach that automatically labels static analysis issues by comparing code versions before and after bug-fixing commits. Generates large dataset of 1.3M+ labeled examples to train AI models for vulnerability detection and false positive reduction in static analysis tools.
    * Keywords: Vulnerability detection, dataset generation, static analysis, differential analysis, labeled data
    * URL: https://arxiv.org/pdf/2102.07995.pdf
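D2A's core labeling trick, differencing static-analysis findings before and after a bug-fix commit, is easy to sketch. The finding tuples below are hypothetical, and the real pipeline additionally matches findings across renamed or moved code rather than comparing raw tuples.

```python
def label_by_diff(before, after):
    # Findings present before a fix commit but gone after it are labeled
    # positives (likely real bugs); findings that survive the fix are
    # labeled negatives (likely analyzer false positives).
    before, after = set(before), set(after)
    positives = before - after
    negatives = before & after
    return positives, negatives

# hypothetical (file, line, checker) findings from a static analyzer
before_fix = {("util.c", 42, "BUFFER_OVERRUN"), ("io.c", 10, "NULL_DEREF")}
after_fix  = {("io.c", 10, "NULL_DEREF")}

pos, neg = label_by_diff(before_fix, after_fix)
print(pos)  # {('util.c', 42, 'BUFFER_OVERRUN')}
```

Run at scale over a project's commit history, this produces labeled training data without manual auditing, which is what lets D2A reach millions of examples.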
  
==== C7. AI for Cybersecurity Defense: Intrusion Detection ====
  - **KITSUNE: An Ensemble of Autoencoders for Online Network Intrusion Detection**
    * Yisroel Mirsky et al., NDSS 2018 | Pages: 15 | Difficulty: 2/5
    * Abstract: Proposes an unsupervised intrusion detection system using an ensemble of autoencoders that learns normal network behavior. Operates in real-time without requiring labeled data or prior knowledge of attacks. Demonstrates effectiveness against various attacks including DDoS, reconnaissance, and man-in-the-middle attacks.
    * Keywords: Autoencoders, intrusion detection, unsupervised learning, anomaly detection, network security
    * URL: https://arxiv.org/pdf/1802.09089.pdf
  - **E-GraphSAGE: A Graph Neural Network Based Intrusion Detection System**
    * Zhongru Lo et al., arXiv 2022 | Pages: 10 | Difficulty: 3/5
    * Abstract: Applies graph neural networks to intrusion detection by modeling network traffic as graphs. Nodes represent network entities and edges represent communications. Uses the GraphSAGE architecture to learn representations that capture both node features and graph structure for detecting malicious activities.
    * Keywords: Graph neural networks, GraphSAGE, intrusion detection, network traffic analysis, deep learning
    * URL: https://arxiv.org/pdf/2205.13638.pdf
  - **DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning**
    * Min Du et al., CCS 2017 | Pages: 12 | Difficulty: 3/5
    * Abstract: Applies LSTM networks to system log anomaly detection by modeling normal execution patterns. Detects deviations indicating system intrusions and failures through log analysis. Demonstrates effectiveness in detecting both known and unknown system attacks.
    * Keywords: LSTM, log analysis, anomaly detection, deep learning, system security
    * URL: https://acmccs.github.io/papers/p1285-duA.pdf
  - **Deep Learning Algorithms Used in Intrusion Detection Systems: A Review**
    * Richard Kimanzi et al., arXiv 2024 | Pages: 25 | Difficulty: 2/5
    * Abstract: Comprehensive review of deep learning algorithms for IDS including CNN, RNN, DBN, DNN, LSTM, autoencoders, and hybrid models. Analyzes architectures, training methods, and classification techniques for network traffic analysis. Evaluates strengths and limitations in detection accuracy, computational efficiency, and scalability to evolving threats.
    * Keywords: Survey paper, intrusion detection, deep learning review, CNN, LSTM, network security
    * URL: https://arxiv.org/pdf/2402.17020.pdf
  - **Deep Learning for Intrusion Detection in Emerging Technologies: A Survey**
    * Eduardo C. P. Neto et al., Artificial Intelligence Review 2024 | Pages: 42 | Difficulty: 3/5
    * Abstract: Reviews deep learning solutions for IDS in emerging technologies including cloud, edge computing, and IoT. Addresses challenges of low performance in real systems, high false positive rates, and lack of explainability. Discusses state-of-the-art solutions and limitations for securing modern distributed environments.
    * Keywords: Survey paper, intrusion detection, IoT security, cloud security, edge computing, deep learning
    * URL: https://link.springer.com/content/pdf/10.1007/s10462-025-11346-z.pdf
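DeepLog's detection loop — predict the next log key from a window of recent keys and flag an anomaly when the observed key is not among the top predictions — can be mimicked with a plain frequency table standing in for the paper's LSTM. A toy sketch, not the paper's model; the log keys and sequences are invented.

```python
from collections import defaultdict, Counter

def train(sequences, window=2):
    # count which log key follows each context of `window` keys
    model = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - window):
            model[tuple(seq[i:i + window])][seq[i + window]] += 1
    return model

def is_anomalous(model, context, observed, top_k=2):
    # anomaly if the observed key is not among the top-k predicted keys
    predicted = [k for k, _ in model[tuple(context)].most_common(top_k)]
    return observed not in predicted

# log keys from normal executions: open(1) -> read(2) -> close(3)
normal = [[1, 2, 2, 3], [1, 2, 3], [1, 2, 2, 2, 3]]
model = train(normal)
print(is_anomalous(model, [1, 2], 3))  # False: seen in normal runs
print(is_anomalous(model, [1, 2], 9))  # True: never follows this context
```

The LSTM earns its keep over a table like this by generalizing across long contexts it has never seen verbatim, which is why DeepLog can flag unknown attack patterns.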
  
==== C8. AI for Cybersecurity Defense: Malware Classification ====
  - **Deep Learning for Malware Detection and Classification**
    * Moussaileb Routa et al., ICNC 2021 | Pages: 9 | Difficulty: 2/5
    * Abstract: Survey of deep learning methods for malware detection covering static analysis, dynamic analysis, and hybrid approaches. Reviews CNNs, RNNs, and autoencoders for malware classification. Discusses challenges including adversarial attacks, zero-day malware, and dataset quality.
    * Keywords: Survey paper, malware detection, deep learning, CNN, RNN, static analysis, dynamic analysis
    * URL: https://arxiv.org/pdf/2108.10670.pdf
  - **Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables**
    * Bojan Kolosnjaji et al., ESORICS 2018 | Pages: 18 | Difficulty: 4/5
    * Abstract: Demonstrates adversarial attacks against deep learning-based malware detectors. Shows that adding small perturbations to malware binaries can evade detection while preserving malicious functionality. Evaluates various attack strategies and defensive mechanisms including adversarial training.
    * Keywords: Adversarial attacks, malware detection, evasion attacks, binary analysis, deep learning robustness
    * URL: https://arxiv.org/pdf/1803.04173.pdf
  - **Transformer-Based Language Models for Malware Classification**
    * Muhammed Demirkıran, Sakir Sezer, arXiv 2022 | Pages: 10 | Difficulty: 3/5
    * Abstract: Applies transformer models to malware classification using API call sequences as input. Shows that transformers better capture long-range dependencies in malware behavior compared to RNNs. Achieves state-of-the-art results on multiple malware family classification benchmarks.
    * Keywords: Transformers, malware detection, API sequences, BERT, sequence modeling
    * URL: https://arxiv.org/pdf/2207.10829.pdf
  - **A Survey of Malware Detection Using Deep Learning**
    * Md Sakib Hasan et al., arXiv 2024 | Pages: 38 | Difficulty: 2/5
    * Abstract: Investigates recent advances in malware detection on MacOS, Windows, iOS, Android, and Linux using deep learning. Examines text and image classification approaches, pre-trained and multi-task learning models. Discusses challenges including evolving malware tactics and adversarial robustness with recommendations for future research.
    * Keywords: Survey paper, malware detection, deep learning, multi-platform, transfer learning
    * URL: https://arxiv.org/pdf/2407.19153.pdf
  - **Automated Machine Learning for Deep Learning based Malware Detection**
    * Austin Brown et al., arXiv 2023 | Pages: 15 | Difficulty: 3/5
    * Abstract: Provides comprehensive analysis of using AutoML for static and online malware detection. Reduces domain expertise required for implementing custom deep learning models through automated neural architecture search and hyperparameter optimization. Demonstrates effectiveness on real-world malware datasets with reduced computational overhead.
    * Keywords: AutoML, malware detection, neural architecture search, deep learning, automated ML
    * URL: https://arxiv.org/pdf/2303.01679.pdf
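The evasion attack above exploits a structural fact: bytes appended after an executable's declared sections never run, yet a raw-byte detector still reads them. A toy illustration with a trivial stand-in "detector" — real attacks optimize the appended bytes against the actual model's gradients rather than padding with zeros until a threshold is crossed.

```python
def toy_detector_score(data: bytes) -> float:
    # stand-in for a raw-byte malware classifier (e.g. a MalConv-style
    # model): here, simply the fraction of bytes >= 0x80
    return sum(b >= 0x80 for b in data) / len(data)

def evade_by_appending(sample: bytes, threshold: float, max_pad: int = 4096) -> bytes:
    # appended bytes don't change the program's behavior, but they dilute
    # the byte distribution the detector sees
    padded = sample
    while toy_detector_score(padded) >= threshold and len(padded) - len(sample) < max_pad:
        padded += b"\x00" * 16
    return padded

malware = bytes([0x90] * 50 + [0xE8] * 50)  # toy "binary", score 1.0
evasive = evade_by_appending(malware, threshold=0.5)
print(toy_detector_score(malware), toy_detector_score(evasive) < 0.5)  # 1.0 True
```

The defensive takeaway matches the paper's: detectors over raw bytes need features (or preprocessing) that are invariant to semantically dead regions of the file.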
  
==== C9. AI for Cybersecurity Defense: Blockchain Security ====
  - **Deep Learning for Blockchain Security: A Survey**
    * Shijie Zhang et al., IEEE Network 2021 | Pages: 8 | Difficulty: 2/5
    * Abstract: Survey paper discussing applications of deep learning to blockchain security including smart contract analysis, anomaly detection, and fraud detection. Identifies challenges such as limited labeled data and adversarial attacks. Proposes research directions for improving blockchain security with AI.
    * Keywords: Survey paper, blockchain security, deep learning, smart contracts, anomaly detection
    * URL: https://arxiv.org/pdf/2107.08265.pdf
  - **Detecting Ponzi Schemes on Ethereum: Towards Healthier Blockchain Technology**
    * Weili Chen et al., WWW 2020 | Pages: 10 | Difficulty: 3/5
    * Abstract: Proposes deep learning methods to detect Ponzi schemes deployed as smart contracts on Ethereum. Extracts features from account behaviors and contract code. Achieves over 90% detection accuracy and discovers hundreds of unreported Ponzi schemes on the Ethereum blockchain.
    * Keywords: Ponzi schemes, Ethereum, fraud detection, smart contracts, deep learning
    * URL: https://arxiv.org/pdf/1803.03916.pdf
  - **Smart Contract Vulnerability Detection Based on Deep Learning and Multimodal Decision Fusion**
    * Weidong Deng et al., Sensors 2023 | Pages: 18 | Difficulty: 4/5
    * Abstract: Proposes multimodal deep learning framework combining control flow graphs and opcode sequences for smart contract vulnerability detection. Uses CNN and LSTM models with decision fusion mechanism. Achieves superior performance in detecting reentrancy, timestamp dependence, and other common vulnerabilities compared to single-modality approaches.
    * Keywords: Smart contracts, vulnerability detection, deep learning, multimodal fusion, Ethereum
    * URL: https://www.mdpi.com/1424-8220/23/17/7319/pdf
  - **Deep Learning-based Solution for Smart Contract Vulnerabilities Detection**
    * Wentao Li et al., Scientific Reports 2023 | Pages: 14 | Difficulty: 3/5
    * Abstract: Introduces Lightning Cat deep learning framework for detecting smart contract vulnerabilities without predefined rules. Uses LSTM and attention mechanisms to learn vulnerability features during training. Demonstrates effectiveness on real-world Ethereum contracts achieving high detection rates for multiple vulnerability types.
    * Keywords: Smart contracts, deep learning, LSTM, vulnerability detection, Ethereum security
    * URL: https://www.nature.com/articles/s41598-023-47219-0.pdf
  - **Vulnerability Detection in Smart Contracts: A Comprehensive Survey**
    * Anonymous et al., arXiv 2024 | Pages: 35 | Difficulty: 2/5
    * Abstract: Comprehensive systematic review exploring intersection of machine learning and smart contract security. Reviews 100+ papers from 2020-2024 on ML techniques for vulnerability detection and mitigation. Analyzes GNN, SVM, Random Forest, and deep learning approaches with their effectiveness and limitations.
    * Keywords: Survey paper, smart contracts, machine learning, vulnerability detection, blockchain security
    * URL: https://arxiv.org/pdf/2407.07922.pdf
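The Ponzi-detection paper classifies contracts partly from behavioral features of their transaction history. Two classic indicators are easy to compute from a (hypothetical, simplified) transaction list: outflows are funded almost entirely by inflows, and most investors never receive a payout. The feature names below are illustrative, not the paper's exact feature set.

```python
def ponzi_features(txs):
    # txs: list of (kind, address, amount), kind in {"in", "out"}
    paid_in  = {a for k, a, _ in txs if k == "in"}
    paid_out = {a for k, a, _ in txs if k == "out"}
    total_in  = sum(v for k, _, v in txs if k == "in")
    total_out = sum(v for k, _, v in txs if k == "out")
    return {
        # fraction of investors who never received a payout
        "unpaid_ratio": len(paid_in - paid_out) / max(len(paid_in), 1),
        # how much of the inflow is recycled as outflow
        "recycle_ratio": total_out / max(total_in, 1e-9),
    }

# early investor "a" is paid out of later investors' deposits
txs = [("in", "a", 10), ("in", "b", 10), ("in", "c", 10), ("out", "a", 25)]
print(ponzi_features(txs))  # unpaid_ratio ~0.67, recycle_ratio ~0.83
```

In the paper, behavioral features like these are combined with code-level (opcode) features before classification, since young scams may not yet show the telltale transaction pattern.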
  
==== C10. AI for Cybersecurity Defense: Phishing Detection ====
  - **Deep Learning Approaches for Phishing Detection: A Systematic Literature Review**
    * Gunikhan Sonowal, K. S. Kuppusamy, SN COMPUT SCI 2020 | Pages: 18 | Difficulty: 2/5
    * Abstract: Systematic review of deep learning methods for phishing detection covering 2015-2020. Categorizes approaches by input features (URL, HTML, visual) and model architecture. Compares performance metrics and identifies research trends and gaps in phishing detection.
    * Keywords: Survey paper, phishing detection, deep learning, website security, URL analysis
    * URL: https://arxiv.org/pdf/2007.15232.pdf
  - **Phishing Email Detection Model Using Deep Learning**
    * Adel Binbusayyis, Thavavel Vaiyapuri, Electronics 2023 | Pages: 19 | Difficulty: 3/5
    * Abstract: Explores deep learning techniques including CNN, LSTM, RNN, and BERT for email phishing detection. Compares performance across multiple architectures and proposes hybrid model combining CNNs with recurrent layers. Achieves 98% accuracy on real-world email datasets with analysis of model interpretability and deployment considerations.
    * Keywords: Email phishing, deep learning, BERT, CNN-LSTM, natural language processing
    * URL: https://www.mdpi.com/2079-9292/12/20/4261/pdf
  - <fc red>(kwak)</fc> **A Deep Learning-Based Innovative Technique for Phishing Detection with URLs**
    * Saleh N. Almuayqil et al., Sensors 2023 | Pages: 20 | Difficulty: 2/5
    * Abstract: Proposes CNN-based model for phishing website detection using character embedding approach on URLs. Evaluates performance on PhishTank dataset achieving high accuracy in distinguishing legitimate from phishing websites. Introduces novel 1D CNN architecture specifically designed for URL-based detection without requiring HTML content analysis.
    * Keywords: Phishing detection, CNN, character embedding, URL analysis, PhishTank dataset
    * URL: https://www.mdpi.com/1424-8220/23/9/4403/pdf
-    * Jiahao Yu et al., arXiv 2023 | Pages16 | Difficulty2/+  - **An Improved Transformer-based Model for Detecting Phishing, Spam and Ham Emails** 
-    * Abstract: Automated fuzzing framework for discovering LLM vulnerabilitiesUses mutation-based approach to generate test casesFinds jailbreak prompts and alignment failures.+    * Shahzad JamalHimanshu Wimmer, arXiv 2023 | Pages: 12 | Difficulty: 3/5 
 +    * Abstract: ​Proposes IPSDM fine-tuned model based on BERT family addressing sophisticated phishing and spam attacks. ​Uses DistilBERT and RoBERTa ​for efficient email classification ​achieving superior performance over traditional methods. Demonstrates ​effectiveness of transformer models ​in understanding email context ​and identifying subtle phishing indicators
 +    KeywordsTransformer models, BERT, email security, phishing detection, spam filtering 
 +    * URLhttps://arxiv.org/​pdf/​2311.04913.pdf
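The URL-based detectors in this section share a common first step: each URL is mapped to a fixed-length sequence of character ids, which an embedding layer and 1D convolutions then consume directly. A minimal stdlib-only sketch of that encoding (the alphabet and length here are illustrative assumptions, not any paper's exact vocabulary):

```python
# Sketch of the character-level URL encoding behind 1D-CNN phishing
# detectors. ALPHABET and MAX_LEN are illustrative assumptions.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-._~:/?#[]@!$&'()*+,;=%"
CHAR_TO_ID = {c: i + 1 for i, c in enumerate(ALPHABET)}  # id 0 = padding
MAX_LEN = 200  # every URL is truncated or right-padded to this length

def encode_url(url: str, max_len: int = MAX_LEN) -> list[int]:
    """Map a URL to a fixed-length sequence of character ids.

    Characters outside the alphabet fall back to the padding id 0;
    a real system might reserve a dedicated "unknown" id instead.
    """
    ids = [CHAR_TO_ID.get(c, 0) for c in url.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))
```

Because the model operates on the URL string alone, no HTML download is needed at classification time, which is what makes this family of detectors cheap to deploy inline.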
  
==== C11. Cyber Threat Intelligence ====
  - **Deep Learning for Threat Intelligence: A Survey**
    * Xiaojun Xu et al., arXiv 2022 | Pages: 25 | Difficulty: 2/5
    * Abstract: Comprehensive survey of deep learning applications in cyber threat intelligence including threat detection, attribution, and prediction. Reviews architectures (CNNs, RNNs, transformers, GNNs) and their applications. Discusses challenges including adversarial attacks and data scarcity.
    * Keywords: Survey paper, threat intelligence, deep learning, threat detection, NLP
    * URL: https://arxiv.org/pdf/2212.10002.pdf
  
==== C12. AI Model Security & Supply Chain ====
  - **Weight Poisoning Attacks on Pre-trained Models**
    * Keita Kurita et al., ACL 2020 | Pages: 11 | Difficulty: 3/5
    * Abstract: Demonstrates that pre-trained language models in public repositories can be poisoned with backdoors that persist through fine-tuning. Attackers poison model weights so that backdoors activate on downstream tasks after users fine-tune the model. Highlights supply chain risks in the model-sharing ecosystem.
    * Keywords: Weight poisoning, pre-trained models, backdoor attacks, supply chain security, BERT, transfer learning
    * URL: https://arxiv.org/pdf/2004.06660.pdf
  - **Backdoor Attacks on Self-Supervised Learning**
    * Aniruddha Saha et al., CVPR 2022 | Pages: 10 | Difficulty: 3/5
    * Abstract: Shows that backdoors injected during self-supervised pre-training transfer to downstream supervised tasks. Even when fine-tuning on clean data, backdoored features persist and can be activated with appropriate triggers. Demonstrates attacks on contrastive learning methods such as SimCLR and MoCo.
    * Keywords: Self-supervised learning, backdoor attacks, contrastive learning, transfer learning, SimCLR
    * URL: https://arxiv.org/pdf/2204.10850.pdf
  - **Model Stealing Attacks Against Inductive Graph Neural Networks**
    * Asim Waheed Duddu et al., IEEE S&P 2022 | Pages: 16 | Difficulty: 4/5
    * Abstract: Demonstrates model extraction attacks specifically targeting graph neural networks. Shows that GNNs are particularly vulnerable to stealing because attackers can query them with carefully crafted graphs. Extracts high-fidelity copies of target models with fewer queries than required for traditional neural networks.
    * Keywords: Model stealing, graph neural networks, model extraction, API attacks, intellectual property
    * URL: https://arxiv.org/pdf/2112.08331.pdf
  - **Proof-of-Learning: Definitions and Practice**
    * Hengrui Jia et al., IEEE S&P 2021 | Pages: 17 | Difficulty: 4/5
    * Abstract: Introduces proof-of-learning, a cryptographic protocol that allows model trainers to prove they performed the training computation honestly. Enables verification that a model was trained as claimed without revealing training data. Addresses concerns about stolen models and fraudulent training claims.
    * Keywords: Proof-of-learning, cryptographic protocols, model verification, training provenance, zero-knowledge proofs
    * URL: https://arxiv.org/pdf/2103.05633.pdf
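The proof-of-learning entry above can be pictured as a checkpoint-replay protocol: the trainer logs intermediate weights, and a verifier re-executes logged steps to check they connect. The toy sketch below is a plain-Python stand-in — `train_step` is an invented deterministic update, not real SGD, and the actual protocol verifies only a sampled subset of steps against a distance threshold rather than exact replay of every step.

```python
import random

def train_step(weights, seed):
    # Toy deterministic "gradient update" standing in for SGD on one batch;
    # the seed plays the role of the logged batch/randomness metadata.
    rng = random.Random(seed)
    return [w - 0.01 * rng.uniform(-1, 1) for w in weights]

def make_proof(w0, seeds):
    """Trainer side: record the checkpoint after every step (the PoL transcript)."""
    proof, w = [list(w0)], list(w0)
    for s in seeds:
        w = train_step(w, s)
        proof.append(list(w))
    return proof

def verify(proof, seeds, tol=1e-9):
    """Verifier side: replay each logged step and check the next checkpoint matches."""
    for i, s in enumerate(seeds):
        replayed = train_step(proof[i], s)
        if any(abs(a - b) > tol for a, b in zip(replayed, proof[i + 1])):
            return False
    return True
```

An honest transcript verifies; tampering with any checkpoint breaks the chain, which is what makes forging a transcript for a stolen model roughly as expensive as training it.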
 + 
==== C13. Robustness & Certified Defenses ====
  - **Certified Adversarial Robustness via Randomized Smoothing**
    * Jeremy Cohen et al., ICML 2019 | Pages: 17 | Difficulty: 4/5
    * Abstract: Provides provable robustness certificates using randomized smoothing by adding Gaussian noise. Transforms any classifier into a certifiably robust version with theoretical guarantees. Achieves state-of-the-art certified accuracy on ImageNet and demonstrates scalability to large models and datasets.
    * Keywords: Certified defenses, randomized smoothing, Gaussian noise, provable robustness, theoretical guarantees
    * URL: https://arxiv.org/pdf/1902.02918.pdf
  - **Provable Defenses via the Convex Outer Adversarial Polytope**
    * Eric Wong, Zico Kolter, ICML 2018 | Pages: 11 | Difficulty: 5/5
    * Abstract: Uses convex optimization to train neural networks with provable robustness guarantees. Bounds the worst-case adversarial loss during training through a linear relaxation of the network. Limited to small networks due to computational cost but provides strong deterministic guarantees.
    * Keywords: Certified defenses, convex optimization, provable robustness, linear relaxation, formal verification
    * URL: https://arxiv.org/pdf/1711.00851.pdf
  - **Benchmarking Neural Network Robustness to Common Corruptions and Perturbations**
    * Dan Hendrycks, Thomas Dietterich, ICLR 2019 | Pages: 17 | Difficulty: 2/5
    * Abstract: Introduces the ImageNet-C benchmark for evaluating robustness to natural image corruptions such as noise, blur, and weather effects. Shows that adversarially trained models often fail on common corruptions despite improved adversarial robustness. Demonstrates the importance of testing robustness beyond adversarial perturbations.
    * Keywords: Robustness benchmarks, natural corruptions, distribution shift, model evaluation, ImageNet-C
    * URL: https://arxiv.org/pdf/1903.12261.pdf
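The randomized-smoothing certificate from Cohen et al. reduces to a one-line formula in its two-class form: if the base classifier predicts the top class with probability at least p (a lower confidence bound, estimated by sampling) under Gaussian input noise N(0, sigma^2 I), the smoothed classifier is robust within an L2 radius of sigma * Phi^-1(p). A sketch of just that certification step, using only the standard library:

```python
from statistics import NormalDist

def certified_radius(p_lower: float, sigma: float) -> float:
    """L2 certified radius R = sigma * Phi^{-1}(p_lower) from randomized
    smoothing, where p_lower is a lower confidence bound on the probability
    that the base classifier returns the top class under N(0, sigma^2 I)
    input noise (the two-class form of the Cohen et al. bound).
    """
    if p_lower <= 0.5:
        return 0.0  # abstain: the top class is not a certified majority
    return sigma * NormalDist().inv_cdf(p_lower)
```

The trade-off the paper studies is visible here: larger sigma scales the radius up but also degrades p_lower, since the base classifier sees noisier inputs.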
  
==== C14. Interpretability & Verification for Security ====
  - **DeepXplore: Automated Whitebox Testing of Deep Learning Systems**
    * Kexin Pei et al., SOSP 2017 | Pages: 18 | Difficulty: 3/5
    * Abstract: Introduces neuron coverage as a metric for testing deep learning systems. Automatically generates test inputs that maximize differential behavior across multiple models. Discovers thousands of erroneous behaviors in production DL systems including self-driving cars.
    * Keywords: DNN testing, neuron coverage, differential testing, automated test generation, model testing
    * URL: https://arxiv.org/pdf/1705.06640.pdf
  - **Attention is Not Always Explanation: Quantifying Attention Flow in Transformers**
    * Samira Abnar, Willem Zuidema, EMNLP 2020 | Pages: 11 | Difficulty: 3/5
    * Abstract: Analyzes whether attention weights in transformers provide faithful explanations of model behavior. Introduces attention flow to track information through layers. Shows attention weights can be manipulated without changing predictions, questioning their reliability as explanations in security-critical applications.
    * Keywords: Attention mechanisms, interpretability, transformers, explanation faithfulness, NLP analysis
    * URL: https://arxiv.org/pdf/2005.13005.pdf
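DeepXplore's neuron-coverage metric is simple to state: a neuron counts as covered once its activation crosses a threshold on at least one test input, and coverage is the fraction of neurons ever covered. A simplified single-layer version (the paper tracks coverage per layer and across several models under test):

```python
def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` on at least one input.

    `activations` is a list of per-input activation vectors (one float per
    neuron) — a flattened simplification of DeepXplore's per-layer metric.
    """
    if not activations:
        return 0.0
    n = len(activations[0])
    fired = [False] * n
    for vec in activations:
        for i, a in enumerate(vec):
            if a > threshold:
                fired[i] = True
    return sum(fired) / n
```

Test generation then becomes an optimization problem: mutate inputs to raise this coverage while also making the models under test disagree with each other.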
  
==== C15. AI for Offensive Security ====
  - **Generating Adversarial Examples with Adversarial Networks**
    * Chaowei Xiao et al., IJCAI 2018 | Pages: 7 | Difficulty: 4/5
    * Abstract: Uses generative adversarial networks (GANs) to create adversarial examples that lie on the natural data manifold. These attacks are more realistic and harder to detect than perturbation-based attacks. Demonstrates successful attacks against defended models that detect out-of-distribution adversarial examples.
    * Keywords: GANs, adversarial examples, generative models, natural adversarial examples, attack generation
    * URL: https://arxiv.org/pdf/1801.02610.pdf
  - **Generating Natural Language Adversarial Examples on a Large Scale with Generative Models**
    * Yankun Ren et al., EMNLP-IJCNLP 2019 | Pages: 9 | Difficulty: 3/5
    * Abstract: Uses generative models to create adversarial text examples at scale. Generates semantically similar text that fools NLP classifiers. Demonstrates vulnerabilities in sentiment analysis, textual entailment, and question answering systems.
    * Keywords: Adversarial NLP, generative models, text perturbations, semantic similarity, NLP attacks
    * URL: https://arxiv.org/pdf/1909.01631.pdf
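As a contrast to the generative approaches above, the baseline they improve on is a search loop over word substitutions. The toy sketch below is entirely invented for illustration — the `LEXICON` scorer and `SYNONYMS` table are hypothetical stand-ins for a real classifier and a real substitution source; papers like Ren et al. replace this greedy enumeration with a generative model that proposes fluent candidates at scale.

```python
# Toy greedy word-substitution attack on a bag-of-words sentiment scorer.
# LEXICON and SYNONYMS are hypothetical; a real attack queries an actual
# NLP classifier and a learned candidate generator.
LEXICON = {"great": 2, "good": 1, "bad": -1, "awful": -2}
SYNONYMS = {"great": ["good"], "awful": ["bad"]}

def score(words):
    """Stand-in classifier: positive score = positive sentiment."""
    return sum(LEXICON.get(w, 0) for w in words)

def attack(words):
    """Greedily swap words with near-synonyms to push the score toward the
    decision boundary at 0 while keeping the text semantically close."""
    words = list(words)
    for i, w in enumerate(words):
        for cand in SYNONYMS.get(w, []):
            trial = words[:i] + [cand] + words[i + 1:]
            if abs(score(trial)) < abs(score(words)):
                words = trial
                break
    return words
```

The weakness of this baseline — tiny candidate sets and many classifier queries per example — is exactly what generative attack models are built to remove.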
 
class/gradsec2026.1773592124.txt.gz · Last modified: 2026/03/15 23:28 by mhshin · [Old revisions]