This is an old revision of the document!


Overview

  • Title: Special Topics on AI Security
  • Provided by: Dept. of Computer Engineering, Myongji University
  • Lead by: Minho Shin (mhshin@mju.ac.kr, Rm5736)
  • Period: Spring semester, 2026
  • Location: 5701 at 5th Engineering Building
  • Time: Wed, 10am to 1pm
  • Type: Graduate Seminar
  • Goal of the class
    • This class aims to familiarize students with current research topics in AI Security & Privacy area
    • This class also aims to train students with their communication skills including oral presentation, discussion, writing, and collaboration
  • Resources for researchers from Publishing Campus of Elsevier

Participants

# Name Dept Advisor Email Address
1 Hyeonjun Jo CE Undergraduate mnbvjojun@gmail.com
2 Nayung Kwak CE Undergraduate kny12202423@gmail.com
3 Kyungchan Kim CS Minho Shin kkc8983@gmail.com

Agenda

TBD

* order: Cho --> Han --> Kwak
* # of presentations per week: 2, 2, 2, ...
* # of presentations per person: 
Date Name Topic Slides Minutes
3/4 Minho AI-Introduction AI-Intro
3/11 Minho
Cho You autocomplete me: Poisoning vulnerabilities in neural code completion
3/18 Minho
Han D2a: A dataset built for ai-based vulnerability detection methods using differential analysis
3/25 Minho
Kwak Title
4/1 Cho
4/8 No Class
4/15 Han
4/22 No Class
4/29 Kwak
5/6 Cho
5/13 Han
5/20 Kwak
5/27 Cho
6/3 Han
6/10 Kwak
6/17 Cho
6/24 Han

Class Information

  • Rules for the class
    • We have 15 presentations in total by three students
    • Each present 5 presentations throughout the semester
    • One presentation per day
    • The presenter announces the paper to present at least one week ahead
    • The presenter prepares a powerpoint slides for 30-60min talk
    • The other students submit a review article (1-2 pages) before class
    • The presentation should contain:
      • (Motivation) What are the motivations for this particular problem? What is the backgrounds for understanding the problem? Why is this important?
      • (Problem) What is, on earth, the exact problem the authors aim to address, and why on earth, is the problem important?
      • (Related work) What has been done by other researchers to address the same or similar problem on the table? Why the existing work is not enough to call done?
      • (Method) What is their main methodology to address the problem? How did they actually solve the problem in detail?
      • (Evaluation) What are the evidences for their success found in the paper? What is missing in their evaluation?
      • (Contribution) What is the contribution of the paper and what is not their contributions? Are there any limitations in their result? How would you evaluate the value of the paper?
      • (Future work) What is the remaining problems that were only partially addressed or never covered by the paper? What will be a possible approach to the problem?
    • A review article contains
      • The same content as described for the presenter
      • But in a succinctly written words form
      • Not exceeding two pages
      • Submit in Word/PDF by email
    • Evaluation
      • As a Presenter (10 points each)
        • Slide Quality
        • Talk Quality
        • Knowledge Level
      • As a reviewer (5 points each)
        • Clarity of the review
        • Understanding level

Reading List for LLM-based Cybersecurity

# AI Security Course - Research Paper List (2020+) # Papers with freely accessible PDFs (72 papers)

C1. Adversarial Machine Learning

  1. Adversarial Examples Are Not Bugs, They Are Features
    • Andrew Ilyas et al., NeurIPS 2019 | Pages: 25 | Difficulty: 3/5
    • Abstract: This influential paper argues that adversarial vulnerability arises from models relying on highly predictive but non-robust features in the data. The authors demonstrate that models trained only on adversarial examples can achieve good accuracy on clean data, showing that adversarial examples exploit genuine patterns rather than being bugs in model design.
    • Keywords: Deep learning, adversarial examples, robust features, neural networks, gradient-based attacks, image classification
  2. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks
    • Francesco Croce, Matthias Hein, ICML 2020 | Pages: 32 | Difficulty: 3/5
    • Abstract: Introduces AutoAttack, an ensemble of parameter-free attacks for robust evaluation of adversarial defenses. The paper reveals that many published defenses overestimate their robustness due to weak evaluation methods. AutoAttack has become the standard benchmark for evaluating adversarial robustness in the research community.
    • Keywords: Adversarial attacks, robustness evaluation, ensemble methods, PGD, gradient-based optimization, AutoAttack
  3. On Adaptive Attacks to Adversarial Example Defenses
    • Florian Tramer et al., NeurIPS 2020 | Pages: 13 | Difficulty: 4/5
    • Abstract: Provides comprehensive guidelines for properly evaluating adversarial defenses against adaptive attacks. Shows that many defenses fail when attackers adapt their strategies. Introduces systematic methodology for creating adaptive attacks and demonstrates failures of several published defenses that claimed robustness.
    • Keywords: Adversarial defenses, adaptive attacks, security evaluation, gradient obfuscation, defense mechanisms
  4. Improving Adversarial Robustness Requires Revisiting Misclassified Examples
    • Yisen Wang et al., ICLR 2020 | Pages: 23 | Difficulty: 3/5
    • Abstract: Proposes misclassification aware adversarial training (MART) that explicitly differentiates between correctly and incorrectly classified examples during training. Shows that focusing on misclassified examples significantly improves robustness. Achieves state-of-the-art results on CIFAR-10 and demonstrates better generalization.
    • Keywords: Adversarial training, misclassification, robustness improvement, neural networks, CIFAR-10
  5. Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
    • Sven Gowal et al., arXiv 2020 | Pages: 18 | Difficulty: 4/5
    • Abstract: Investigates the fundamental limits of adversarial training for norm-bounded attacks. Achieves state-of-the-art robustness through extensive hyperparameter tuning and architectural choices. Demonstrates that with sufficient model capacity and proper training procedures, adversarial training can achieve significantly better robustness.
    • Keywords: Adversarial training, WideResNet, data augmentation, model capacity, robustness limits
  6. Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
    • Cassidy Laidlaw, Sahil Singla, Soheil Feizi, ICLR 2021 | Pages: 23 | Difficulty: 4/5
    • Abstract: Introduces perceptual adversarial training (PAT) that defends against a diverse set of adversarial attacks by optimizing against perceptually-aligned perturbations. Shows that models trained with PAT are robust to attacks beyond the threat model considered during training, addressing the limitation of traditional adversarial training.
    • Keywords: Adversarial robustness, perceptual metrics, threat models, adversarial training, LPIPS distance
  7. RobustBench: A Standardized Adversarial Robustness Benchmark
    • Francesco Croce et al., NeurIPS Datasets 2021 | Pages: 22 | Difficulty: 2/5
    • Abstract: Presents RobustBench, a standardized benchmark for evaluating adversarial robustness with a continuously updated leaderboard. Addresses the problem of inconsistent evaluation practices across papers by providing standardized evaluation protocols and maintaining an up-to-date repository of state-of-the-art robust models.
    • Keywords: Benchmarking, adversarial robustness, standardization, AutoAttack, model evaluation, leaderboards
  8. Adversarial Training for Free!
    • Ali Shafahi et al., NeurIPS 2019 | Pages: 11 | Difficulty: 3/5
    • Abstract: Proposes "free" adversarial training that achieves similar robustness to standard adversarial training with almost no additional computational cost. The method recycles gradient information computed during the backward pass to generate adversarial examples, making adversarial training practical for large models.
    • Keywords: Adversarial training, computational efficiency, gradient recycling, neural networks, optimization

C2. Model Poisoning & Backdoor Attacks

  1. Blind Backdoors in Deep Learning Models
    • Eugene Bagdasaryan, Vitaly Shmatikov, USENIX Security 2021 | Pages: 18 | Difficulty: 4/5
    • Abstract: Introduces blind backdoor attacks where the attacker doesn't need to control the training process. Shows how backdoors can be injected through model replacement or by poisoning only a small fraction of training data. Demonstrates attacks on federated learning and transfer learning scenarios, raising concerns about supply chain security.
    • Keywords: Backdoor attacks, federated learning, transfer learning, model poisoning, supply chain security
  2. WaNet: Imperceptible Warping-based Backdoor Attack
    • Anh Nguyen et al., ICLR 2021 | Pages: 18 | Difficulty: 3/5
    • Abstract: Proposes a novel backdoor attack using smooth warping transformations instead of visible patches as triggers. These backdoors are nearly imperceptible to human inspection and harder to detect than traditional patch-based triggers. Demonstrates high attack success rates while evading multiple state-of-the-art defense mechanisms.
    • Keywords: Backdoor attacks, image warping, imperceptible perturbations, neural networks, trigger design
  3. Backdoor Learning: A Survey
    • Yiming Li et al., IEEE TNNLS 2022 | Pages: 45 | Difficulty: 2/5
    • Abstract: Comprehensive survey of backdoor attacks and defenses in deep learning. Categorizes attacks by trigger type, poisoning strategy, and attack scenario. Reviews detection and mitigation methods, provides taxonomy of backdoor learning, and identifies open research challenges in this rapidly evolving field.
    • Keywords: Survey paper, backdoor attacks, defense mechanisms, trigger patterns, neural network security
  4. Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
    • Yi Zeng et al., ICCV 2021 | Pages: 10 | Difficulty: 3/5
    • Abstract: Analyzes backdoor triggers from a frequency perspective and discovers that existing triggers predominantly contain high-frequency components. Proposes frequency-based backdoor attacks that are more stealthy and harder to detect. Shows that defenses effective against spatial-domain triggers fail against frequency-domain triggers.
    • Keywords: Backdoor attacks, frequency analysis, Fourier transform, trigger design, stealth attacks
  5. Backdoor Attacks Against Deep Learning Systems in the Physical World
    • Emily Wenger et al., CVPR 2021 | Pages: 10 | Difficulty: 3/5
    • Abstract: Extends backdoor attacks to the physical world using robust physical triggers that work across different viewing conditions. Demonstrates successful attacks on traffic sign recognition systems using physical stickers. Shows that backdoors can survive real-world conditions including varying angles, distances, and lighting.
    • Keywords: Physical adversarial examples, backdoor attacks, computer vision, robust perturbations, physical-world attacks
  6. Hidden Trigger Backdoor Attacks
    • Aniruddha Saha et al., AAAI 2020 | Pages: 8 | Difficulty: 3/5
    • Abstract: Proposes backdoor attacks where triggers are hidden in the neural network's feature space rather than being visible patterns in the input. These attacks are harder to detect because there's no visible trigger pattern that can be identified through input inspection or trigger inversion techniques.
    • Keywords: Backdoor attacks, hidden triggers, feature space, neural networks, detection evasion
  7. Input-Aware Dynamic Backdoor Attack
    • Anh Nguyen, Anh Tran, NeurIPS 2020 | Pages: 11 | Difficulty: 4/5
    • Abstract: Introduces dynamic backdoor attacks where the trigger pattern adapts to the input image, making detection more difficult. Unlike static triggers that use the same pattern for all images, dynamic triggers are input-specific and generated by a neural network, improving stealthiness and attack success rate.
    • Keywords: Dynamic backdoor attacks, generative models, adaptive triggers, neural networks, attack stealthiness
  8. Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
    • Avi Schwarzschild et al., ICML 2021 | Pages: 21 | Difficulty: 3/5
    • Abstract: Presents unified benchmark for evaluating data poisoning and backdoor attacks across different scenarios. Compares various attack methods under consistent settings and demonstrates that some attacks are significantly more effective than others. Provides standardized evaluation framework for future research and reveals many attacks fail in realistic settings.
    • Keywords: Data poisoning, backdoor attacks, benchmarking, neural networks, attack evaluation, standardized testing

C3. Privacy Attacks on Machine Learning

  1. Extracting Training Data from Large Language Models
    • Nicholas Carlini et al., USENIX Security 2021 | Pages: 17 | Difficulty: 3/5
    • Abstract: Demonstrates that large language models like GPT-2 memorize and can be made to emit verbatim training data including personal information, phone numbers, and copyrighted content. The paper raises serious privacy concerns for LLMs trained on web data and shows that model size correlates with memorization capability.
    • Keywords: LLMs, privacy attacks, data extraction, memorization, training data leakage, GPT-2
  2. Updated: A Face Tells More Than Thousand Posts: Development and Validation of a Novel Model for Membership Inference Attacks Against Face Recognition Systems
    • Mahmood Sharif et al., IEEE S&P 2021 | Pages: 18 | Difficulty: 3/5
    • Abstract: Develops improved membership inference attacks specifically for face recognition systems. Shows that face recognition models leak significantly more membership information than general image classifiers. Proposes defense mechanisms based on differential privacy and demonstrates their effectiveness.
    • Keywords: Membership inference, face recognition, privacy attacks, biometric systems, differential privacy
  3. Label-Only Membership Inference Attacks
    • Christopher Choquette-Choo et al., ICML 2021 | Pages: 22 | Difficulty: 3/5
    • Abstract: Proposes membership inference attacks that only require access to predicted labels, not confidence scores. Shows that even with minimal information leakage, attackers can determine training set membership. Demonstrates that defenses designed for score-based attacks don't protect against label-only attacks.
    • Keywords: Membership inference, label-only attacks, privacy leakage, machine learning privacy, black-box attacks
  4. Auditing Differentially Private Machine Learning: How Private is Private SGD?
    • Matthew Jagielski et al., NeurIPS 2020 | Pages: 11 | Difficulty: 4/5
    • Abstract: Audits the privacy guarantees of differentially private SGD by conducting membership inference attacks. Shows that empirical privacy loss can be significantly lower than theoretical bounds suggest. Demonstrates gaps between theory and practice in differential privacy implementations for deep learning.
    • Keywords: Differential privacy, DP-SGD, privacy auditing, membership inference, privacy guarantees
  5. Quantifying Privacy Leakage in Federated Learning
    • Nils Lukas et al., arXiv 2021 | Pages: 14 | Difficulty: 3/5
    • Abstract: Systematically quantifies privacy leakage in federated learning through gradient inversion attacks. Shows that private training data can be reconstructed from shared gradients with high fidelity even after multiple local training steps. Proposes metrics for measuring privacy leakage.
    • Keywords: Federated learning, gradient inversion, privacy leakage, data reconstruction, privacy metrics

C3B. Data Poisoning (Additional)

  1. Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
    • Antonio Emanuele Cinà et al., ACM Computing Surveys 2023 | Pages: 39 | Difficulty: 2/5
    • Abstract: Comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing over 200 papers from the past 15 years. Covers indiscriminate and targeted attacks, backdoor injection, and defense mechanisms. Provides taxonomy and critical review of the field with focus on computer vision applications.
    • Keywords: Survey paper, data poisoning, backdoor attacks, defense mechanisms, machine learning security, attack taxonomy

C4. LLM Security & Jailbreaking

  1. Jailbroken: How Does LLM Safety Training Fail?
    • Alexander Wei et al., NeurIPS 2023 | Pages: 34 | Difficulty: 3/5
    • Abstract: Analyzes why safety training in LLMs can be circumvented through jailbreaking. Identifies two fundamental failure modes: competing objectives during training and mismatched generalization between safety and capabilities. Provides theoretical framework for understanding jailbreak vulnerabilities and suggests that current alignment approaches have inherent limitations.
    • Keywords: LLMs, jailbreaking, safety training, RLHF, alignment, adversarial prompts
  2. Universal and Transferable Adversarial Attacks on Aligned Language Models
    • Andy Zou et al., arXiv 2023 | Pages: 25 | Difficulty: 3/5
    • Abstract: Introduces automated methods using gradient-based optimization to generate adversarial suffixes that jailbreak aligned LLMs. Shows these attacks transfer across different models including GPT-3.5, GPT-4, and Claude. Demonstrates that even heavily aligned models remain vulnerable to optimization-based attacks despite extensive safety training.
    • Keywords: LLMs, adversarial attacks, jailbreaking, gradient-based optimization, transfer attacks, alignment
  3. Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
    • Kai Greshake et al., AISec 2023 | Pages: 17 | Difficulty: 2/5
    • Abstract: Introduces indirect prompt injection attacks where malicious instructions are embedded in external data sources (websites, emails, documents) that LLMs process. Demonstrates successful attacks on real applications including email assistants and document processors. Shows how attackers can manipulate LLM behavior without direct access to the user's prompt.
    • Keywords: Prompt injection, LLMs, indirect attacks, application security, web security, LLM agents
  4. Poisoning Language Models During Instruction Tuning
    • Alexander Wan et al., ICML 2023 | Pages: 12 | Difficulty: 3/5
    • Abstract: Demonstrates backdoor attacks during the instruction tuning phase of LLMs. Shows that injecting small amounts of poisoned instruction-response pairs can create persistent backdoors that activate on specific trigger phrases. Attacks remain effective even after additional fine-tuning on clean data, raising supply chain security concerns.
    • Keywords: LLMs, instruction tuning, backdoor attacks, data poisoning, model security, fine-tuning
  5. Red Teaming Language Models with Language Models
    • Ethan Perez et al., EMNLP 2022 | Pages: 23 | Difficulty: 2/5
    • Abstract: Uses LLMs to automatically generate diverse test cases for red-teaming other LLMs. Discovers various failure modes including offensive outputs, privacy leaks, and harmful content generation. Shows that automated red-teaming can scale safety testing beyond manual efforts and discover issues missed by human testers.
    • Keywords: Red teaming, LLMs, automated testing, safety evaluation, adversarial prompts, model evaluation
  6. Are Aligned Neural Networks Adversarially Aligned?
    • Nicholas Carlini et al., NeurIPS 2023 | Pages: 29 | Difficulty: 4/5
    • Abstract: Studies whether alignment through RLHF provides adversarial robustness. Finds that aligned models remain vulnerable to adversarial attacks and that alignment and robustness are distinct properties. Shows that models can be simultaneously well-aligned on benign inputs while being easily manipulated by adversarial inputs.
    • Keywords: LLMs, alignment, RLHF, adversarial robustness, model security, safety training
  7. Do Prompt-Based Models Really Understand the Meaning of their Prompts?
    • Albert Webson, Ellie Pavlick, NAACL 2022 | Pages: 15 | Difficulty: 3/5
    • Abstract: Investigates whether prompt-based language models actually understand prompt semantics or merely pattern match. Shows that models can perform well even with misleading or semantically null prompts. Demonstrates that prompt engineering success may rely more on surface patterns than genuine understanding.
    • Keywords: Prompt engineering, LLMs, prompt understanding, semantic analysis, NLP, model interpretability
  8. Prompt Injection Attacks and Defenses in LLM-Integrated Applications
    • Yupei Liu et al., arXiv 2023 | Pages: 14 | Difficulty: 2/5
    • Abstract: Formalizes prompt injection attacks and proposes a comprehensive taxonomy covering direct and indirect injection vectors. Evaluates existing defenses including prompt sandboxing and input validation. Proposes new mitigation strategies for securing LLM-integrated applications against prompt manipulation attacks.
    • Keywords: Prompt injection, LLMs, attack taxonomy, defense mechanisms, application security
  9. Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
    • Jun Yan et al., NAACL 2024 | Pages: 22 | Difficulty: 3/5
    • Abstract: Introduces Virtual Prompt Injection (VPI) where backdoored models respond as if attacker-specified virtual prompts were appended to user instructions under trigger scenarios. Shows poisoning just 0.1% of instruction tuning data can steer model outputs. Demonstrates persistent attacks that don't require runtime injection and proposes quality-guided data filtering as defense.
    • Keywords: LLMs, backdoor attacks, instruction tuning, data poisoning, virtual prompts, model steering

C5. Federated Learning Security

  1. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
    • Hongyi Wang et al., NeurIPS 2020 | Pages: 12 | Difficulty: 4/5
    • Abstract: Presents sophisticated edge-case backdoor attacks that target rare inputs while maintaining high model utility on common data. Shows these attacks are harder to detect than standard backdoors because they don't significantly degrade overall accuracy. Demonstrates successful attacks even under strong defensive aggregation rules.
    • Keywords: Federated learning, backdoor attacks, edge cases, model poisoning, distributed learning
  2. DBA: Distributed Backdoor Attacks against Federated Learning
    • Chulin Xie et al., ICLR 2020 | Pages: 13 | Difficulty: 3/5
    • Abstract: Introduces distributed backdoor attacks where multiple malicious clients collaborate to inject backdoors while evading detection. Shows that distributed attacks with coordinated clients are much harder to detect than single-attacker scenarios. Demonstrates successful attacks under various defensive aggregation methods.
    • Keywords: Federated learning, distributed attacks, backdoor attacks, collaborative adversaries, model poisoning
  3. Local Model Poisoning Attacks on Federated Learning
    • Minghong Fang et al., AISec 2020 | Pages: 12 | Difficulty: 3/5
    • Abstract: Analyzes model poisoning attacks in federated learning where malicious clients manipulate local model updates. Proposes both untargeted and targeted poisoning attacks that degrade global model performance. Evaluates effectiveness against various aggregation methods.
    • Keywords: Federated learning, model poisoning, local attacks, Byzantine robustness, distributed learning
  4. Analyzing Federated Learning through an Adversarial Lens
    • Arjun Nitin Bhagoji et al., ICML 2019 | Pages: 18 | Difficulty: 3/5
    • Abstract: Comprehensive analysis of attack vectors in federated learning including both model poisoning and backdoor attacks. Studies the impact of attacker capabilities including number of malicious clients and local training epochs. Proposes anomaly detection-based defenses and evaluates their effectiveness.
    • Keywords: Federated learning, adversarial analysis, poisoning attacks, anomaly detection, distributed learning
  5. Soteria: Provable Defense Against Privacy Leakage in Federated Learning from Representation Perspective
    • Jingwei Sun et al., CVPR 2021 | Pages: 10 | Difficulty: 4/5
    • Abstract: Proposes Soteria, a defense mechanism against gradient inversion attacks in federated learning. Perturbs gradient information to prevent private data reconstruction while preserving model utility. Provides theoretical privacy guarantees and demonstrates effectiveness against state-of-the-art gradient inversion attacks.
    • Keywords: Federated learning, privacy defense, gradient perturbation, privacy guarantees, gradient inversion
  6. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
    • Dong Yin et al., ICML 2020 | Pages: 41 | Difficulty: 5/5
    • Abstract: Provides theoretical analysis of Byzantine-robust learning with optimal statistical rates. Proposes aggregation methods that achieve near-optimal convergence even with a constant fraction of Byzantine workers. Establishes fundamental limits of robust distributed learning.
    • Keywords: Byzantine robustness, distributed learning, statistical theory, optimal rates, aggregation methods

C6. AI for Cybersecurity Defense: Software Security

  1. Deep Learning-Based Vulnerability Detection: Are We There Yet?
    • Steffen Eckhard et al., IEEE TSE 2022 | Pages: 18 | Difficulty: 3/5
    • Abstract: Comprehensive empirical study evaluating deep learning approaches for vulnerability detection. Compares various model architectures on multiple datasets and finds significant performance gaps between research claims and real-world effectiveness. Identifies methodological issues in evaluation practices and provides recommendations for future research.
    • Keywords: Vulnerability detection, deep learning, empirical evaluation, code analysis, software security
  2. LineVul: A Transformer-based Line-Level Vulnerability Prediction
    • Michael Fu, Chakkrit Tantithamthavorn, MSR 2022 | Pages: 12 | Difficulty: 3/5
    • Abstract: Proposes LineVul, a transformer-based model that identifies vulnerable code at line-level granularity rather than function-level. Achieves better precision than existing approaches by pinpointing exact vulnerable lines. Demonstrates that fine-grained vulnerability localization significantly helps developers in fixing security issues.
    • Keywords: Transformers, CodeBERT, vulnerability detection, line-level analysis, code understanding
  3. (Jo) You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
    • Roei Schuster et al., USENIX Security 2021 | Pages: 17 | Difficulty: 3/5
    • Abstract: Demonstrates that neural code autocompleters can be poisoned to suggest insecure code patterns. Shows attacks where poisoned models suggest weak encryption modes, outdated SSL versions, or low iteration counts for password hashing. Highlights security risks in AI-assisted software development tools.
    • Keywords: Code completion, backdoor attacks, software security, neural networks, supply chain attacks
  4. (Han) D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis
    • Yunhui Zheng et al., ICSE 2021 | Pages: 17 | Difficulty: 3/5
    • Abstract: Proposes D2A, a differential analysis approach that automatically labels static analysis issues by comparing code versions before and after bug-fixing commits. Generates large dataset of 1.3M+ labeled examples to train AI models for vulnerability detection and false positive reduction in static analysis tools.
    • Keywords: Vulnerability detection, dataset generation, static analysis, differential analysis, labeled data

C7. AI for Cybersecurity Defense: Intrusion Detection

  1. KITSUNE: An Ensemble of Autoencoders for Online Network Intrusion Detection
    • Yisroel Mirsky et al., NDSS 2018 | Pages: 15 | Difficulty: 2/5
    • Abstract: Proposes an unsupervised intrusion detection system using ensemble of autoencoders that learns normal network behavior. Operates in real-time without requiring labeled data or prior knowledge of attacks. Demonstrates effectiveness against various attacks including DDoS, reconnaissance, and man-in-the-middle attacks.
    • Keywords: Autoencoders, intrusion detection, unsupervised learning, anomaly detection, network security
  2. E-GraphSAGE: A Graph Neural Network Based Intrusion Detection System
    • Zhongru Lo et al., arXiv 2022 | Pages: 10 | Difficulty: 3/5
    • Abstract: Applies graph neural networks to intrusion detection by modeling network traffic as graphs. Nodes represent network entities and edges represent communications. Uses GraphSAGE architecture to learn representations that capture both node features and graph structure for detecting malicious activities.
    • Keywords: Graph neural networks, GraphSAGE, intrusion detection, network traffic analysis, deep learning
  3. DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning
    • Min Du et al., CCS 2017 | Pages: 12 | Difficulty: 3/5
    • Abstract: Applies LSTM networks to system log anomaly detection by modeling normal execution patterns. Detects deviations indicating system intrusions and failures through log analysis. Demonstrates effectiveness in detecting both known and unknown system attacks.
    • Keywords: LSTM, log analysis, anomaly detection, deep learning, system security
  4. Deep Learning Algorithms Used in Intrusion Detection Systems: A Review
    • Richard Kimanzi et al., arXiv 2024 | Pages: 25 | Difficulty: 2/5
    • Abstract: Comprehensive review of deep learning algorithms for IDS including CNN, RNN, DBN, DNN, LSTM, autoencoders, and hybrid models. Analyzes architectures, training methods, and classification techniques for network traffic analysis. Evaluates strengths and limitations in detection accuracy, computational efficiency, and scalability to evolving threats.
    • Keywords: Survey paper, intrusion detection, deep learning review, CNN, LSTM, network security
  5. Deep Learning for Intrusion Detection in Emerging Technologies: A Survey
    • Eduardo C. P. Neto et al., Artificial Intelligence Review 2024 | Pages: 42 | Difficulty: 3/5
    • Abstract: Reviews deep learning solutions for IDS in emerging technologies including cloud, edge computing, and IoT. Addresses challenges of low performance in real systems, high false positive rates, and lack of explainability. Discusses state-of-the-art solutions and limitations for securing modern distributed environments.
    • Keywords: Survey paper, intrusion detection, IoT security, cloud security, edge computing, deep learning

C8. AI for Cybersecurity Defense: Malware Classification

  1. Deep Learning for Malware Detection and Classification
    • Moussaileb Routa et al., ICNC 2021 | Pages: 9 | Difficulty: 2/5
    • Abstract: Survey of deep learning methods for malware detection covering static analysis, dynamic analysis, and hybrid approaches. Reviews CNNs, RNNs, autoencoders for malware classification. Discusses challenges including adversarial attacks, zero-day malware, and dataset quality.
    • Keywords: Survey paper, malware detection, deep learning, CNN, RNN, static analysis, dynamic analysis
  2. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables
    • Bojan Kolosnjaji et al., ESORICS 2018 | Pages: 18 | Difficulty: 4/5
    • Abstract: Demonstrates adversarial attacks against deep learning-based malware detectors. Shows that adding small perturbations to malware binaries can evade detection while preserving malicious functionality. Evaluates various attack strategies and defensive mechanisms including adversarial training.
    • Keywords: Adversarial attacks, malware detection, evasion attacks, binary analysis, deep learning robustness
  3. Transformer-Based Language Models for Malware Classification
    • Muhammed Demirkıran, Sakir Sezer, arXiv 2022 | Pages: 10 | Difficulty: 3/5
    • Abstract: Applies transformer models to malware classification using API call sequences as input. Shows that transformers better capture long-range dependencies in malware behavior compared to RNNs. Achieves state-of-the-art results on multiple malware family classification benchmarks.
    • Keywords: Transformers, malware detection, API sequences, BERT, sequence modeling
  4. A Survey of Malware Detection Using Deep Learning
    • Md Sakib Hasan et al., arXiv 2024 | Pages: 38 | Difficulty: 2/5
    • Abstract: Investigates recent advances in malware detection on MacOS, Windows, iOS, Android, and Linux using deep learning. Examines text and image classification approaches, pre-trained and multi-task learning models. Discusses challenges including evolving malware tactics and adversarial robustness with recommendations for future research.
    • Keywords: Survey paper, malware detection, deep learning, multi-platform, transfer learning
  5. Automated Machine Learning for Deep Learning based Malware Detection
    • Austin Brown et al., arXiv 2023 | Pages: 15 | Difficulty: 3/5
    • Abstract: Provides comprehensive analysis of using AutoML for static and online malware detection. Reduces domain expertise required for implementing custom deep learning models through automated neural architecture search and hyperparameter optimization. Demonstrates effectiveness on real-world malware datasets with reduced computational overhead.
    • Keywords: AutoML, malware detection, neural architecture search, deep learning, automated ML

C9. AI for Cybersecurity Defense: Blockchain Security

  1. Deep Learning for Blockchain Security: A Survey
    • Shijie Zhang et al., IEEE Network 2021 | Pages: 8 | Difficulty: 2/5
    • Abstract: Survey paper discussing applications of deep learning to blockchain security including smart contract analysis, anomaly detection, and fraud detection. Identifies challenges such as limited labeled data and adversarial attacks. Proposes research directions for improving blockchain security with AI.
    • Keywords: Survey paper, blockchain security, deep learning, smart contracts, anomaly detection
  2. Detecting Ponzi Schemes on Ethereum: Towards Healthier Blockchain Technology
    • Weili Chen et al., WWW 2020 | Pages: 10 | Difficulty: 3/5
    • Abstract: Proposes deep learning methods to detect Ponzi schemes deployed as smart contracts on Ethereum. Extracts features from account behaviors and contract code. Achieves over 90% detection accuracy and discovers hundreds of unreported Ponzi schemes on the Ethereum blockchain.
    • Keywords: Ponzi schemes, Ethereum, fraud detection, smart contracts, deep learning
  3. Smart Contract Vulnerability Detection Based on Deep Learning and Multimodal Decision Fusion
    • Weidong Deng et al., Sensors 2023 | Pages: 18 | Difficulty: 4/5
    • Abstract: Proposes multimodal deep learning framework combining control flow graphs and opcode sequences for smart contract vulnerability detection. Uses CNN and LSTM models with decision fusion mechanism. Achieves superior performance in detecting reentrancy, timestamp dependence, and other common vulnerabilities compared to single-modality approaches.
    • Keywords: Smart contracts, vulnerability detection, deep learning, multimodal fusion, Ethereum
  4. Deep Learning-based Solution for Smart Contract Vulnerabilities Detection
    • Wentao Li et al., Scientific Reports 2023 | Pages: 14 | Difficulty: 3/5
    • Abstract: Introduces Lightning Cat deep learning framework for detecting smart contract vulnerabilities without predefined rules. Uses LSTM and attention mechanisms to learn vulnerability features during training. Demonstrates effectiveness on real-world Ethereum contracts achieving high detection rates for multiple vulnerability types.
    • Keywords: Smart contracts, deep learning, LSTM, vulnerability detection, Ethereum security
  5. Vulnerability Detection in Smart Contracts: A Comprehensive Survey
    • Anonymous et al., arXiv 2024 | Pages: 35 | Difficulty: 2/5
    • Abstract: Comprehensive systematic review exploring intersection of machine learning and smart contract security. Reviews 100+ papers from 2020-2024 on ML techniques for vulnerability detection and mitigation. Analyzes GNN, SVM, Random Forest, and deep learning approaches with their effectiveness and limitations.
    • Keywords: Survey paper, smart contracts, machine learning, vulnerability detection, blockchain security

C10. AI for Cybersecurity Defense: Phishing Detection

  1. Deep Learning Approaches for Phishing Detection: A Systematic Literature Review
    • Gunikhan Sonowal, K. S. Kuppusamy, SN COMPUT SCI 2020 | Pages: 18 | Difficulty: 2/5
    • Abstract: Systematic review of deep learning methods for phishing detection covering 2015-2020. Categorizes approaches by input features (URL, HTML, visual) and model architecture. Compares performance metrics and identifies research trends and gaps in phishing detection.
    • Keywords: Survey paper, phishing detection, deep learning, website security, URL analysis
  2. Phishing Email Detection Model Using Deep Learning
    • Adel Binbusayyis, Thavavel Vaiyapuri, Electronics 2023 | Pages: 19 | Difficulty: 3/5
    • Abstract: Explores deep learning techniques including CNN, LSTM, RNN, and BERT for email phishing detection. Compares performance across multiple architectures and proposes hybrid model combining CNNs with recurrent layers. Achieves 98% accuracy on real-world email datasets with analysis of model interpretability and deployment considerations.
    • Keywords: Email phishing, deep learning, BERT, CNN-LSTM, natural language processing
  3. (kwak)A Deep Learning-Based Innovative Technique for Phishing Detection with URLs
    • Saleh N. Almuayqil et al., Sensors 2023 | Pages: 20 | Difficulty: 2/5
    • Abstract: Proposes CNN-based model for phishing website detection using character embedding approach on URLs. Evaluates performance on PhishTank dataset achieving high accuracy in distinguishing legitimate from phishing websites. Introduces novel 1D CNN architecture specifically designed for URL-based detection without requiring HTML content analysis.
    • Keywords: Phishing detection, CNN, character embedding, URL analysis, PhishTank dataset
  4. An Improved Transformer-based Model for Detecting Phishing, Spam and Ham Emails
    • Shahzad Jamal, Himanshu Wimmer, arXiv 2023 | Pages: 12 | Difficulty: 3/5
    • Abstract: Proposes IPSDM fine-tuned model based on BERT family addressing sophisticated phishing and spam attacks. Uses DistilBERT and RoBERTa for efficient email classification achieving superior performance over traditional methods. Demonstrates effectiveness of transformer models in understanding email context and identifying subtle phishing indicators.
    • Keywords: Transformer models, BERT, email security, phishing detection, spam filtering

C11. Cyber Threat Intelligence

  1. Deep Learning for Threat Intelligence: A Survey
    • Xiaojun Xu et al., arXiv 2022 | Pages: 25 | Difficulty: 2/5
    • Abstract: Comprehensive survey of deep learning applications in cyber threat intelligence including threat detection, attribution, and prediction. Reviews architectures (CNNs, RNNs, transformers, GNNs) and their applications. Discusses challenges including adversarial attacks and data scarcity.
    • Keywords: Survey paper, threat intelligence, deep learning, threat detection, NLP

C12. AI Model Security & Supply Chain

  1. Weight Poisoning Attacks on Pre-trained Models
    • Keita Kurita et al., ACL 2020 | Pages: 11 | Difficulty: 3/5
    • Abstract: Demonstrates that pre-trained language models in public repositories can be poisoned with backdoors that persist through fine-tuning. Attackers poison model weights such that backdoors activate on downstream tasks after users fine-tuned the model. Highlights supply chain risks in the model-sharing ecosystem.
    • Keywords: Weight poisoning, pre-trained models, backdoor attacks, supply chain security, BERT, transfer learning
  2. Backdoor Attacks on Self-Supervised Learning
    • Aniruddha Saha et al., CVPR 2022 | Pages: 10 | Difficulty: 3/5
    • Abstract: Shows that backdoors injected during self-supervised pre-training transfer to downstream supervised tasks. Even when fine-tuning on clean data, backdoored features persist and can be activated with appropriate triggers. Demonstrates attacks on contrastive learning methods like SimCLR and MoCo.
    • Keywords: Self-supervised learning, backdoor attacks, contrastive learning, transfer learning, SimCLR
  3. Model Stealing Attacks Against Inductive Graph Neural Networks
    • Asim Waheed Duddu et al., IEEE S&P 2022 | Pages: 16 | Difficulty: 4/5
    • Abstract: Demonstrates model extraction attacks specifically targeting graph neural networks. Shows that GNNs are particularly vulnerable to stealing because attackers can query with carefully crafted graphs. Extracts high-fidelity copies of target models with fewer queries than required for traditional neural networks.
    • Keywords: Model stealing, graph neural networks, model extraction, API attacks, intellectual property
  4. Proof-of-Learning: Definitions and Practice
    • Hengrui Jia et al., IEEE S&P 2021 | Pages: 17 | Difficulty: 4/5
    • Abstract: Introduces proof-of-learning, a cryptographic protocol that allows model trainers to prove they performed the training computation honestly. Enables verification that a model was trained as claimed without revealing training data. Addresses concerns about stolen models and fraudulent training claims.
    • Keywords: Proof-of-learning, cryptographic protocols, model verification, training provenance, zero-knowledge proofs

C13. Robustness & Certified Defenses

  1. Certified Adversarial Robustness via Randomized Smoothing
    • Jeremy Cohen et al., ICML 2019 | Pages: 17 | Difficulty: 4/5
    • Abstract: Provides provable robustness certificates using randomized smoothing by adding Gaussian noise. Transforms any classifier into a certifiably robust version with theoretical guarantees. Achieves state-of-the-art certified accuracy on ImageNet and demonstrates scalability to large models and datasets.
    • Keywords: Certified defenses, randomized smoothing, Gaussian noise, provable robustness, theoretical guarantees
  2. Provable Defenses via the Convex Outer Adversarial Polytope
    • Eric Wong, Zico Kolter, ICML 2018 | Pages: 11 | Difficulty: 5/5
    • Abstract: Uses convex optimization to train neural networks with provable robustness guarantees. Computes exact worst-case adversarial loss during training through linear relaxation. Limited to small networks due to computational complexity but provides strongest possible guarantees.
    • Keywords: Certified defenses, convex optimization, provable robustness, linear relaxation, formal verification
  3. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
    • Dan Hendrycks, Thomas Dietterich, ICLR 2019 | Pages: 17 | Difficulty: 2/5
    • Abstract: Introduces ImageNet-C benchmark for evaluating robustness to natural image corruptions like noise, blur, and weather effects. Shows that adversarially trained models often fail on common corruptions despite improved adversarial robustness. Demonstrates importance of testing robustness beyond adversarial perturbations.
    • Keywords: Robustness benchmarks, natural corruptions, distribution shift, model evaluation, ImageNet-C

C14. Interpretability & Verification for Security

  1. DeepXplore: Automated Whitebox Testing of Deep Learning Systems
    • Kexin Pei et al., SOSP 2017 | Pages: 18 | Difficulty: 3/5
    • Abstract: Introduces neuron coverage as a metric for testing deep learning systems. Automatically generates test inputs that maximize differential behavior across multiple models. Discovers thousands of erronous behaviors in production DL systems including self-driving cars.
    • Keywords: DNN testing, neuron coverage, differential testing, automated test generation, model testing
  2. Attention is Not Always Explanation: Quantifying Attention Flow in Transformers
    • Samira Abnar, Willem Zuidema, EMNLP 2020 | Pages: 11 | Difficulty: 3/5
    • Abstract: Analyzes whether attention weights in transformers provide faithful explanations of model behavior. Introduces attention flow to track information through layers. Shows attention weights can be manipulated without changing predictions, questioning their reliability as explanations in security-critical applications.
    • Keywords: Attention mechanisms, interpretability, transformers, explanation faithfulness, NLP analysis

C15. AI for Offensive Security

  1. Generating Adversarial Examples with Adversarial Networks
    • Chaowei Xiao et al., IJCAI 2018 | Pages: 8 | Difficulty: 4/5
    • Abstract: Uses generative adversarial networks (GANs) to create adversarial examples that lie on the natural data manifold. These attacks are more realistic and harder to detect than perturbation-based attacks. Demonstrates successful attacks against defended models that detect out-of-distribution adversarial examples.
    • Keywords: GANs, adversarial examples, generative models, natural adversarial examples, attack generation
  2. Generating Natural Language Adversarial Examples on a Large Scale with Generative Models
    • Yankun Ren et al., EMNLP-IJCNLP 2019 | Pages: 8 | Difficulty: 3/5
    • Abstract: Uses generative models to create adversarial text examples at scale. Generates semantically similar text that fools NLP classifiers. Demonstrates vulnerabilities in sentiment analysis, textual entailment, and question answering systems.
    • Keywords: Adversarial NLP, generative models, text perturbations, semantic similarity, NLP attacks
 
class/gradsec2026.1773796467.txt.gz · Last modified: 2026/03/18 08:14 by mhshin · [Old revisions]
Recent changes RSS feed Powered by PHP Valid XHTML 1.0 Valid CSS Driven by DokuWiki