

Overview

  • Title: Special Topics on AI Security
  • Provided by: Dept. of Computer Engineering, Myongji University
  • Led by: Minho Shin (mhshin@mju.ac.kr, Rm5736)
  • Period: Spring semester, 2026
  • Location: 5701 at 5th Engineering Building
  • Time: Wed, 10am to 1pm
  • Type: Graduate Seminar
  • Goal of the class
    • This class aims to familiarize students with current research topics in the AI Security & Privacy area
    • This class also aims to train students in communication skills, including oral presentation, discussion, writing, and collaboration
  • Resources for researchers from Publishing Campus of Elsevier

Participants

 #  Name           Dept  Advisor        Email Address
 1  Hyeonjun Jo    CE    Undergraduate  mnbvjojun@gmail.com
 2  Nayung Kwak    CE    Undergraduate  kny12202423@gmail.com
 3  Kyungchan Kim  CS    Minho Shin     kkc8983@gmail.com

Agenda

TBD

* order: Cho --> Han --> Kwak
* # of presentations per week: 2, 2, 2, ...
* # of presentations per person: 
Date Name Topic Slides Minutes
3/4 Minho Ice-breaking AI-Cybersecurity Survey paper
3/11 Minho
Cho https://www.usenix.org/system/files/sec21-schuster.pdf
9/17 No Class
9/24 Jung
Zang
Chang
Sang
10/1 Jung
Zang
Chang --> Sang
10/8 No Class
10/15 Sang --> Chang
Jung
Zang
10/22 Chang
Sang
10/29 No Class
11/5 Jung
Zang
Chang
11/12 No Class
11/19 Sang
Jung
Zang
11/26 Jung
Sang
Chang
12/3 Zang
Chang --> Jung
Sang
12/10 Jung --> Chang
Zang

Class Information

  • Rules for the class
    • There are 15 presentations in total, given by three students
    • Each student gives five presentations throughout the semester
    • One presentation per day
    • The presenter announces the paper to be presented at least one week in advance
    • The presenter prepares PowerPoint slides for a 30-60 minute talk
    • The other students submit a review article (1-2 pages) before class
    • The presentation should contain:
      • (Motivation) What motivates this particular problem? What background is needed to understand it? Why is it important?
      • (Problem) What exactly is the problem the authors aim to address, and why is that problem important?
      • (Related work) What has been done by other researchers to address the same or similar problems? Why is the existing work not sufficient?
      • (Method) What is the authors' main methodology for addressing the problem? How did they actually solve it, in detail?
      • (Evaluation) What evidence of success is presented in the paper? What is missing from the evaluation?
      • (Contribution) What are the paper's contributions, and what is not a contribution? Are there limitations in the results? How would you assess the paper's value?
      • (Future work) What problems remain that were only partially addressed or not covered by the paper? What would be a possible approach to them?
    • A review article contains
      • The same content as described for the presenter
      • But in succinct written form
      • Not exceeding two pages
      • Submit in Word/PDF by email
    • Evaluation
      • As a Presenter (10 points each)
        • Slide Quality
        • Talk Quality
        • Knowledge Level
      • As a reviewer (5 points each)
        • Clarity of the review
        • Understanding level

Reading List for LLM-based Cybersecurity

Intrusion Detection

  1. (Changyeol) Yin et al. [55]: "A deep learning approach for intrusion detection using recurrent neural networks." This paper proposes a deep learning model called RNN-ID and evaluates its performance in binary and multiclass classification tasks for intrusion detection.
  2. (Sangbin) Xu et al. [58]: "An intrusion detection system using a deep neural network with gated recurrent units." This paper proposes a novel IDS that uses a recurrent neural network with GRUs, an MLP, and a softmax module.
  3. Ferrag and Maglaras [59]: "Deepcoin: A novel deep learning and blockchain-based energy exchange framework for smart grids." This paper proposes a framework that uses a deep learning-based scheme employing RNNs to detect network attacks and fraudulent transactions.
  4. (Sehyeon) Chawla et al. [60]: "Host based intrusion detection system with combined cnn/rnn model." The authors present an anomaly-based IDS that leverages RNNs with GRUs and stacked CNNs to detect malicious cyberattacks.
  5. Ullah et al. [61]: "Design and development of rnn anomaly detection model for iot networks." This work introduces deep learning models using RNNs, CNNs, and hybrid techniques for anomaly detection in IoT networks.
  6. (Sohyeon) Donkol et al. [62]: "Optimization of intrusion detection using likely point pso and enhanced lstm-rnn hybrid technique in communication networks." This paper presents ELSTM-RNN, a technique to improve security in IDSs by using an enhanced LSTM framework combined with an optimization technique.
  7. Zhao et al. [63]: "Ernn: Error-resilient rnn for encrypted traffic detection towards network-induced phenomena." This paper presents ERNN, an end-to-end RNN model with a novel session gate, designed to address network-induced phenomena that can lead to misclassifications in traffic detection systems.
  8. Polat et al. [65]: "A novel approach for accurate detection of the ddos attacks in sdn-based scada systems based on deep recurrent neural networks." This paper introduces a method for improving DDoS attack detection in SDN-based SCADA systems using an RNN classifier model with parallel LSTM and GRU methods.
  9. (Sehyeon) Althubiti et al. [57]: "Applying long short-term memory recurrent neural network for intrusion detection." The authors propose a deep learning-based intrusion detection system (IDS) using an LSTM RNN to classify and predict known and unknown intrusions.

Software Security

  • (Sangbin) Wang et al. [64]: "Patchrnn: A deep learning-based system for security patch identification".
  • Thapa et al. [71]: "Transformer-based language models for software vulnerability detection".
  • Fu et al. [73]: "Linevul: a transformer-based line-level vulnerability prediction".
  • (Sehyeon) Mamede et al. [74]: "A transformer-based ide plugin for vulnerability detection".
  • Liu et al. [77]: "Commitbart: A large pre-trained model for github commits".
  • (Changyeol) Ding et al. [95]: "Vulnerability detection with code language models: How far are we?".
  • Mechri et al. [88]: "Secureqwen: Leveraging llms for vulnerability detection in python codebases".
  • Guo et al. [85]: "Outside the comfort zone: Analysing llm capabilities in software vulnerability detection".
  • Lykousas and Patsakis [86]: "Decoding developer password patterns: A comparative analysis of password extraction and selection practices".
  • Harzevili et al. [173]: "A survey on automated software vulnerability detection using machine learning and deep learning".
  • Schuster et al. [165]: "You autocomplete me: Poisoning vulnerabilities in neural code completion".
  • Asare et al. [166]: "Is github’s copilot as bad as humans at introducing vulnerabilities in code?".
  • Sandoval et al. [167]: "Lost at c: A user study on the security implications of large language model code assistants".
  • (Sohyeon) Perry et al. [168]: "Do users write more insecure code with ai assistants?".
  • (Sangbin) Hamer et al. [169]: "Just another copy and paste? comparing the security vulnerabilities of chatgpt generated code and stackoverflow answers".
  • Cotroneo et al. [170]: "Devaic: A tool for security assessment of ai-generated code".
  • Tóth et al. [171]: "Llms in web-development: Evaluating llm-generated php code unveiling vulnerabilities and limitations".
  • Tihanyi et al. [172]: "Do neutral prompts produce insecure code? formai-v2 dataset: Labelling vulnerabilities in code generated by large language models".
  • Zheng et al. [176]: "D2a: A dataset built for ai-based vulnerability detection methods using differential analysis".
  • Zhou et al. [177]: "Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks".
  • Hanif et al. [178]: "The rise of software vulnerability: Taxonomy of software vulnerabilities detection and machine learning approaches".
  • (Sehyeon) Russell et al. [179]: "Automated vulnerability detection in source code using deep representation learning".
  • Wartschinski et al. [182]: "Vudenc: Vulnerability detection with deep learning on a natural codebase for python".
  • Fan et al. [183]: "A c/c++ code vulnerability dataset with code changes and cve summaries".
  • (Changyeol) Bhandari et al. [184]: "Cvefixes: automated collection of vulnerabilities and their fixes from open-source software".
  • Nikitopoulos et al. [185]: "Crossvul: a cross-language vulnerability dataset with commit data".
  • Li et al. [186]: "Sysevr: A framework for using deep learning to detect software vulnerabilities".
  • (Sohyeon) Li et al. [187]: "Vuldeepecker: A deep learning-based system for vulnerability detection".
  • Chen et al. [188]: "DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection".
  • Gadde et al. [189]: "All artificial, less intelligence: Genai through the lens of formal verification".

Malware Classification

  • (Sohyeon) Ziems et al. [67]: This study explores transformer-based models for malware classification using API call sequences as features.
  • Demirkıran et al. [69]: This paper proposes using transformer-based models for classifying malware families, demonstrating that they are better suited for capturing sequence relationships among API calls than traditional models.
  • Patsakis et al. [84]: This work investigates the application of LLMs in malware deobfuscation, focusing on real-world scripts from the Emotet malware campaign.
  • Gaber et al. [94]: This paper introduces a framework that uses Transformer models for zero-day ransomware detection by analyzing Assembly instructions.
  • (Changyeol) Automated malware classification based on network behavior
  • (Sohyeon) "Explainable Deep Learning-Enabled Malware Attack Detection for IoT-Enabled Intelligent Transportation Systems"

Blockchain Security

  • He et al. [39]: "Large language models for blockchain security: A systematic literature review." This paper analyzes existing research to understand how LLMs can improve blockchain systems' security.
  • (Changyeol) Ding et al. [89]: "Smartguard: An llm-enhanced framework for smart contract vulnerability detection." This paper presents a framework that combines LLMs with advanced reasoning techniques to detect vulnerabilities in smart contracts.
  • Arshad et al. [90]: "Blockllm: A futuristic llm-based decentralized vehicular network architecture for secure communications." The authors introduce a decentralized network architecture for autonomous vehicles that integrates blockchain with LLMs to improve security and communication.
  • (Sangbin) Xiao et al. [91]: "Logic meets magic: Llms cracking smart contract vulnerabilities." This paper advances the field of smart contract vulnerability detection by focusing on the latest Solidity version and leveraging advanced prompting techniques with five cutting-edge LLMs.

Cyber Threat Intelligence

  • (Sangbin) Evangelatos et al. [75]: This paper investigates the use of transformer-based models for Named Entity Recognition (NER) in cyber threat intelligence.
  • (Changyeol) Ranade et al. [72]: This work presents a method for automatically generating fake CTI using transformer-based models to mislead cyber-defense systems.
  • Hashemi et al. [76]: The authors propose an alternative approach for automated vulnerability information extraction from vulnerability descriptions using Transformer models like BERT, XLNet, and RoBERTa.
  • Ferrag et al. [20]: "Revolutionizing Cyber Threat Detection with Large Language Models: A Privacy-Preserving BERT-Based Lightweight Model for IoT/IIoT Devices." This paper discusses leveraging LLMs for cyber threat detection and analysis in IoT/IIoT networks.
  • Parra et al. [66] proposed an interpretable federated transformer log learning model for threat detection, validating its effectiveness with real-world datasets.
  • Karlsen et al. [87] proposed the LLM4Sec framework, which benchmarks fine-tuned models for cybersecurity log analysis, with DistilRoBERTa achieving an exceptional F1-score of 0.998 across diverse datasets.

Phishing Detection and Response

  • (Sohyeon) Jamal et al. [25]: "An improved transformer-based model for detecting phishing, spam and ham emails: A large language model approach." This paper proposes IPSDM, a fine-tuned model based on the BERT family, to address the growing sophistication of phishing and spam attacks.
  • Koide et al. [96]: "Chatspamdetector: Leveraging large language models for effective phishing email detection." This work introduces a novel system leveraging LLMs to detect phishing emails, achieving a high accuracy rate and providing detailed reasoning for its determinations.
  • Heiding et al. [97]: "Devising and detecting phishing emails using large language models." This study compares automatically generated phishing emails by GPT-4 and other methods, and also evaluates the capability of four different LLMs to detect phishing intentions.
  • (Sehyeon) Chataut et al. [98]: "Can ai keep you safe? a study of large language models for phishing detection." This paper emphasizes the necessity for continual development and adaptation of detection models to keep pace with evolving phishing strategies, highlighting the potential role of LLMs.

Detection of Deepfake Videos

  • (Sehyeon) Güera et al. [56]: "Deepfake video detection using recurrent neural networks". This paper proposes a temporal-aware pipeline that uses a convolutional neural network (CNN) to extract frame-level features and a recurrent neural network (RNN) to classify the videos. The authors found that their system could achieve competitive results with a simple architecture.

Reading List for LLM Vulnerability

Prompt Injection

  • Perez and Ribeiro [191]: "Ignore previous prompt: Attack techniques for language models".
  • Greshake et al. [192]: "More than you've asked for: A comprehensive analysis of novel prompt injection threats to application-integrated large language models".
  • Yan et al. [193]: "Virtual prompt injection for instruction-tuned large language models".
  • (Sehyeon) Pedro et al. [194]: "From prompt injections to sql injection attacks: How protected is your llm-integrated web application?".
  • Abdelnabi et al. [195]: "Not what you've signed up for: Compromising real-world llm-integrated applications with indirect prompt injection".
  • Liu et al. [196]: "Prompt injection attack against llm-integrated applications".
  • Yan et al. [197]: "Backdooring instruction-tuned large language models with virtual prompt injection".
  • Glukhov et al. [198]: "Llm censorship: A machine learning challenge or a computer security problem?".

Automatic Adversarial Prompt Generation

  • Zou et al. [201]: "Universal and transferable adversarial attacks on aligned language models." This paper proposes a method for automatically generating adversarial prompts in aligned language models by crafting a targeted suffix that, when appended to LLM queries, maximizes the likelihood of producing objectionable or undesirable content.

Adversarial Natural Language Instructions

  • Wu et al. [199]: This paper introduces "DeceptPrompt," a novel algorithm that can generate adversarial natural language instructions that drive Code LLMs to produce functionally correct code with hidden vulnerabilities. The algorithm uses a systematic evolution-based methodology with a fine-grained loss design to craft deceptive prompts.
  • Son et al. [200]: This paper discusses "Adversarial attacks and defenses in 6G network-assisted IoT systems". While its primary focus is on a broader context of adversarial machine learning in 6G networks, it is cited in the Ferrag paper's section on Adversarial Natural Language Instructions.

Data Poisoning

  • (Sohyeon) Yang et al. [202]: "Data poisoning attacks against multimodal encoders". This paper discusses data poisoning attacks that manipulate the training dataset to skew a model's learning process.
  • (Sangbin) Gupta et al. [204]: "A novel data poisoning attack in federated learning based on inverted loss function". This paper describes a data poisoning attack in the context of federated learning.
  • Cinà et al. [203]: "Wild patterns reloaded: A survey of machine learning security against training data poisoning". This paper provides a survey of machine learning security against training data poisoning.
  • He et al. [205]: "Talk too much: Poisoning large language models under token limit". This paper details an attack that subtly alters input data to trigger malicious behaviors in a model based on conditional output limitations.
  • (Changyeol) He and Jiang: "Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models".
 
class/gradsec2026.1773199952.txt.gz · Last modified: 2026/03/11 10:32 by jhj2004