Differences

class:gradsec2026 [2026/05/15 10:37]
hanwoo [C2. Model Poisoning & Backdoor Attacks]
class:gradsec2026 [2026/05/15 10:38] (current)
hanwoo [C3. Privacy Attacks on Machine Learning]
  
 ==== C3. Privacy Attacks on Machine Learning ====
  - <fc red>(kawk)</fc> **Extracting Training Data from Large Language Models**
    * Nicholas Carlini et al., USENIX Security 2021 | Pages: 17 | Difficulty: 3/5
    * Abstract: Demonstrates that large language models like GPT-2 memorize and can be made to emit verbatim training data, including personal information, phone numbers, and copyrighted content. The paper raises serious privacy concerns for LLMs trained on web data and shows that model size correlates with memorization capability.
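The extraction attack in this paper ranks model generations by comparing the model's perplexity against a reference signal such as zlib compression: memorized text tends to have unusually low perplexity relative to how compressible it is. A minimal, self-contained sketch of that ranking step follows; the `toy_log_perplexity` function is a hypothetical stand-in for a real language model's scoring function, which the actual attack requires.

```python
import zlib

def zlib_entropy(text: str) -> float:
    # Compressed size in bits; lower values mean the string is more
    # repetitive or generic, serving as a model-free reference signal.
    return 8.0 * len(zlib.compress(text.encode("utf-8")))

def rank_candidates(samples, log_perplexity):
    # Rank generations by log-perplexity divided by zlib entropy, as in the
    # paper's zlib-ratio membership heuristic: memorized strings score low.
    # `log_perplexity` is assumed to be supplied by the caller (in the real
    # attack, it comes from querying the target language model).
    scored = [(log_perplexity(s) / zlib_entropy(s), s) for s in samples]
    return [s for _, s in sorted(scored)]

# Toy stand-in "model" that assigns low perplexity to a planted string,
# mimicking a verbatim-memorized training example (e.g. a phone number).
def toy_log_perplexity(s: str) -> float:
    return 1.0 if "555-0100" in s else 10.0

samples = [
    "the weather is nice today",
    "call me at 555-0100",
    "random filler text",
]
# The planted "memorized" sample surfaces at the top of the ranking.
print(rank_candidates(samples, toy_log_perplexity)[0])
```

In the paper, candidates flagged this way are then checked for verbatim overlap with the training corpus; the heuristic only prioritizes which generations to inspect.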
 