๐ค
Intermediate10 hours estimated
AI Security Engineer
The fastest-growing specialty in security. Learn to identify and mitigate risks unique to AI/ML systems โ from prompt injection to model poisoning and AI governance.
๐ 4 topic areas๐ 12 curated resources๐ 10 quiz questions
What you'll cover
1
LLM Security Fundamentals
Understand the attack surface of Large Language Models: prompt injection, jailbreaks, data leakage, and insecure output handling.
2
AI Red Teaming
Learn how to systematically test AI systems for safety and security failures before adversaries do.
๐๐๐
AI Red Teaming: How to Test AI Systems Security
Practical guide to AI red teaming methodology and techniques
Microsoft PyRIT โ AI Red Teaming Tool
Open-source Python Risk Identification Toolkit for generative AI
NIST AI RMF (Risk Management Framework)
NIST framework for managing risks in AI systems
3
Model & Training Security
Protect the ML pipeline from data poisoning, model stealing, and backdoor attacks.
4
AI Governance & Compliance
Navigate the EU AI Act, NIST AI RMF, and build internal governance for responsible AI deployment.
Knowledge Check
Path Summary
- Level
- Intermediate
- Estimated time
- 10 hours
- Topics
- 4
- Resources
- 12
- Quiz questions
- 10
- Passing score
- 70% (7/10)