Learning Outcomes:
On successful completion of this module, the learner will be able to:
1. Explain foundational AI and cybersecurity concepts relevant to AI pipelines.
2. Identify and exploit vulnerabilities in AI models using recognised adversarial attack methods.
3. Devise and implement robust defences (secure training, monitoring, and adversarial mitigation) in line with modern practice.
4. Integrate governance, ethics, and compliance considerations (e.g., EU AI Act, bias) into AI security strategies.
5. Evaluate new threats and propose forward-looking solutions to secure AI systems against future attack trends.
Indicative Module Content:
1. AI Security Landscape
- AI Foundations - Architectures, Pipelines, and Machine Learning Frameworks
- Cybersecurity Essentials for AI - Attack Surfaces & Defensive Principles
2. Offensive Security
- Threat Modelling & Risk Assessment for AI
- Adversarial Attacks (CV & NLP)
- Generative AI Exploits & Prompt Injection
- Generative Agent Security Fundamentals
- Data Poisoning & Model Backdoors
- Privacy Attacks & Inference Risks
3. Defensive Security
- Robust Training & Adversarial Defences
- AI Security Policy Design & Implementation
- Monitoring, Logging, & Anomaly Detection
- Governance, Compliance & Ethical AI
4. Operational Best Practices & Emerging Topics
- Incident Response & AI Forensics
- Case Studies & Industry Insights
- Future Trends