Learning Outcomes:
On successful completion of this module the learner will be able to:
1. Understand foundational concepts of AI.
2. Apply AI methods to gather, analyse, and interpret digital evidence in investigative scenarios.
3. Investigate AI systems (including Generative AI), identifying biases, verifying system integrity, and explaining decision processes using modern Explainable AI (XAI) techniques.
4. Evaluate legal, ethical, and regulatory requirements governing AI usage in investigations.
Indicative Module Content:
1. AI for Investigations
– Data acquisition/preprocessing from heterogeneous sources
– Automated evidence extraction (images, documents, logs)
– ML and GenAI techniques in forensic contexts
– NLP for eDiscovery and social-media/open-source intelligence (OSINT); see the sketch below
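A minimal sketch of the evidence-extraction and eDiscovery ideas above, assuming Python with spaCy and its small English model installed; the exhibit file name, the choice of SHA-256 as the integrity digest, and the output format are illustrative assumptions rather than module requirements.

```python
# Minimal sketch: entity extraction + integrity hash for a text exhibit.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import hashlib
from pathlib import Path

import spacy

nlp = spacy.load("en_core_web_sm")

def process_exhibit(path: str) -> dict:
    raw = Path(path).read_bytes()
    text = raw.decode("utf-8", errors="replace")
    doc = nlp(text)
    return {
        "sha256": hashlib.sha256(raw).hexdigest(),                  # integrity reference
        "entities": [(ent.text, ent.label_) for ent in doc.ents],   # people, orgs, dates, ...
    }

# Example call (hypothetical file name):
# print(process_exhibit("email_export_0421.txt"))
```

In practice the same pattern is applied across batches of documents, with extracted entities feeding case databases or OSINT link analysis.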
2. Investigating AI Models
– Overview of ML architectures (including Transformers for Generative AI)
– Bias and fairness auditing frameworks (metrics, data imbalance)
– Adversarial attacks on AI systems (data poisoning, model inversion)
– Explainable AI (XAI) techniques (LIME, SHAP, etc.); see the sketch below
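A minimal sketch of the XAI bullet above, assuming Python with scikit-learn and the shap package; the synthetic dataset, the random-forest model, and the print-out are illustrative only, not a prescribed workflow.

```python
# Minimal XAI sketch: SHAP values for a tree-based classifier on synthetic data.
# Assumes: pip install shap scikit-learn numpy
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:5])   # per-feature contributions for 5 cases

# Depending on the shap version, the result is a list of per-class arrays or a
# single 3-D array; either way, the magnitudes show which features drove each
# individual prediction.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
print(np.round(vals, 3))
```

LIME follows the same per-prediction logic but fits a local surrogate model around the instance instead of computing Shapley values.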
3. GenAI-Specific Investigations
– Large language models (GPT-style), diffusion models (e.g., Stable Diffusion)
– Detecting harmful outputs (disinformation, deepfakes)
– Watermarking, content tracing, and forensic signatures in GenAI outputs (see the sketch after this list)
– Ethical and regulatory constraints (emerging AI legislation such as the EU AI Act, data protection, content moderation)
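For the forensic-signatures bullet, the sketch below computes a text's perplexity under GPT-2 as one crude statistical signal sometimes consulted (alongside watermark checks and provenance metadata) when screening for machine-generated text. It assumes PyTorch and Hugging Face transformers are installed; the 512-token truncation is an arbitrary choice, and perplexity alone is not a reliable detector.

```python
# Minimal sketch: per-text perplexity under a reference language model (GPT-2).
# Low perplexity is only a weak hint that text may be machine-generated.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean token negative log-likelihood
    return torch.exp(out.loss).item()

print(perplexity("The quarterly report was submitted on time by the finance team."))
```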
4. Case Studies & Tools
– Fraud detection, insider threat analysis, corporate compliance (see the sketch at the end of this section)
– AI-driven policing or intelligence gathering
– OSINT (Open-Source Intelligence) tools and specialized forensic platforms
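To anchor the fraud-detection case study, here is a minimal unsupervised sketch using an Isolation Forest over synthetic transaction features; the feature choice (amount, hour bucket) and the 1% contamination rate are illustrative assumptions, not module requirements. It assumes Python with scikit-learn and NumPy.

```python
# Minimal fraud-detection-style sketch: unsupervised anomaly scoring of
# transaction features with an Isolation Forest.
# Assumes: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 2], scale=[20, 1], size=(990, 2))      # amount, hour bucket
outliers = rng.normal(loc=[5000, 3], scale=[500, 1], size=(10, 2))  # a few large, odd-hour transactions
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)         # -1 = flagged as anomalous, 1 = normal
scores = model.score_samples(X)  # lower = more anomalous

print("flagged rows:", np.where(flags == -1)[0])
```

Flagged rows would normally be triaged by an analyst rather than treated as findings in their own right.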