I’m a CS student at the University of Guelph interested in building AI systems and understanding how they fail, especially in security, privacy, and adversarial settings.
- GenAI Privacy Audit: Membership inference attacks on GAN discriminators and differential privacy as a defense.
- LLM Redteam Lab: Automated LLM red-teaming system for prompt injection and guardrail testing.
- Federated Poison Simulator: Simulation exploring poisoning attacks against federated learning aggregation.
- NPM Scanner: CLI vulnerability scanner for `package.json` using the OSV database.
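
The core lookup behind a scanner like that can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the endpoint URL and the query payload shape come from the public OSV API docs, while the function names and the `lstrip("^~")` handling of semver range prefixes are my own simplifications (OSV queries expect a concrete version).

```python
import json
import urllib.request

OSV_API = "https://api.osv.dev/v1/query"  # documented OSV query endpoint

def build_query(name: str, version: str) -> dict:
    # OSV expects the ecosystem, the package name, and a concrete version.
    return {"package": {"ecosystem": "npm", "name": name}, "version": version}

def scan(package_json: dict) -> dict:
    """Return {dependency: [vulnerability ids]} for each dependency."""
    results = {}
    for name, version in package_json.get("dependencies", {}).items():
        # Strip common range prefixes (^, ~) to get a concrete version.
        payload = json.dumps(build_query(name, version.lstrip("^~"))).encode()
        req = urllib.request.Request(
            OSV_API, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            vulns = json.load(resp).get("vulns", [])
        results[name] = [v["id"] for v in vulns]
    return results
```

A real scanner would also walk `devDependencies`, resolve ranges against the lockfile, and batch requests via OSV's `querybatch` endpoint.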
I also publish open-source CS notes in Markdown to make technical topics easier for fellow students to learn:
Format: Markdown with LaTeX math · Best viewed in Obsidian or GitHub
