Personal Data Science Projects
Demos about teaching your models to play fair with Fairlearn.
Updated Feb 15, 2023 · Jupyter Notebook
Assess the fairness of machine learning models and choose an appropriate fairness metric for your use case with Fairlearn.
Learn different techniques for mitigating fairness-related harms using Fairlearn.
An end-to-end MLOps pipeline for a production-grade fraud detection model. This project demonstrates best practices including data versioning (DVC), experiment tracking (MLflow), CI/CD (GitHub Actions), containerization (Docker), deployment on GKE, and advanced model analysis (poisoning attacks, drift, fairness, explainability).
Drop-in encrypted Fairlearn metrics over CKKS. Same API surface; ciphertext arithmetic via TenSEAL or OpenFHE.
Microsoft Ignite - Getting started on your health-tech journey using responsible AI
Demos of Fairlearn and InterpretML, as described in my article on responsible AI.
AI-powered bias detection for datasets and ML models — with fairness metrics, natural language reports, and explainability tools.
An ethically aware deep learning project to predict credit card offer acceptance while mitigating income-based bias using SHAP, Fairlearn, and AIF360.
A platform developed with Cash App to help ML engineers detect and visualize biases in models using Fairlearn. Features include a collaborative and interactive dashboard (React, Chart.js), a Flask backend, and a secure MySQL database for data storage and analysis.
AI-powered bias auditing tool that detects demographic disparities in datasets and explains findings in plain English using Gemini.
🔬 Drop in any ML model → get SHAP explainability, fairness audit & drift detection in seconds
Reoffending-risk prediction with a Neo4j knowledge graph and a Fairlearn audit on the COMPAS dataset. A small auditable architecture demonstration.
This repository was used for my thesis. The goal was to find a biased dataset and mitigate its bias; this is done under the patients directory. See the README for more details.
A comprehensive bias analysis of bank loan data, examining potential unfairness in credit quality predictions across age groups.
Practical, implementation-ready AI governance framework aligned to NIST AI RMF — automated risk scoring, data lineage validation, bias detection, model cards, and a governance dashboard. Built for enterprise architects deploying AI in regulated environments.
Bias analysis & mitigation for credit scoring under EU AI Act. Follows Fraunhofer KI-Prüfkatalog (FN). Python · Fairlearn · SHAP · Financial Services AI Governance.
Student Success Model (SSM)
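Several of the projects above audit models with group fairness metrics such as demographic parity. As a dependency-free sketch of what a metric like Fairlearn's `demographic_parity_difference` computes (the implementation below is a hand-rolled illustration on toy data, not the library's code):

```python
def selection_rate(y_pred):
    """Fraction of positive (selected) predictions."""
    return sum(y_pred) / len(y_pred)

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in selection rate between sensitive groups; 0.0 means parity."""
    groups = set(sensitive)
    rates = [selection_rate([p for p, g in zip(y_pred, sensitive) if g == grp])
             for grp in groups]
    return max(rates) - min(rates)

# Toy example: group "a" is selected 2/3 of the time, group "b" only 1/3.
y_pred    = [1, 1, 0, 1, 0, 0]
sensitive = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_difference(y_pred, sensitive))  # ≈ 0.33 (2/3 vs 1/3)
```

Mitigation approaches like Fairlearn's reduction methods then constrain a model so this gap stays below a chosen threshold.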
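A few of the entries above lean on SHAP for explainability. SHAP approximates Shapley values; for a handful of features they can be computed exactly by enumerating feature coalitions, as in this illustrative sketch (pure Python, not the `shap` library's algorithm):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by brute-force coalition enumeration.

    Features in a coalition take their real values from x; the rest are
    held at the baseline. Feasible only for small feature counts, which
    is why SHAP approximates this for real models.
    """
    n = len(x)

    def v(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# Toy linear model: each attribution equals w_i * (x_i - baseline_i).
model = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # → [2.0, 3.0]
```

For a linear model the attributions recover the weighted feature deviations exactly, which makes this a handy sanity check before trusting approximate explainers on more complex models.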