Detecting Algorithmic Bias in AI-Powered Predictive Educational Tools
Program: Data Science Master's Degree
Location: Not Specified (remote)
Student: Taylor Molzahn
My project focuses on detecting and mitigating algorithmic bias within AI-powered predictive educational tools. Drawing on a Kaggle dataset of student performance in multiple subjects, I examine how demographic factors, such as gender, part-time job status, and extracurricular involvement, may influence predictive models, shaping student outcomes and opportunities.
The primary objective is to identify biases, measure their impact, and apply targeted interventions to improve fairness in these models. Through an exploratory case study, I implement and compare machine learning techniques (random forest, logistic regression, and gradient boosting) to assess their predictive accuracy and fairness. I also employ statistical tests and bias mitigation methods such as SMOTE (Synthetic Minority Over-sampling Technique) and propensity scoring to address data imbalances and reduce prediction disparities.
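The comparison described above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: the synthetic data stands in for the Kaggle student-performance dataset, and the `smote` helper is a simplified, hand-rolled version of SMOTE (interpolating minority samples toward random minority neighbors) rather than a production implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic stand-in for the student-performance data, with imbalanced labels.
X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def smote(X, y, k=5):
    """Minimal SMOTE sketch: interpolate each new minority point between a
    real minority sample and one of its k nearest minority neighbors."""
    minority = np.flatnonzero(y == 1)
    n_new = np.sum(y == 0) - minority.size  # how many points to reach balance
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X[minority])
    base = rng.choice(minority, size=n_new)
    _, idx = nn.kneighbors(X[base])  # idx[:, 0] is the point itself
    neigh = X[minority][idx[np.arange(n_new), rng.integers(1, k + 1, n_new)]]
    gap = rng.random((n_new, 1))
    X_new = X[base] + gap * (neigh - X[base])
    return np.vstack([X, X_new]), np.concatenate([y, np.ones(n_new, dtype=y.dtype)])

X_bal, y_bal = smote(X_tr, y_tr)

# Fit the three model families on the rebalanced data and compare accuracy.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
results = {name: accuracy_score(y_te, m.fit(X_bal, y_bal).predict(X_te))
           for name, m in models.items()}
print(results)
```

In practice the `imbalanced-learn` package provides a tested SMOTE implementation; the hand-rolled version here just keeps the sketch self-contained.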
My findings highlight persistent performance gaps, with noticeable discrepancies for specific demographic groups, including differences in STEM-related achievement and extracurricular involvement. I work toward more equitable predictive outcomes by refining the models and integrating fairness-aware methods during the data-processing and modeling phases.
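Group-level performance gaps like those described above can be quantified with standard fairness metrics. The sketch below, on toy data rather than the project's dataset, computes two common ones: the demographic parity gap (difference in positive-prediction rates between groups) and the equal opportunity gap (difference in true-positive rates). The binary `group` flag is a placeholder for a demographic attribute such as gender.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    g = np.asarray(group, dtype=bool)
    return abs(y_pred[g].mean() - y_pred[~g].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    g = np.asarray(group, dtype=bool)
    pos = np.asarray(y_true) == 1
    return abs(y_pred[g & pos].mean() - y_pred[~g & pos].mean())

# Toy labels, predictions, and a hypothetical binary demographic flag.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])

print(demographic_parity_gap(y_pred, group))          # 0.25
print(equal_opportunity_gap(y_true, y_pred, group))
```

A gap of zero on either metric would mean the model treats the two groups identically by that criterion; in practice, fairness-aware training aims to shrink these gaps without sacrificing overall accuracy.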
This research contributes to a broader dialogue on ethical AI use in education and beyond. It sheds light on the challenges of achieving equitable machine learning predictions and offers practical strategies for mitigating bias. Ultimately, these insights guide practitioners, policymakers, and developers toward more responsible, transparent, and just AI-driven educational tools.