Bias Mitigation Techniques in AI Decision Models Trained on Social Datasets

Authors

Adaobi Okafor
Social Data Scientist, Nigeria.

Keywords

bias mitigation, AI fairness, algorithmic justice, social datasets, discrimination in AI

Synopsis

Artificial Intelligence (AI) systems, particularly decision-making models applied to social datasets, have demonstrated significant utility in areas such as criminal justice, hiring, healthcare, and welfare allocation. However, these models often inherit and even amplify biases present in historical data, raising serious concerns about fairness, accountability, and transparency. This paper surveys current and emerging bias mitigation techniques for AI decision models, especially those trained on socially sensitive data. By evaluating pre-processing, in-processing, and post-processing methods, we provide a comprehensive view of the effectiveness, limitations, and ethical considerations associated with each approach. Particular attention is given to the evolving landscape of algorithmic fairness, with emphasis on recent advances and proposed directions for future research and policy.


References

(1) Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." California Law Review, vol. 104, no. 3, 2016, pp. 671–732.

(2) Bolukbasi, Tolga, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings." Advances in Neural Information Processing Systems, vol. 29, 2016.

(3) Hardt, Moritz, Eric Price, and Nati Srebro. "Equality of Opportunity in Supervised Learning." Advances in Neural Information Processing Systems, vol. 29, 2016.

(4) Kamiran, Faisal, and Toon Calders. "Data Preprocessing Techniques for Classification Without Discrimination." Knowledge and Information Systems, vol. 33, no. 1, 2012, pp. 1–33.

(5) Sirimalla, A. "Autonomous Performance Tuning Framework for Databases Using Python and Machine Learning." J Artif Intell Mach Learn & Data Sci, vol. 1, no. 4, 2023, pp. 3139–3147. https://doi.org/10.51219/JAIMLD/adithya-sirimalla/642.

(6) Zhang, Brian Hu, Blake Lemoine, and Margaret Mitchell. "Mitigating Unwanted Biases with Adversarial Learning." Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2018, pp. 335–340.

(7) Feldman, Michael, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. "Certifying and Removing Disparate Impact." Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp. 259–268.

(8) Raji, Inioluwa Deborah, and Joy Buolamwini. "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products." Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2019, pp. 429–435.

(9) Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. "Fairness Through Awareness." Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 2012, pp. 214–226.

(10) Binns, Reuben. "Fairness in Machine Learning: Lessons from Political Philosophy." Proceedings of the 2018 Conference on Fairness, Accountability and Transparency (FAT), 2018, pp. 149–159.

(11) Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. "A Survey on Bias and Fairness in Machine Learning." ACM Computing Surveys, vol. 54, no. 6, 2021, article 115.

(12) Corbett-Davies, Sam, and Sharad Goel. "The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning." Communications of the ACM, vol. 63, no. 9, 2020, pp. 139–143.

(13) Sirimalla, A. "End-to-End Automation for Cross-Database DevOps Deployments: CI/CD Pipelines, Schema Drift Detection, and Performance Regression Testing in the Cloud." World Journal of Advanced Research and Reviews, vol. 14, no. 3, 2022, pp. 871–889. https://doi.org/10.30574/wjarr.2022.14.3.0555.

(14) Binns, Reuben, Michael Veale, Max Van Kleek, and Nigel Shadbolt. "'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, article 377.

(15) Pleiss, Geoff, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. "On Fairness and Calibration." Advances in Neural Information Processing Systems, vol. 30, 2017.

IJAIRD

Published

July 10, 2025