Designing AI Systems that Support Fairness Across Distributive, Procedural, and Interactional Justice Dimensions
Abstract: The need for fair AI systems has become increasingly pressing as AI's influence grows and critical decisions are made with its help in sectors such as healthcare, finance, and human resources.
AI fairness is not only a matter of fairly distributed outcomes; it also involves fair processes for reaching decisions and the quality of the interactions between the AI system and its users.
This article uses the concepts of organizational justice as a frame to explain how the design of an AI system can support distributive justice (the fair allocation of resources and outcomes), procedural justice (transparent and impartial decision-making processes), and interactional justice (respectful and empathetic communication). An AI system that addresses all three dimensions together is more likely to align with human values and thereby earn greater trust, legitimacy, and acceptance from stakeholders (Colquitt et al., 2013; Binns, 2018).
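To make the distributive dimension concrete, the gap in favorable-outcome rates across groups can be expressed as a simple statistical check. The sketch below is illustrative only; the function name, the binary group encoding, and the sample data are assumptions, not drawn from the article:

```python
# A minimal sketch of operationalizing distributive fairness as the
# demographic parity difference: the gap in positive-decision rates
# between two groups. Data and group labels are illustrative.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-decision rates between two groups."""
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Example: decisions for eight applicants, four per group.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5 -> large gap
```

A gap near zero suggests comparable treatment across groups on this one criterion; the procedural and interactional dimensions require qualitative evaluation beyond such metrics.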
This paper also reviews approaches that build fairness into such systems, including bias mitigation techniques, algorithmic transparency, and user-centric interfaces.
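One widely used family of bias mitigation techniques post-processes model scores. The sketch below, in the spirit of the equalized-odds approach of Hardt et al. (2016) cited in the references, selects a per-group decision threshold that approximates a target true positive rate; all names, scores, and the target rate are illustrative assumptions:

```python
# A minimal post-processing sketch: pick per-group thresholds whose
# true positive rates (TPRs) come closest to a shared target, thereby
# approximating equal opportunity across groups.
import numpy as np

def tpr_at(scores, labels, thresh):
    """True positive rate when predicting positive at or above `thresh`."""
    positives = labels == 1
    return (scores[positives] >= thresh).mean()

def equal_opportunity_thresholds(scores, labels, group, target_tpr=0.8):
    """For each group, choose the candidate threshold whose TPR is
    closest to `target_tpr` (preferring the strictest on ties)."""
    thresholds = {}
    for g in np.unique(group):
        s, y = scores[group == g], labels[group == g]
        candidates = np.unique(s)[::-1]  # descending: strictest first
        gaps = [abs(tpr_at(s, y, t) - target_tpr) for t in candidates]
        thresholds[g] = candidates[int(np.argmin(gaps))]
    return thresholds

scores = np.array([0.9, 0.7, 0.4, 0.8, 0.6, 0.3, 0.85, 0.2])
labels = np.array([1,   1,   0,   1,   1,   0,   1,    0])
group  = np.array([0,   0,   0,   1,   1,   1,   0,    1])
print(equal_opportunity_thresholds(scores, labels, group))
```

On a sample this small the approximation is coarse; in practice one would search a fine threshold grid (or use randomized thresholds) over held-out data.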
Finally, the authors discuss persistent open challenges (for instance, data bias and ethical trade-offs) and recommend future research directions for advancing just AI systems (Miller, 2017; Selbst et al., 2019).
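One such trade-off can be shown with a few lines of arithmetic, in the spirit of the impossibility results of Kleinberg et al. (2016) cited in the references. The rates below are illustrative assumptions:

```python
# A numeric illustration of one ethical trade-off: when base rates
# differ across groups, a classifier with identical TPR and FPR in
# both groups cannot also satisfy demographic parity.
tpr, fpr = 0.9, 0.1                           # same error profile in both groups
base_rate = {"group_a": 0.5, "group_b": 0.2}  # illustrative prevalences

for g, p in base_rate.items():
    # Overall positive-decision rate: P(pred = 1) = TPR*p + FPR*(1 - p)
    selection_rate = tpr * p + fpr * (1 - p)
    print(g, round(selection_rate, 2))
# group_a 0.5 vs. group_b 0.26: demographic parity fails despite
# equalized error rates, so designers must choose which notion to favor.
```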
References
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.15779/Z38BG31
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Bies, R. J., & Moag, J. S. (1986). Interactional justice: Communication criteria of fairness. In R. J. Lewicki, B. H. Sheppard, & M. H. Bazerman (Eds.), Research on negotiation in organizations (Vol. 1, pp. 43–55). JAI Press.
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (FAT), 149–159. https://doi.org/10.1145/3287560.3287583
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. https://doi.org/10.48550/arXiv.1702.08608
Eiband, M., Schneider, H., Bilandzic, M., Fazekas-Con, T., Haug, M., & Hussmann, H. (2018). Bringing transparency design into practice. Proceedings of the 23rd International Conference on Intelligent User Interfaces (IUI '18), 211–223. https://doi.org/10.1145/3172944.3172961
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2019). The (im)possibility of fairness: Different value systems require different mechanisms for fair decision-making. Communications of the ACM, 62(4), 82–89. https://doi.org/10.1145/3287560
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 330–347. https://doi.org/10.1145/230538.230561
Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2019). Predictably unequal? The effects of machine learning on credit markets. National Bureau of Economic Research. https://doi.org/10.3386/w25943
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation." AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741
Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems (NeurIPS), 29, 3315–3323. https://proceedings.neurips.cc/paper_files/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf
Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3290605.3300830
Kim, P. T. (2017). Auditing algorithms for discrimination. University of Pennsylvania Law Review Online, 166, 189–203. https://scholarship.law.upenn.edu/penn_law_review_online/vol166/iss1/12/
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807. https://doi.org/10.48550/arXiv.1609.05807
Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490. https://doi.org/10.48550/arXiv.1606.03490
Miller, T. (2017). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Picard, R. W. (1997). Affective computing. MIT Press.
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., ... & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y
Raji, I. D., Smart, A., White, R., Mitchell, M., Gebru, T., Hutchinson, B., ... & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT), 33–44. https://doi.org/10.1145/3351095.3372873
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
Verma, S., & Rubin, J. (2018). Fairness definitions explained. 2018 IEEE/ACM International Workshop on Software Fairness (FairWare), 1–7. https://doi.org/10.1145/3194770.3194776
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.pdf
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., ... & Schwartz, O. (2018). AI Now Report 2018. AI Now Institute. https://ainowinstitute.org/AI_Now_2018_Report.pdf

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License, under which authors retain copyright of their work. The license permits unrestricted use, sharing, and reproduction of the article, provided proper credit is given to the original authors and the source.