Case Study: Empowering Corporate Governance in Healthcare BPOs with Explainable AI in Bangalore, India


Dr. Shyam Shukla
Shreedhar Deshmukh

Abstract: Corporate governance is the system that guides how a company operates and makes decisions, ensuring transparency, fairness, and accountability in its management and leadership practices. This case study investigates how Explainable Artificial Intelligence (XAI) is influencing corporate governance practices in healthcare Business Process Outsourcing (BPO) firms based in Bangalore, India. Transparent and interpretable AI systems are increasingly viewed as tools that promote fairness, accountability, and ethical conduct in decision-making (Adadi & Berrada, 2018). XAI enables managers to understand how automated decisions are reached, supporting compliance with internal policies and external regulations (Gunning et al., 2019). By focusing on the intersection of technology and governance, explainability emerges not merely as a technical feature but as a key governance mechanism that strengthens oversight and builds organizational trust (Doshi-Velez & Kim, 2017).

The rapid expansion of Artificial Intelligence within healthcare BPOs has improved efficiency but has also introduced new challenges related to transparency, ethical responsibility, and regulatory compliance (Guidotti et al., 2019). In many organizations, AI systems operate as opaque “black boxes,” offering limited insight into how outputs are generated (Samek et al., 2021). This opacity can weaken internal control mechanisms and complicate audit processes, especially in healthcare environments that depend on confidentiality, data accuracy, and ethical accountability. Explainable AI addresses these challenges by making algorithmic processes interpretable for human users: transparent models allow administrators, compliance officers, and clients to build confidence in automated decision systems (Miller, 2019). The central research question guiding this study is: In what ways can XAI enhance corporate governance mechanisms within healthcare BPO organizations in Bangalore?
The study posits that explainability serves a dual purpose—enhancing the technical reliability of AI while reinforcing governance and ethical integrity. Through an analysis of real-world applications across selected healthcare BPOs, this paper identifies how explainable systems foster accountability, ensure data integrity, and strengthen stakeholder trust in AI-enabled environments.
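To make the idea of explainability as a governance mechanism concrete, the sketch below shows the simplest form it can take: a linear scoring model whose output decomposes exactly into per-feature contributions, so a compliance officer can audit why any given claim was flagged. The feature names, weights, and the claim-triage scenario are illustrative assumptions for this sketch, not taken from the study.

```python
# Minimal sketch of feature-level explainability for a hypothetical
# claim-triage score in a healthcare BPO. Weights and feature names
# are illustrative assumptions, not the study's actual model.

WEIGHTS = {
    "missing_fields": -0.8,    # incomplete claims lower the score
    "provider_verified": 1.5,  # verified providers raise it
    "amount_zscore": -0.4,     # unusually large amounts lower it
}
BIAS = 0.2

def score(features):
    """Linear score: bias plus the weighted sum of feature values."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """Per-feature contributions, largest magnitude first, so an
    auditor can see which inputs drove the decision."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

claim = {"missing_fields": 2, "provider_verified": 1, "amount_zscore": 0.5}
print(score(claim))    # single opaque number an auditor cannot question
print(explain(claim))  # ranked reasons behind that number
```

For complex models (gradient-boosted trees, neural networks) the same contract is typically provided by post-hoc attribution methods such as LIME (Ribeiro et al., 2016) or SHAP values; the governance point is unchanged: each automated decision ships with a ranked, human-readable account of its inputs.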

Case Study: Empowering Corporate Governance in Healthcare BPOs with Explainable AI in Bangalore, India. (2025). International Journal of Latest Technology in Engineering Management & Applied Science, 14(10), 419–425. https://doi.org/10.51583/IJLTEMAS.2025.1410000054

References

Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.

Bhardwaj, P., & Ghosh, S. (2021). AI-driven automation in healthcare outsourcing: Opportunities and challenges. Journal of Business Process Management, 27(3), 456–472.

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009

Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120. https://doi.org/10.1126/scirobotics.aay7120

Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2022). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(2), e1452.

Johnson, A. E., Pollard, T. J., Shen, L., Lehman, L. H., Feng, M., Ghassemi, M., & Mark, R. G. (2016). MIMIC-III, a freely accessible critical care database. Scientific Data, 3, 160035.

Kumar, V., Singh, A., & Gupta, R. (2021). Leveraging artificial intelligence for smarter healthcare BPO operations. International Journal of Health Information Systems, 17(4), 233–248.

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347–1358.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K.-R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247–278. https://doi.org/10.1109/JPROC.2021.3060483

Voigt, P., & Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR): A practical guide. Cham: Springer.
