INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue II, February 2026
on the intersection of digital innovation, artificial intelligence, and healthcare transformation. His academic
interests include healthcare analytics, digital health ecosystems, and emerging technologies in the life sciences. He
actively analyzes contemporary developments in healthcare and regularly shares thought-provoking perspectives
on innovation and leadership in the sector.
REFERENCES
1. Ministry of Health and Family Welfare, Government of India. National Health Policy 2017. MoHFW:
New Delhi, 2017.
2. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for
detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410.
3. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis. PLOS Medicine.
2018;15(11):e1002686.
4. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural
networks. Nature. 2017;542:115-118.
5. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care:
A call for open science. Patterns. 2021;2(10):100347.
6. Goldberger AL, Amaral LAN, Glass L, et al. PhysioBank, PhysioToolkit, and PhysioNet. Circulation.
2000;101(23):e215-e220.
7. Shih G, Wu CC, Halabi SS, et al. Augmenting the National Institutes of Health chest radiograph
dataset with expert annotations of possible pneumonia. Radiology: Artificial Intelligence.
2019;1(1):e180041.
8. Hripcsak G, Duke JD, Shah NH, et al. Observational Health Data Sciences and Informatics (OHDSI):
Opportunities for observational researchers. Stud Health Technol Inform. 2015;216:574-578.
9. Nagendran M, Chen Y, Lovejoy CA, et al. Artificial intelligence versus clinicians: systematic review of
design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689.
10. NITI Aayog. Responsible AI for All: Adopting the Framework — A Use Case Approach for All. NITI
Aayog: New Delhi, 2024.
11. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to
manage the health of populations. Science. 2019;366(6464):447-453.
12. Menon GR, Singh L, Sharma P, et al. National burden estimates of healthy life lost in India, 2017.
Indian Journal of Medical Research. 2019;150(2):116-128.
13. Obermeyer Z, Emanuel EJ. Predicting the future — big data, machine learning, and clinical medicine.
New England Journal of Medicine. 2016;375:1216-1219.
14. Wiens J, Saria S, Sendak M, et al. Do no harm: a roadmap for responsible machine learning for
health care. Nature Medicine. 2019;25:1337-1340.
15. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated
receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837-845.
16. Chouldechova A. Fair prediction with disparate impact: A study of bias in recidivism prediction
instruments. Big Data. 2017;5(2):153-163.
17. Mitchell M, Wu S, Zaldivar A, et al. Model cards for model reporting. Proceedings of FAT* 2019.
ACM: New York, 2019.
18. Gebru T, Morgenstern J, Vecchione B, et al. Datasheets for datasets. Communications of the ACM.
2021;64(12):86-92.
19. Lundberg SM, Lee SI. A unified approach to interpreting model predictions. Advances in Neural
Information Processing Systems. 2017;30.
20. Ribeiro MT, Singh S, Guestrin C. 'Why should I trust you?' Explaining the predictions of any classifier.
Proceedings of KDD 2016. ACM: New York, 2016.
21. Chattopadhay A, Sarkar A, Howlader P, Balasubramanian VN. Grad-CAM++: Generalized
gradient-based visual explanations for deep convolutional networks. WACV 2018. IEEE, 2018.
22. Kim B, Wattenberg M, Gilmer J, et al. Interpretability beyond feature attribution: Quantitative testing
with concept activation vectors (TCAV). Proceedings of ICML 2018. PMLR, 2018.