INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue X, October 2025
Case Study: Empowering Corporate Governance in Healthcare
BPOs with Explainable AI in Bangalore, India
*Dr. Shyam Shukla, Shreedhar Deshmukh
NSB Academy, Bangalore, India
*Corresponding Author
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1410000054
Abstract: Corporate governance is the system that guides how a company operates and makes decisions, ensuring transparency,
fairness, and accountability in its management and leadership practices. This case study investigates how Explainable Artificial
Intelligence (XAI) is influencing corporate governance practices in healthcare Business Process Outsourcing (BPO) firms based
in Bangalore, India. Transparent and interpretable AI systems are increasingly viewed as tools that promote fairness,
accountability, and ethical conduct in decision-making (Adadi & Berrada, 2018). XAI enables managers to understand how
automated decisions are reached, supporting compliance with internal policies and external regulations (Gunning et al., 2019). By
focusing on the intersection of technology and governance, explainability emerges not merely as a technical feature but as a key
governance mechanism that strengthens oversight and builds organizational trust (Doshi-Velez & Kim, 2017). The rapid
expansion of Artificial Intelligence within healthcare BPOs has improved efficiency but has also introduced new challenges
related to transparency, ethical responsibility, and regulatory compliance (Guidotti et al., 2019). In many organizations, AI
systems operate as opaque “black boxes,” offering limited insight into how outputs are generated (Samek et al., 2021). This
opacity can weaken internal control mechanisms and complicate audit processes, especially in healthcare environments that
depend on confidentiality, data accuracy, and ethical accountability. Explainable Artificial Intelligence addresses these challenges
by making algorithmic processes more interpretable for human users. Transparent models allow administrators, compliance
officers, and clients to build confidence in automated decision systems (Miller, 2019). The central research question guiding this
study is: In what ways can XAI enhance corporate governance mechanisms within healthcare BPO organizations in Bangalore?
The study posits that explainability serves a dual purpose: enhancing the technical reliability of AI while reinforcing governance
and ethical integrity. Through an analysis of real-world applications across selected healthcare BPOs, this paper identifies how
explainable systems foster accountability, ensure data integrity, and strengthen stakeholder trust in AI-enabled environments.
Keywords: Explainable AI, interpretability, transparency, corporate governance, accountability
I. Introduction
Background and Context
Strengthening corporate governance is essential for healthcare BPOs operating in a highly regulated and data-sensitive environ-
ment. This case study explores how Explainable Artificial Intelligence (XAI) is enhancing transparency, accountability, and ethi-
cal decision-making in Bangalore’s healthcare outsourcing sector. The analysis highlights practical XAI applications that support
better oversight, risk mitigation, and stakeholder trust. It also emphasizes how technology-enabled governance is shaping sustain-
able performance and compliance readiness in healthcare BPO organizations. The rapid expansion of Artificial Intelligence (AI)
and Machine Learning (ML) has transformed many industries, with healthcare standing out as one of the primary beneficiaries of
these advancements. In particular, healthcare Business Process Outsourcing (BPO) organizations handle an overwhelming vol-
ume of data, ranging from handwritten physician notes and diagnostic test results to radiology reports and insurance claim forms.
Managing such diverse records through traditional manual processes often results in inefficiencies, errors, and delays that can
hinder both patient care and timely insurance adjudication (Kumar et al., 2021).
To address these challenges, AI and ML solutions have been introduced to streamline workflows through automated document
classification, natural language processing (NLP), and intelligent data extraction. While these technologies offer immense poten-
tial, one persistent concern is the lack of transparency in decision-making. Given the sensitive and regulated nature of healthcare
data, it is not enough for AI systems to be accurate; they must also be explainable, fostering trust, ensuring compliance, and
maintaining accountability (Adadi & Berrada, 2018).
Explainable AI (XAI) plays a critical role in this context by making machine decisions understandable to stakeholders. For
healthcare BPOs, where clinicians, insurers, auditors, and patients rely on accurate and justifiable outputs, explainability is not
merely desirable but essential. By shedding light on why a particular classification or prediction was made, XAI strengthens trust,
minimizes risks, and aligns technological efficiency with ethical and regulatory standards.
Purpose and Scope of the Study
This research investigates how AI and XAI can reshape healthcare BPO operations through five in-depth case studies. The focus
lies on understanding how explainability improves classification accuracy, reduces operational and legal risks, supports compli-
ance, and enhances stakeholder trust. In doing so, the study highlights the need to balance automation efficiency with interpreta-
bility, ensuring that AI systems act not as opaque "black boxes" but as transparent and accountable partners in healthcare opera-
tions (Ribeiro, Singh, & Guestrin, 2016).
The study covers both technical and operational dimensions, analyzing AI methodologies such as NLP-driven classification and
semi-supervised learning, alongside XAI approaches like Local Interpretable Model-Agnostic Explanations (LIME). Beyond the
technical perspective, the research also explores the ethical, legal, and organizational implications of implementing XAI in
healthcare BPOs.
Structure of the Paper
The paper is organized as follows: Section II reviews existing literature on AI in healthcare document processing, XAI techniques,
and ethical and regulatory considerations. Section III outlines the research methodology and data collection strategy. The next
section introduces a conceptual model for integrating XAI into healthcare BPO operations, followed by five case studies, each
demonstrating real-world applications of XAI in document processing, compliance monitoring, medical coding, and predictive
workforce analytics. Finally, Section IV synthesizes findings, discusses contributions, and provides recommendations for practice
and future research.
II. Literature Review
AI and Machine Learning in Healthcare Document Processing
Healthcare BPOs traditionally relied on manual document handling, which was time-intensive and error-prone. Recent advances
in AI and ML have revolutionized this domain by enabling automated classification and extraction of valuable insights from un-
structured medical data. Techniques such as Term Frequency-Inverse Document Frequency (TF-IDF) and word embeddings al-
low textual data to be represented in a form that machine learning models can use for classification tasks such as distinguishing
between prescriptions, lab reports, and discharge summaries (Johnson et al., 2020).
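As a concrete illustration of this pipeline, the sketch below trains a TF-IDF-based classifier to distinguish the three document types named above. It is a minimal example under our own assumptions: the sample texts, labels, and choice of logistic regression are invented for illustration and are not drawn from any system described in this study.

```python
# Minimal sketch: TF-IDF + linear classifier for medical document types.
# All sample texts and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "Rx: amoxicillin 500 mg three times daily for 7 days",
    "Hemoglobin 13.2 g/dL, WBC 6.1, platelets within normal range",
    "Patient admitted with pneumonia, treated, discharged in stable condition",
    "Prescription: metformin 850 mg twice daily with meals",
    "Lipid panel: LDL 110 mg/dL, HDL 52 mg/dL, triglycerides 140 mg/dL",
    "Discharge summary: post-operative recovery uneventful, follow-up in 2 weeks",
]
labels = ["prescription", "lab_report", "discharge_summary"] * 2

# TF-IDF turns each document into a sparse weighted term vector;
# the classifier then learns which terms separate the document types.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(docs, labels)

# Should typically favor 'prescription' given the shared 'Rx'/'mg' terms.
print(model.predict(["Rx: atorvastatin 20 mg at bedtime"]))
```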
Despite these improvements, existing AI systems face limitations. Models trained on institution-specific datasets may not general-
ize well across diverse healthcare environments, while subtle contextual differences in clinical documents can lead to misclassifi-
cations. Additionally, the increasing prevalence of scanned documents embedded in electronic health records adds noise and
complexity, requiring more sophisticated preprocessing and learning techniques (Rajkomar et al., 2019).
Table 1: Differences between the traditional approach and the AI-based approach.

Traditional approach: Employees manually enter data, verify patient details, and process claims, leading to high error rates and slow processing.
AI-based approach: Optical Character Recognition (OCR) and Natural Language Processing (NLP) can automatically extract and input data, reducing manual effort and improving efficiency. AI-powered tools can scan medical documents, extract relevant information, and enter it into databases within seconds.

Faster & More Accurate Claims Processing
Traditional approach: Claims verification involves manual checking of patient history, insurance details, and treatment records, leading to delays and potential denials due to errors.
AI-based approach: AI-powered machine learning (ML) algorithms analyze claims data in real time, flagging inconsistencies and automatically approving or escalating claims.

Predictive Analytics for Better Decision-Making
Traditional approach: BPO companies rely on historical data and human judgment to assess risks and detect anomalies.
AI-based approach: AI-powered predictive analytics can forecast claim denials, identify fraud risks, and optimize workforce allocation.

Enhanced Compliance & Reduced Errors
Traditional approach: Compliance with HIPAA, GDPR, and other regulations requires manual audits and reporting, which are time-consuming and prone to errors.
AI-based approach: AI-powered automated compliance monitoring ensures data security, flags regulatory violations, and maintains audit trails for transparency.
Although high precision and recall rates are frequently reported, many of these models fail to provide reasoning behind their out-
puts. This lack of explainability hinders trust, particularly in settings where accountability and verification are paramount.
Explainable AI (XAI) in Healthcare
AI’s “black-box” nature is a significant obstacle in healthcare applications. Explainable AI seeks to resolve this by offering inter-
pretability and transparency in decision-making processes. Model-agnostic techniques, such as LIME, have gained traction be-
cause they can provide local explanations for individual predictions, highlighting the specific words or features that influenced
outcomes (Ribeiro et al., 2016).
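The sketch below shows what a local LIME explanation of a single prediction might look like in code. It assumes the open-source `lime` package and a fitted scikit-learn pipeline `model` exposing `predict_proba` (for instance, the TF-IDF classifier sketched earlier); the class names are our illustrative assumption.

```python
# Minimal sketch: a local LIME explanation for one document.
# Assumes the `lime` package and a fitted sklearn pipeline `model`
# with predict_proba (e.g., the TF-IDF classifier sketched earlier).
from lime.lime_text import LimeTextExplainer

# Class names must match the order of model.classes_.
explainer = LimeTextExplainer(
    class_names=["discharge_summary", "lab_report", "prescription"])

text = "Rx: atorvastatin 20 mg at bedtime"
exp = explainer.explain_instance(
    text, model.predict_proba, num_features=5, top_labels=1)

# Report (word, weight) pairs for the top predicted class; positive
# weights are terms that pushed the prediction toward that class.
label = exp.available_labels()[0]
for word, weight in exp.as_list(label=label):
    print(f"{word:>15s}  {weight:+.3f}")
```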
In healthcare BPO operations, such transparency allows clinicians, auditors, and compliance officers to cross-check AI outputs,
ensuring accuracy and identifying potential biases. Emerging discussions of "XAI 2.0" emphasize moving beyond simple inter-
pretability toward more nuanced explanations that can handle complex temporal and contextual medical data (Holzinger et al.,
2022).
By embedding transparency, XAI helps organizations meet regulatory demands and fosters stakeholder trust, making it indispen-
sable for responsible AI adoption in healthcare.
Healthcare BPO and AI Integration
Healthcare BPOs manage a wide range of tasks, from claims adjudication and patient record management to medical coding and
compliance monitoring. These tasks are traditionally manual, repetitive, and vulnerable to errors. AI-driven automation now of-
fers significant improvements in efficiency, accuracy, and scalability. Techniques such as Optical Character Recognition (OCR)
and NLP enable data extraction and classification with minimal human intervention, while machine learning models provide pre-
dictive analytics for fraud detection and claim denials (Bhardwaj & Ghosh, 2021).
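A minimal sketch of the OCR-plus-extraction step is shown below. It assumes a local Tesseract installation with the `pytesseract` and `Pillow` packages, and the field patterns are invented placeholders rather than patterns used by any firm in this study.

```python
# Minimal sketch: OCR a scanned claim form and pull out a few fields.
# Assumes Tesseract is installed locally with the `pytesseract` and
# `Pillow` packages; field names and patterns are invented.
import re
from PIL import Image
import pytesseract

def extract_claim_fields(image_path: str) -> dict:
    # OCR the scanned page into plain text.
    text = pytesseract.image_to_string(Image.open(image_path))
    # Naive regex patterns for illustration; production systems would
    # use trained NER models and form-layout analysis instead.
    patterns = {
        "patient_id": r"Patient ID[:\s]+([A-Z0-9-]+)",
        "claim_amount": r"Claim Amount[:\s]+\$?([\d,]+\.?\d*)",
        "diagnosis_code": r"Diagnosis[:\s]+([A-Z]\d{2}\.?\d*)",
    }
    return {field: (m.group(1) if (m := re.search(pat, text)) else None)
            for field, pat in patterns.items()}

# Example usage: fields = extract_claim_fields("scanned_claim_page1.png")
```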
Graph 1: Feedback results for the AI software used for medical coding automation.
Graph 2: Feedback results for the AI software used for quality assurance and audit trail generation.
Beyond efficiency, AI integration allows BPOs to reallocate human resources toward higher-value activities such as quality as-
surance and exception handling, improving client satisfaction. Importantly, these digital transformations align with both business
goals and regulatory obligations, underscoring AI’s strategic importance in healthcare BPO growth.
Ethical and Regulatory Considerations
Healthcare data is among the most sensitive information to process, and AI-driven systems must comply with strict regulatory
frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regula-
tion (GDPR). Concerns about data privacy, security breaches, and algorithmic bias remain critical (Voigt & Von dem Bussche,
2017).
Bias within AI models poses particular risks, as skewed data could lead to unfair outcomes in claims processing or patient catego-
rization. XAI mitigates these risks by making decision factors transparent, enabling stakeholders to identify and correct potential
biases. Furthermore, audit trails generated through explainable AI facilitate accountability, providing a safeguard against legal
and compliance failures (Arrieta et al., 2020).
[Graph 2: bar chart of respondent counts (0-60) across the ratings "better", "Extremely Useful", "Good to use", and "Not Useful" for quality assurance and audit trail generation.]
Research Gap and Rationale
Much of the existing research on AI in healthcare emphasizes diagnostic tools or clinical predictive analytics, with relatively little
focus on the operational challenges faced by healthcare BPOs. Where XAI is discussed, it is often framed at a theoretical level
without sufficient case-based analysis. Consequently, there remains a knowledge gap concerning how explainability influences
trust, compliance, and business outcomes in healthcare BPO operations (Holzinger et al., 2022).
This study addresses that gap by presenting multiple empirical case studies that explore the role of XAI in document classifica-
tion, compliance monitoring, medical coding, and workforce optimization. In doing so, it highlights the dual importance of tech-
nical performance and interpretability for healthcare BPO transformation.
III. Research Methodology
Research Design and Approach
This study adopts a qualitative multiple case study design to examine how Explainable AI (XAI) is integrated into healthcare
BPO operations. A case study approach is particularly suitable for understanding complex, real-world phenomena that cannot be
fully captured through purely quantitative techniques (Yin, 2018). By analyzing five distinct cases, this research explores varia-
tions in healthcare workflows, document types, and AI implementations, thus ensuring depth and breadth in findings.
Triangulation was applied through the use of multiple data sources: academic literature, operational documents, system reports,
and interviews with key stakeholders. This mix of theoretical and practical inputs supports both reliability and validity of the re-
search outcomes.
Data Collection
Primary data collection involved semi-structured interviews with BPO managers, AI developers, and medical coders. These inter-
views provided insights into system usability, integration challenges, and perceptions of explainability. Focus groups further en-
riched the dataset by encouraging participants to reflect collectively on implementation outcomes.
Secondary data included AI system outputs, accuracy reports, audit logs, and compliance records. Such artifacts offered objective
measures of performance and explainability, complementing the qualitative insights gathered from participants.
Data Analysis
Qualitative data were analyzed using thematic coding to identify recurring patterns, including trust, compliance, usability, and
operational efficiency. Quantitative data such as accuracy scores, error rates, and processing times were used to benchmark sys-
tem performance. XAI outputs generated by tools like LIME were evaluated based on clarity and usefulness to stakeholders, es-
pecially in decision verification. This mixed-methods approach allowed for nuanced interpretation and cross-validation of find-
ings.
Conceptual Model Development
Components of the XAI-Driven Healthcare BPO Model
The conceptual model developed in this study illustrates how XAI can be embedded into healthcare BPO operations. The work-
flow begins with raw medical documents, often in PDF or scanned formats, which are preprocessed to extract clean text. Machine
learning algorithms, particularly NLP-based classifiers, then categorize the documents into clinically or administratively relevant
groups.
What distinguishes this model is the explainability layer. Here, XAI methods generate explanations alongside classifications,
highlighting the specific terms or features that influenced a decision. The outputs therefore include both the document category
and a justification for why that classification was made.
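One simple way to realize this explainability layer, sketched below under our own assumptions, is to pair each prediction with the terms whose TF-IDF weight times class coefficient contributed most to it. A deployed system might substitute LIME or SHAP for this coefficient view, but the rest of the workflow would stay the same.

```python
# Minimal sketch of the explainability layer: return each predicted
# category together with the terms that contributed most to it.
# Assumes a fitted TfidfVectorizer and a fitted multiclass linear
# classifier (e.g., LogisticRegression); all names are illustrative.
import numpy as np

def classify_with_rationale(texts, vectorizer, clf, top_k=3):
    X = vectorizer.transform(texts)
    preds = clf.predict(X)
    terms = np.array(vectorizer.get_feature_names_out())
    results = []
    for i, pred in enumerate(preds):
        class_idx = list(clf.classes_).index(pred)
        # Per-term contribution = tf-idf weight * class coefficient.
        contrib = X[i].toarray().ravel() * clf.coef_[class_idx]
        top = terms[np.argsort(contrib)[::-1][:top_k]]
        results.append({"category": pred, "supporting_terms": list(top)})
    return results

# Example (with a vectorizer/classifier fitted as sketched earlier):
# classify_with_rationale(["Rx: atorvastatin 20 mg"], tfidf, logreg)
# -> [{'category': 'prescription', 'supporting_terms': ['rx', 'mg', ...]}]
```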
Integration of Explainability
The integration of XAI ensures that every automated decision is accompanied by a rationale understandable to human reviewers.
Using LIME, for example, stakeholders can view highlighted sections of a document that influenced the system’s categorization.
These explanations support iterative feedback loops where errors are identified, corrected, and used to retrain models for continu-
ous improvement.
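The feedback loop could be organized along the lines of the sketch below; the `FeedbackLoop` class and its methods are hypothetical names we introduce for illustration, not part of any system described in this study.

```python
# Minimal sketch of the human-in-the-loop feedback cycle: reviewer
# corrections are folded back into the training set and the model is
# periodically refit. Class and method names are our own invention.
class FeedbackLoop:
    def __init__(self, model, texts, labels):
        self.model = model          # e.g., a fitted sklearn pipeline
        self.texts = list(texts)
        self.labels = list(labels)
        self.pending = []           # corrections awaiting retraining

    def record_correction(self, text, corrected_label):
        # A reviewer disagreed with the model; keep the verified example.
        self.pending.append((text, corrected_label))

    def retrain(self):
        # Fold corrections into the training data and refit the model.
        for text, label in self.pending:
            self.texts.append(text)
            self.labels.append(label)
        self.pending.clear()
        self.model.fit(self.texts, self.labels)
        return self.model
```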
Business and Operational Impact
Embedding explainability into AI-driven workflows yields several advantages: reduced manual workload, faster processing times,
fewer classification errors, and improved regulatory readiness. Equally important, explainability fosters trust among clients, regu-
lators, and internal teams by making AI outputs auditable and transparent (Holzinger et al., 2022).
Case Study 1: AI-based Document Classification for Insurance Claims
Case Background
The first case involves a healthcare BPO handling over 20,000 pages of medical records for insurance claim adjudication. Manual
processing was inefficient, error-prone, and costly. The organization adopted AI-based classification tools to automate sorting
while incorporating explainability for regulatory and client trust.
Implementation and Results
The AI solution used NLP preprocessing (TF-IDF) and supervised classifiers trained on labeled data. Performance metrics indi-
cated high accuracy across multiple document types. The integration of LIME explanations enabled auditors and clinicians to ver-
ify AI outputs, reducing concerns about errors and liability.
Lessons Learned
The system reduced turnaround times and manual labor requirements significantly. More importantly, explainability was critical
in enhancing trust and ensuring compliance. Stakeholders reported increased confidence in AI-assisted processes, though chal-
lenges in handling heterogeneous data remained.
Case Study 2: Semi-Supervised Learning for Multi-Institutional Data
Context
Healthcare BPOs often process documents from multiple hospitals with varied formats, vocabularies, and layouts. Purely super-
vised models trained on one dataset typically perform poorly when applied to another.
Methodology
A semi-supervised learning approach using Laplacian Support Vector Machines (SVM) was applied. This framework utilized
both labeled and unlabeled data, improving adaptability to diverse institutional records.
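Laplacian SVMs are not available in mainstream libraries such as scikit-learn, so the sketch below substitutes LabelSpreading, a related graph-Laplacian technique, to illustrate how labeled and unlabeled documents can be combined; the sample texts and label encoding are invented for illustration.

```python
# Illustrative sketch only: scikit-learn has no Laplacian SVM, so this
# uses LabelSpreading, a related graph-Laplacian method, to show how
# unlabeled documents from other institutions can still inform the model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

texts = [
    "Rx: amoxicillin 500 mg three times daily",            # labeled
    "Lab report: hemoglobin 13.2 g/dL, platelets normal",  # labeled
    "Prescription: metformin 850 mg twice daily",          # unlabeled
    "Lab panel: platelets within normal range",            # unlabeled
]
# -1 marks unlabeled examples, per scikit-learn's convention.
labels = [0, 1, -1, -1]  # 0 = prescription, 1 = lab_report

X = TfidfVectorizer().fit_transform(texts).toarray()  # dense for the graph
model = LabelSpreading(kernel="knn", n_neighbors=2).fit(X, labels)

# transduction_ holds labels inferred for the unlabeled rows as well.
print(model.transduction_)  # e.g., [0 1 0 1]
```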
Outcomes
Compared to traditional supervised models, the semi-supervised approach delivered higher accuracy and greater scalability across
multiple datasets. Preprocessing pipelines for noisy scanned documents further enhanced model robustness. The case demon-
strates how semi-supervised learning can address real-world heterogeneity in healthcare BPO operations.
Case Study 3: XAI for Compliance and Audit Trails
Organizational Challenge
Healthcare BPO firms must demonstrate compliance with regulations such as HIPAA and GDPR. AI adoption introduces risks if
decision-making processes are opaque, as auditors and regulators require accountability.
Implementation
The organization implemented XAI-enabled monitoring systems that generated transparent audit trails. Explanations clarified
why a specific claim was flagged for review or escalation, reducing ambiguity in compliance workflows.
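An audit trail of this kind can be as simple as an append-only JSON Lines log in which every decision is stored with its explanation and model version, as in the sketch below; all field names and values are illustrative assumptions rather than the firm's actual schema.

```python
# Minimal sketch: every automated decision is written to an append-only
# audit log together with its explanation, so reviewers and regulators
# can trace why a claim was flagged. Field names are illustrative.
import json
import time
import uuid

def log_decision(log_path, claim_id, decision, explanation, model_version):
    record = {
        "event_id": str(uuid.uuid4()),        # unique, for cross-referencing
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "claim_id": claim_id,
        "decision": decision,                 # e.g., "flagged_for_review"
        "explanation": explanation,           # e.g., top contributing features
        "model_version": model_version,       # ties the decision to a model
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")    # JSON Lines: one event per line

log_decision(
    "audit_trail.jsonl", "CLM-2025-0042", "flagged_for_review",
    {"billed_amount": "+0.42", "provider_history": "+0.31"}, "v1.3.0",
)
```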
Impact
Audit readiness improved significantly, with faster issue resolution and fewer compliance violations. Stakeholder trust increased,
as regulators and clients could clearly trace AI decisions. This case underscores the importance of explainability in managing le-
gal and regulatory risks.
Case Study 4: Enhancing Medical Coding with XAI
Context
Medical coding is a cornerstone of insurance claims and reimbursement, yet manual coding is prone to errors that can lead to rev-
enue losses.
AI Techniques
An AI-driven coding system was implemented using NLP and classification models trained on large coding datasets. XAI meth-
ods explained why specific codes were assigned, highlighting influential terms or document sections.
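A small illustration of how such explanations might be surfaced to coders is sketched below: influential terms, as returned by an explainer such as LIME or the coefficient-based rationale sketched earlier, are marked inline in the note. The helper function, note text, markers, and example code are invented for illustration.

```python
# Minimal sketch: after a code is predicted, mark the influential terms
# directly in the document text so a coder can verify at a glance.
# `supporting_terms` would come from an explainer; the code is invented.
import re

def highlight_terms(text: str, supporting_terms: list[str]) -> str:
    for term in supporting_terms:
        # Wrap each influential term in >> << markers, case-insensitively.
        text = re.sub(rf"\b({re.escape(term)})\b", r">>\1<<",
                      text, flags=re.IGNORECASE)
    return text

note = "Patient presents with type 2 diabetes mellitus, poorly controlled."
print("Assigned code: E11.65 (assumed for illustration)")
print(highlight_terms(note, ["diabetes mellitus", "poorly controlled"]))
# -> Patient presents with type 2 >>diabetes mellitus<<, >>poorly controlled<<.
```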
Results
The system reduced errors and improved processing speed. Coders appreciated the transparency, as explanations enabled quick
verification and correction where necessary. This led to smoother claims processing and improved revenue recovery.
Case Study 5: Predictive Analytics and Workforce Optimization
Background
Fraud detection and workforce allocation remain persistent challenges in claims processing. Manual reviews often miss fraudu-
lent patterns or misallocate staff resources.
AI and XAI Application
Real-time predictive models analyzed claims data to detect anomalies. XAI components provided reasoning behind flagged risks,
supporting human decision-making.
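The study does not name the underlying models, so the sketch below uses scikit-learn's IsolationForest as a stand-in anomaly detector and attaches a crude z-score-based "reason" to each flagged claim; the features, synthetic data, and contamination setting are our assumptions.

```python
# Minimal sketch: flag anomalous claims and attach a simple reason.
# IsolationForest stands in for the study's unnamed models; the
# features and synthetic data below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: billed amount (USD), procedures per claim, days to submission.
normal = rng.normal([800, 3, 10], [200, 1, 3], size=(500, 3))
claims = np.vstack([normal, [[9500, 14, 1]]])  # one suspicious claim

detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = detector.predict(claims)  # -1 = anomaly, 1 = normal

names = ["billed_amount", "procedure_count", "days_to_submission"]
means, stds = claims.mean(axis=0), claims.std(axis=0)
for i in np.where(flags == -1)[0]:
    # Crude "reason": the feature furthest from the mean, in z-scores.
    z = (claims[i] - means) / stds
    top = names[int(np.argmax(np.abs(z)))]
    print(f"claim {i}: flagged, most unusual feature = {top}")
```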
Outcomes
The system improved fraud detection rates while minimizing false positives. Workforce allocation became more efficient, with
human resources deployed strategically. Clients reported higher satisfaction due to improved accuracy and faster turnaround
times.
IV. Conclusion and Recommendations
The five case studies collectively demonstrate that Explainable Artificial Intelligence (XAI) plays a transformative role in
strengthening healthcare BPO operations by improving decision accuracy, operational efficiency, regulatory compliance, and or-
ganizational trust. By making AI processes more transparent and interpretable, XAI converts automated systems from opaque
“black boxes” into accountable and trustworthy decision-support tools (Adadi & Berrada, 2018). This transformation enhances
not only operational performance but also ethical and governance standards within healthcare organizations (Guidotti et al.,
2019). The study contributes to the theoretical discourse by presenting a conceptual framework for integrating XAI into
healthcare BPO workflows, aligning technological innovation with principles of accountability and corporate governance (Doshi-
Velez & Kim, 2017). Practically, it offers actionable guidance for organizations aiming to balance automation efficiency with
transparency and compliance. Although the findings provide valuable insights, the study is limited by its case-specific focus and
the continuously evolving regulatory context surrounding AI use in healthcare (Samek et al., 2021). Future research should ex-
plore larger datasets and adopt longitudinal designs to refine existing XAI frameworks while examining their integration with
emerging technologies such as federated learning and adaptive ethical AI systems (Gunning et al., 2019). To ensure long-term
sustainability and credibility, healthcare BPO organizations should prioritize investments in explainable AI infrastructures and
encourage cross-sector collaboration among developers, healthcare professionals, and policymakers. Such partnerships can foster
systems that are not only efficient and innovative but also transparent, fair, and ethically grounded, strengthening both govern-
ance and trust in an increasingly data-driven healthcare ecosystem.
References
1. Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160. https://doi.org/10.1109/ACCESS.2018.2870052
2. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
3. Bhardwaj, P., & Ghosh, S. (2021). AI-driven automation in healthcare outsourcing: Opportunities and challenges. Journal of Business Process Management, 27(3), 456-472.
4. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
5. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1-42. https://doi.org/10.1145/3236009
6. Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI-Explainable artificial intelligence. Science Robotics, 4(37), eaay7120. https://doi.org/10.1126/scirobotics.aay7120
7. Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2022). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(2), e1452.
8. Johnson, A. E., Pollard, T. J., Shen, L., Lehman, L. H., Feng, M., Ghassemi, M., & Mark, R. G. (2020). MIMIC-III, a freely accessible critical care database. Scientific Data, 3, 160035.
9. Kumar, V., Singh, A., & Gupta, R. (2021). Leveraging artificial intelligence for smarter healthcare BPO operations. International Journal of Health Information Systems, 17(4), 233-248.
10. Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38. https://doi.org/10.1016/j.artint.2018.07.007
11. Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347-1358.
12. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.
13. Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K.-R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247-278. https://doi.org/10.1109/JPROC.2021.3060483
14. Voigt, P., & Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR): A practical guide. Cham: Springer.