INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue XII, December 2025
www.ijltemas.in Page 122
Ethical AI Frameworks for Responsible Internal Auditing Practices:
A Conceptual and Theoretical Framework
Dr. Mintu Gogoi
Gargaon College, Simaluguri, Sivasagar (Assam)
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1412000013
Received: 13 December 2025; Accepted: 19 December 2025; Published: 26 December 2025
ABSTRACT
The accelerating integration of Artificial Intelligence (AI) into internal auditing is reshaping assurance practices
through continuous auditing, advanced analytics, and predictive risk assessment. While AI enhances audit
efficiency, coverage, and timeliness, it also introduces significant ethical, accountability, and governance
challenges, including algorithmic opacity, data bias, privacy risks, and the potential erosion of professional
judgment. Addressing these issues is essential for preserving audit integrity and stakeholder trust in increasingly
automated audit environments. This study adopts a conceptual and theory-driven approach, drawing on
Stakeholder Theory, Ethical Decision-Making Theory, and Technology Governance Theory. Through a
systematic synthesis of academic literature, professional auditing standards, and global ethical AI guidelines, the
paper develops the Ethical AI Audit Framework (EAAF). The framework embeds core ethical principles (transparency, fairness, accountability, explainability, privacy, and integrity) across the internal audit lifecycle,
from planning to follow-up. The EAAF emphasizes “human-in-command” oversight, highlights governance
enablers such as ethical review mechanisms and bias audits, and identifies auditor competence as central to
ethical AI outcomes. This study contributes a domain-specific ethical governance model to support responsible,
transparent, and trustworthy AI-enabled internal auditing.
Keywords: Artificial Intelligence, Internal Auditing, Ethics, Accountability, Governance Framework
INTRODUCTION
The accelerating digital revolution has significantly reshaped both the operational and strategic dimensions of
internal auditing. Artificial Intelligence (AI), including machine learning (ML), natural language processing
(NLP), and predictive analytics, has transformed how auditors collect, analyze, and interpret evidence.
Contemporary internal audit functions increasingly rely on intelligent systems capable of detecting anomalies,
assessing internal controls, and evaluating risks in real time (Brown-Liburd, Issa, & Lombardi, 2021). As
organizations integrate AI-driven tools to enhance assurance quality, speed, and precision, concerns regarding
ethical responsibility, transparency, and accountability have become more pronounced (Kokina & Davenport,
2017).
According to the Institute of Internal Auditors (IIA, 2023), internal auditing is “an independent, objective
assurance and consulting activity designed to add value and improve an organization’s operations.” This
definition underscores ethical principles such as integrity, objectivity, and due professional care, principles that
may be challenged when audit judgments are supported or substituted by algorithmic decision-making.
Consequently, the ethical integration of AI is not solely a technological requirement but a fundamental
governance and moral obligation.
Although research on AI in auditing has expanded rapidly, a clear conceptual structure for embedding ethical
principles throughout the AI-enabled audit lifecycle remains lacking. Existing studies have largely focused on
efficiency and automation while neglecting ethical risks such as biased datasets, opaque algorithmic logic, and
ambiguous accountability for AI-generated audit outcomes (Luo et al., 2022). As organizations transition toward
intelligent auditing ecosystems, the absence of a structured ethical framework may undermine audit credibility,
stakeholder trust, and adherence to professional standards.
To address this gap, this paper proposes the Ethical AI Audit Framework (EAAF), a theoretical and ethical
foundation designed to guide auditors, organizations, and regulators in adopting responsible and transparent AI
across all phases of the internal audit process.
Objectives of the Study
The objectives framed for the present study are:
1. To critically examine the ethical, professional, and governance challenges of AI adoption in internal
auditing.
2. To develop a theoretically grounded Ethical AI Audit Framework (EAAF) by integrating Stakeholder,
Ethical Decision-Making, and Technology Governance theories across the internal audit lifecycle.
3. To propose conceptual propositions and a future research agenda to support empirical validation,
standard-setting, and policy development for responsible AI-enabled internal auditing.
RESEARCH METHODOLOGY
This study employs a conceptual, theory-driven, non-empirical methodology to explore ethical AI adoption in
internal auditing. Given the emergent nature of the topic and the limited availability of unified models, the
research adopts a qualitative synthesis design to enable theory building.
A systematic and integrative literature review forms the evidentiary foundation, encompassing peer-reviewed
journals in accounting, auditing, information systems, and AI ethics, alongside professional and policy
documents from the IIA, IFAC, OECD, and the European Commission. This triangulated evidence base ensures
analytical rigor and external validity.
Data were analysed through iterative abstraction and conceptual mapping, aligning ethical AI principles with
audit lifecycle processes, governance enablers, and auditor competencies. Theoretical coherence was achieved
through the integration of Stakeholder, Ethical Decision-Making, and Technology Governance theories, forming
the foundation for constructing the Ethical AI Audit Framework (EAAF).
REVIEW OF LITERATURE
AI and Internal Auditing
The integration of AI into internal auditing represents a paradigm shift in assurance and governance. AI
technologies enable auditors to process vast amounts of data, identify patterns, and detect anomalies far beyond
human capacity (Yoon, Hoogduin, & Zhang, 2015). Applications range from predictive risk assessments and
automated control testing to natural language-based audit documentation and fraud detection. Research indicates
that AI-driven audit processes significantly enhance efficiency, reduce sampling errors, and improve coverage
of complex transactions (Issa, Sun, & Vasarhelyi, 2016).
However, as AI becomes embedded in audit practices, auditors face challenges in understanding algorithmic
behaviour and ensuring that system-generated outputs align with ethical and professional standards. The
black-box nature of many machine learning models complicates the auditor’s ability to interpret, validate, and explain
audit evidence (Lombardi, Bloch, & Vasarhelyi, 2022). This opacity raises ethical concerns regarding
accountability for errors or bias arising from automated systems.
Moreover, the IIA’s (2023) competency frameworks emphasize that auditors must exercise due professional care
and maintain independence. When audit judgments are derived from AI tools developed or maintained by third
parties, ensuring independence and integrity becomes more complex. Hence, scholars argue for frameworks that
ensure explainability, human oversight, and algorithmic governance in audit environments (Davenport & Kirby,
2021).
Ethical Dimensions of AI in Auditing
The ethical implications of AI in auditing extend beyond technical reliability to encompass fairness, privacy,
transparency, and accountability. Algorithmic decisions, especially in areas such as fraud detection or control
evaluation, can inadvertently introduce bias if training data are unrepresentative or reflect historical inequities
(Mittelstadt et al., 2016). Ethical AI, therefore, requires proactive governance mechanisms to identify and
mitigate bias at every stage of the audit process.
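A proactive bias check of this kind can be sketched in a few lines. The example below is a minimal, illustrative disparity test for an AI fraud-flagging model: it compares flag rates across groups and applies a common "four-fifths" disparity threshold. The data, group labels, and the 0.8 cut-off are assumptions for demonstration, not prescriptions of any auditing standard.

```python
# Illustrative bias audit: compare AI flag rates across groups (hypothetical data).

def flag_rate(flags):
    """Proportion of transactions flagged as anomalous (1 = flagged)."""
    return sum(flags) / len(flags)

def disparity_ratio(flags_by_group):
    """Ratio of the lowest to the highest group flag rate.
    Values well below ~0.8 suggest the model treats groups unevenly."""
    rates = {g: flag_rate(f) for g, f in flags_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model outputs for vendors in two regions.
flags = {
    "region_a": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],  # 30% flagged
    "region_b": [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],  # 70% flagged
}
ratio, rates = disparity_ratio(flags)
print(f"flag rates: {rates}, disparity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: escalate to human review per the audit charter.")
```

A real bias audit would add statistical significance testing and examine the training data itself; the point here is only that the check is simple enough to run at every audit phase.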
Auditing ethics traditionally rest on the principles of integrity, objectivity, confidentiality, and professional
behaviour (IFAC, 2022). Integrating AI into audit functions demands reinterpretation of these principles in
technological contexts. For instance, confidentiality must now account for data security and privacy risks
inherent in AI-driven analytics. Similarly, objectivity requires auditors to critically evaluate algorithmic
recommendations rather than accepting them unquestioningly (Luo et al., 2022).
Ethical AI frameworks, such as those proposed by the European Commission (2019) and the OECD (2021),
highlight key principles (transparency, accountability, fairness, and human oversight) that align closely with
professional auditing ethics. Yet, their adaptation to the internal auditing context remains limited. This paper
aims to contextualize these global ethical standards into a domain-specific framework for internal auditing.
Existing Ethical AI Frameworks and Professional Standards
Global institutions have developed foundational ethical AI guidelines. The European Union’s Ethics Guidelines
for Trustworthy AI (2019) outline seven key requirements: human agency, technical robustness, privacy,
transparency, diversity, societal well-being, and accountability. The OECD AI Principles (2021) similarly
emphasize fairness, interpretability, and governance mechanisms to ensure ethical outcomes.
In the auditing field, professional bodies like ICAEW (2021) and IFAC (2022) have begun addressing digital
ethics through guidance on data analytics, cybersecurity, and AI adoption. However, their recommendations
remain fragmented and largely descriptive rather than integrative.
Few academic models explicitly connect these ethical principles to the internal audit lifecycle: planning,
evidence collection, evaluation, reporting, and follow-up. This gap underpins the need for a holistic conceptual
model, one that not only embeds ethical AI principles within audit processes but also aligns them with
governance, professional, and societal expectations.
Theoretical Foundations for Ethical AI in Internal Auditing
Artificial Intelligence (AI) challenges the moral foundations of traditional auditing by redistributing judgment
from human auditors to algorithms. To ensure that AI adoption in internal auditing remains ethically aligned and
professionally accountable, a robust theoretical grounding is required. This section anchors the proposed Ethical
AI Audit Framework (EAAF) in three complementary perspectives (Stakeholder Theory, Ethical Decision-Making
Theory, and Technology Governance Theory), each providing a distinct but interlinked lens for
interpreting responsible AI use in audit practice.
Stakeholder Theory: Embedding Ethical Accountability in Digital Assurance
Stakeholder Theory (Freeman, 1984) asserts that organizations bear moral obligations to a spectrum of
stakeholders whose interests are influenced by corporate actions. In an AI-enabled audit environment,
stakeholders extend beyond shareholders and management to include regulators, employees, customers, and the
broader public who rely on transparent and unbiased financial information.
AI systems can significantly enhance audit quality; however, they also create new ethical vulnerabilities, such
as algorithmic opacity and data bias, that may undermine stakeholder confidence. Embedding stakeholder
considerations into AI governance compels auditors to design and deploy systems that are not only efficient but
also fair, explainable, and socially responsible. Thus, Stakeholder Theory provides the normative backbone of
the EAAF, emphasizing inclusivity, transparency, and trust as guiding principles for AI-assisted auditing.
Ethical Decision-Making Theory: Preserving Auditor Moral Agency
Auditors have traditionally been viewed as moral agents who must exercise professional and ethical judgment.
Rest’s (1986) Ethical Decision-Making Theory delineates four cognitive stages (moral awareness, moral
judgment, moral intent, and moral behaviour) that collectively explain how ethical decisions are formed. Within
AI-augmented auditing, these stages translate as follows:
Moral Awareness: Identifying ethical risks inherent in algorithmic design, data selection, or automated
decisions.
Moral Judgment: Evaluating whether AI recommendations align with fairness, transparency, and
independence.
Moral Intent: Prioritizing ethical obligations over organizational pressures for speed or cost efficiency.
Moral Behavior: Taking corrective actions or overriding AI outputs when ethical violations are detected.
This theoretical lens positions the auditor as a moral gatekeeper who supervises and validates AI decisions. Even
when algorithms process evidence autonomously, ethical accountability ultimately rests with the human auditor.
The EAAF therefore insists on maintaining “human-in-command” oversight, ensuring that technology amplifies,
rather than replaces, moral reasoning within audit judgment.
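The "human-in-command" requirement can be made concrete as a simple decision gate: no AI finding becomes final until a named auditor accepts or overrides it, and every decision is logged so accountability stays traceable to a person. The function and field names below are illustrative assumptions, not a standard API.

```python
# Sketch of a human-in-command gate: AI output alone is never a final audit conclusion.

decision_log = []  # append-only record of human decisions over AI findings

def human_in_command(ai_finding, auditor, accept, rationale):
    """Record the named auditor's decision and mark overridden findings."""
    final = ai_finding if accept else {**ai_finding, "overridden": True}
    decision_log.append({
        "ai_finding": ai_finding,
        "auditor": auditor,
        "accepted": accept,
        "rationale": rationale,
    })
    return final

# The model flags an invoice; the auditor judges it a data artefact and overrides.
finding = {"item": "invoice_4471", "ai_verdict": "anomalous"}
result = human_in_command(finding, auditor="A. Sharma",
                          accept=False, rationale="duplicate scan, not fraud")
print(result)
```

The design choice worth noting is that the override does not delete the AI verdict; both the machine output and the human rationale survive in the log, which is what makes moral behaviour (the fourth of Rest's stages) auditable after the fact.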
Technology Governance Theory: Institutionalizing Responsible AI Oversight
Technology Governance Theory (Weill & Ross, 2004) focuses on how organizations distribute authority and
accountability for technology decisions. Applied to internal auditing, it offers a framework for structuring ethical
control over AI systems through defined policies, decision rights, and monitoring mechanisms. Key governance
principles include:
Defined Accountability: Assigning explicit responsibility for AI outputs and audit interpretations.
Data Stewardship: Maintaining integrity, provenance, and confidentiality of datasets feeding AI
analytics.
Oversight Architecture: Establishing audit committees or ethics boards to evaluate AI performance and
compliance.
Feedback Loops: Continuously reviewing and updating AI models to reflect evolving ethical norms and
regulations.
Technology governance ensures that ethical intentions are institutionalized rather than left to individual
discretion. Within the EAAF, it serves as the structural pillar linking ethical principles with operational audit
processes.
The Ethical AI Audit Framework (EAAF): A Conceptual Model for Responsible Assurance
Framework Overview
Drawing from the above theories, the Ethical AI Audit Framework (EAAF) integrates moral philosophy,
stakeholder accountability, and technological governance into the internal audit lifecycle. The model envisions
auditing as an ethically regulated, technologically intelligent, and socially responsive system. It positions ethical
AI principles as foundational inputs that guide all audit activities toward trustworthy outcomes.
The framework comprises five interdependent layers, forming a continuous ethical loop:
1. Ethical AI Principles: the normative foundation
2. Internal Audit Lifecycle Phases: the operational domain
3. Governance Enablers: institutional safeguards
4. Professional Competence: the human capability layer
5. Ethical Outcomes: intended impact and trust generation
Each layer reinforces the others, ensuring that technological efficiency does not outpace ethical accountability.
Layer One – Ethical AI Principles: The Normative Compass
At its core, the EAAF is anchored in universally recognized ethical AI principles distilled from the EU (2019),
OECD (2021), and IFAC (2022) guidelines:
Transparency: Audit algorithms must be explainable and auditable by human experts.
Fairness: Data selection and model training must avoid systemic bias or discrimination.
Accountability: Responsibility for AI decisions remains explicitly assigned to human auditors.
Privacy and Confidentiality: Data analytics must comply with privacy laws and professional secrecy.
Integrity: All AI tools should reinforce, not compromise, the auditor’s objectivity and independence.
These principles act as the ethical DNA of AI-driven audit processes, directing system design and human
oversight.
Layer Two – Integration across the Internal Audit Lifecycle
Table-1: Ethical considerations permeate every phase of the audit process

Audit Phase | Ethical Focus | AI Consideration
Planning | Evaluate ethical risks in algorithmic scope and data inputs. | Model validation and fairness assessment.
Fieldwork | Ensure informed human supervision during automated evidence collection. | Continuous monitoring of algorithmic performance.
Evaluation | Apply critical judgment to interpret AI outputs. | Explainability and cross-verification of findings.
Reporting | Disclose AI usage, assumptions, and limitations transparently. | Responsible communication of results.
Follow-Up | Conduct ethical post-audits of AI system behaviour. | Continuous learning and ethical refinement.

Source: Author’s Conceptual Compilation (2025)
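The Evaluation-phase idea of cross-verifying AI outputs can be sketched as follows: each AI anomaly flag is re-tested against a transparent, human-readable rule before it enters the audit file. The rule, thresholds, and transaction data below are illustrative assumptions, not an endorsed audit procedure.

```python
# Sketch of evaluation-phase cross-verification of AI anomaly flags.

def transparent_rule(txn):
    """A deterministic check an auditor can explain: round-sum payments,
    or payments just under a hypothetical 10,000 approval limit."""
    return txn["amount"] % 1000 == 0 or 9000 <= txn["amount"] < 10000

def cross_verify(ai_flags, transactions):
    """Label each AI flag as corroborated by the transparent rule,
    or route it to human review."""
    out = {}
    for tid in ai_flags:
        txn = transactions[tid]
        out[tid] = "corroborated" if transparent_rule(txn) else "needs human review"
    return out

transactions = {
    "t1": {"amount": 9500},   # just under the assumed approval limit
    "t2": {"amount": 4321},   # no obvious pattern
}
result = cross_verify(["t1", "t2"], transactions)
print(result)
```

The value of the pattern is epistemic, not statistical: flags the opaque model and the transparent rule agree on can be documented with an explanation, while disagreements are exactly the cases the framework routes to human judgment.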
This integration transforms the audit lifecycle into a closed-loop ethical system, from ethical design to ethical
feedback.
Layer Three – Governance Enablers: Building Institutional Ethics
Governance enablers operationalize moral principles through organizational infrastructure. The EAAF
prescribes:
AI Ethics Committees within audit functions to oversee algorithmic performance.
Bias Audits of data models and decision logs to ensure fairness and compliance.
Policy Charters that codify accountability for AI-based audit outcomes.
Documentation Protocols ensuring transparency of data sources, model assumptions, and human
interventions.
These enablers form the compliance backbone of ethical AI auditing, bridging theory with day-to-day practice.
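A documentation protocol of the kind prescribed above can be as lightweight as a structured log entry: every AI-assisted conclusion records the model version, data provenance, and any human intervention, so decisions remain traceable long after the engagement closes. The schema and field names here are illustrative assumptions, not a mandated standard.

```python
# Sketch of a documentation-protocol record for an AI-assisted audit conclusion.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AuditAIRecord:
    engagement_id: str
    model_version: str
    data_sources: list          # provenance of the datasets feeding the model
    ai_conclusion: str
    human_intervention: str     # "" if the output was accepted as-is
    recorded_at: str

record = AuditAIRecord(
    engagement_id="ENG-2025-014",
    model_version="anomaly-model v2.3",
    data_sources=["GL_extract_Q3.csv", "vendor_master.csv"],
    ai_conclusion="3 journal entries flagged as anomalous",
    human_intervention="1 flag overridden: duplicate scan",
    recorded_at=datetime.now(timezone.utc).isoformat(),
)
# Serialize for the engagement file / AI audit registry.
print(json.dumps(asdict(record), indent=2))
```

Storing these records append-only, alongside the decision logs of the human-in-command gate, is what turns the abstract principle of accountability into reviewable evidence.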
Layer Four – Professional Competence: The Human-Technology Nexus
Ethical AI integration is contingent upon the auditor’s dual fluency in ethics and technology. Professional
competence thus encompasses:
AI Literacy: Understanding the logic and limitations of machine learning systems.
Ethical Reasoning: Applying professional judgment to algorithmic recommendations.
Continuous Learning: Updating knowledge as technology and ethical standards evolve.
Collaborative Oversight: Engaging multidisciplinary teams combining data science and audit expertise.
This layer reinforces that the ethical quality of AI auditing ultimately depends on the ethical quality of auditors
themselves.
Layer Five – Ethical Outcomes: Rebuilding Trust in the Digital Audit Era
The culmination of the EAAF lies in producing ethical outcomes that reinforce stakeholder confidence and
institutional legitimacy. Expected outcomes include:
Enhanced Transparency and Trustworthiness in audit results.
Improved Accountability through documented oversight and ethical traceability.
Societal Confidence in the fairness and impartiality of AI-assisted assurance.
Sustainable Integrity in organizational governance.
These outcomes reflect the ultimate test of responsible AI: its capacity to strengthen, not weaken, ethical
assurance.
Visual Representation: Ethical AI Audit Framework (EAAF)
Figure-1: Ethical AI Audit Framework (EAAF)
Source: Author’s conceptualization (2025)
Figure 1 depicts the interlinked pathways among ethical principles, governance mechanisms, auditor
competence, and audit lifecycle phases, all converging toward the ethical outcomes of transparency, integrity,
and accountability.
Conceptual Propositions
From the EAAF, the following testable propositions are articulated:
P1: Embedding ethical AI principles within audit processes enhances the perceived fairness and
transparency of audit outcomes.
P2: Governance enablers strengthen the relationship between AI adoption and ethical audit compliance.
P3: Auditor competence mediates the link between AI system design quality and audit integrity.
P4: Organizations adopting holistic ethical AI frameworks exhibit higher stakeholder trust and reduced
ethical risk.
These propositions provide fertile ground for future empirical research into responsible AI adoption in auditing.
DISCUSSION
The emergence of Artificial Intelligence (AI) within the audit function has redefined the boundaries of assurance,
risk assessment, and governance. The proposed Ethical AI Audit Framework (EAAF) (see Figure 1) underscores
the convergence between technological innovation and ethical accountability, a balance critical for sustaining
trust in modern audit ecosystems. This discussion situates the EAAF within the broader context of audit
transformation, exploring its theoretical, professional, and institutional implications.
Theoretical Implications
The EAAF aligns with the principles of agency theory, stakeholder theory, and institutional theory.
From an agency perspective, AI enhances the auditor’s capacity to detect misstatements and mitigate information
asymmetry between management and shareholders.
Stakeholder theory reinforces the auditor’s expanded duty toward diverse interest groups (investors, regulators,
and the public), requiring AI systems that prioritize fairness, explainability, and accountability.
Institutional theory frames AI adoption as a legitimacy-seeking process: audit firms must conform to emerging
norms of transparency, algorithmic ethics, and professional integrity to maintain credibility in the digital audit
environment.
Professional Implications
For auditors, the integration of AI demands a paradigm shift from procedural compliance to judgment-based
technological governance. The EAAF advocates for three dimensions of ethical competence:
Technical Literacy: auditors must comprehend AI algorithms, data analytics, and system outputs;
Ethical Literacy: professionals should identify, evaluate, and respond to biases and ethical risks inherent
in automated systems;
Regulatory Literacy: auditors must align AI use with standards like ISA 315 (Revised), ISA 540, and
evolving AI assurance frameworks.
Ethical auditing with AI transforms the auditor’s role into a hybrid function—a fusion of data scientist, ethicist,
and governance specialist. As ICAI and other professional bodies continue to revise standards, auditors’
accountability will increasingly depend on their ability to ensure algorithmic transparency and data integrity.
Institutional and Regulatory Implications
Globally, regulators are moving toward tech-augmented oversight frameworks. The European Union’s AI Act
(2024), the OECD AI Principles, and India’s National Strategy for AI (NITI Aayog) illustrate the trend toward
ethical codification. In this context, the EAAF provides a conceptual roadmap for integrating ethical principles
into audit practice, guiding regulators to:
Define AI audit assurance standards that specify acceptable levels of bias, transparency, and
explainability;
Establish AI audit registries to document algorithmic decisions affecting audit judgments; and
Promote cross-sectoral collaboration between professional accountants, data scientists, and ethicists.
By embedding these elements, auditing transitions from a compliance-driven function to a value-driven ethical
governance mechanism.
Research Agenda
Despite growing literature on AI and auditing, empirical and conceptual gaps persist. To advance the field, future
research can be structured around the following thematic agendas aligned with the Ethical AI Audit Framework
(EAAF).
Table-2: Thematic Research Agenda Aligned with the EAAF

Thematic Area | Key Research Questions | Potential Methodologies
Algorithmic Accountability in Auditing | How can auditors assess and document the decision-making logic of AI tools? | Conceptual modelling; Delphi studies; design science
Ethical Risk Management | What ethical risks emerge from AI-assisted audit analytics, and how can they be mitigated? | Case studies; grounded theory; scenario analysis
Audit Quality and Bias Mitigation | How does AI-driven audit automation influence professional judgment and audit quality? | Experimental studies; behavioral audit research
Regulatory and Standard-Setting Perspectives | What institutional mechanisms ensure responsible AI adoption within auditing frameworks? | Policy analysis; comparative regulatory studies
Auditor Education and Competency Frameworks | What AI-related competencies are required for ethical auditing, and how can they be integrated into ICAI curricula? | Survey research; curriculum design evaluation
Explainable AI in Assurance | How can explainable AI enhance transparency and stakeholder trust in audit outcomes? | Algorithmic audit modelling; machine learning interpretability studies

Source: Author’s conceptual synthesis (2025)
Integrating AI Ethics into Auditing Pedagogy
Academic and professional training institutions should incorporate AI ethics modules within auditing syllabi.
Case-based pedagogy, simulations, and collaborative labs can foster auditor competence in handling ethical
dilemmas in AI use.
Future Empirical Models
Empirical validation of the EAAF could employ structural equation modelling (SEM) or system dynamics
simulations to test causal links between ethical AI governance and audit effectiveness. These studies could
further inform ICAI’s evolving Digital Audit Framework.
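As a sketch of what such empirical validation might look like, the example below tests proposition P3 (auditor competence mediates the link between AI system design quality and audit integrity) on simulated data, using two simple OLS regressions to estimate the indirect effect. All variables, effect sizes, and the two-step estimation approach are synthetic assumptions for illustration; a real study would use SEM with bootstrapped confidence intervals on survey data.

```python
# Illustrative mediation sketch for proposition P3 on simulated data.
import numpy as np

rng = np.random.default_rng(42)
n = 500
design_quality = rng.normal(size=n)
# Assumed data-generating process: competence depends on design quality (path a),
# and audit integrity depends on competence (path b).
competence = 0.8 * design_quality + rng.normal(scale=0.5, size=n)
integrity = 0.5 * competence + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Slope of y regressed on x with an intercept, via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(design_quality, competence)   # design quality -> competence
b = ols_slope(competence, integrity)        # competence -> integrity
indirect_effect = a * b                     # mediated path, P3
print(f"a={a:.2f}, b={b:.2f}, indirect effect={indirect_effect:.2f}")
```

A positive, reliably estimated indirect effect would be consistent with P3; the sketch's only purpose is to show that the EAAF propositions are specific enough to be operationalized and falsified.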
CONCLUSION
AI’s infusion into auditing represents both an opportunity and a moral test for the profession. While algorithmic
tools enhance efficiency, predictive accuracy, and fraud detection, they simultaneously challenge traditional
notions of professional judgment, independence, and responsibility. The Ethical AI Audit Framework (EAAF)
proposed in this study provides a theoretical scaffold for integrating technological capability with ethical
stewardship. The framework emphasizes that responsible auditing with AI is not merely about technological
adoption but about fostering trust, transparency, and accountability in digital decision-making. The auditor of
the future must act not only as an assessor of financial truth but also as a custodian of ethical intelligence,
ensuring that AI serves humanity’s collective interest rather than corporate expediency. Ultimately, the transition to
ethical AI auditing will depend on institutional reform, standard evolution, and interdisciplinary collaboration.
As ICAI and global regulators move toward integrating AI into auditing standards, conceptual frameworks like
EAAF will play a pivotal role in shaping the next generation of audit ethics and reinforcing the credibility of the
profession in an era defined by intelligent automation.
REFERENCES
1. Abbott, L., Daugherty, B., Parker, S., & Peters, G. (2022). Artificial intelligence, auditing, and the ethical
implications of automation. Journal of Accounting Literature, 49(1), 101–119.
https://doi.org/10.1016/j.acclit.2021.10.004
2. AICPA. (2023). Audit data analytics guide: Integrating AI into assurance practices. American Institute
of Certified Public Accountants.
3. Brynjolfsson, E., & McAfee, A. (2017). Machine, platform, crowd: Harnessing our digital future. W. W.
Norton & Company.
4. Brown-Liburd, H., Issa, H., & Lombardi, D. (2021). Behavioral implications of Big Data’s impact on
audit judgment and decision making and future research directions. Accounting Horizons, 35(4), 63–89.
https://doi.org/10.2308/horizons-19-032
5. Cao, M., Chychyla, R., & Stewart, T. (2015). Big data analytics in financial statement audits. Accounting
Horizons, 29(2), 423–429. https://doi.org/10.2308/acch-51068
6. Davenport, T. H., & Kirby, J. (2021). Designing trustworthy AI: A human-centered approach to
developing ethical and robust artificial intelligence systems. MIT Press.
7. European Commission. (2019). Ethics guidelines for trustworthy AI. High-Level Expert Group on
Artificial Intelligence. https://ec.europa.eu/digital-strategy
8. Freeman, R. E. (1984). Strategic management: A stakeholder approach. Pitman.
9. IAASB. (2022). Technology and the future of audit: Discussion paper. International Auditing and
Assurance Standards Board.
10. ICAI (Institute of Chartered Accountants of India). (2024). Digital audit framework and ethical guidelines
for AI-assisted auditing. ICAI Publications.
11. ICAEW. (2021). Artificial intelligence and the future of accountancy. Institute of Chartered Accountants
in England and Wales. https://www.icaew.com
12. IFAC. (2022). International Code of Ethics for Professional Accountants (including International
Independence Standards). International Federation of Accountants. https://www.ifac.org
13. IIA. (2023). International Professional Practices Framework (IPPF). The Institute of Internal Auditors.
https://www.theiia.org
14. Issa, H., Sun, T., & Vasarhelyi, M. A. (2016). Research ideas for artificial intelligence in auditing: The
formalization of audit and workforce supplementation. Journal of Emerging Technologies in Accounting,
13(2), 1–20. https://doi.org/10.2308/jeta-10511
15. Kokina, J., & Davenport, T. (2017). The emergence of artificial intelligence: How automation is
changing auditing. Journal of Emerging Technologies in Accounting, 14(1), 115–122.
https://doi.org/10.2308/jeta-51730
16. Kokina, J., & Davenport, T. (2017). The emergence of artificial intelligence: How automation is
changing auditing. Journal of Accountancy, 224(6), 34–39.
17. Lombardi, D., Bloch, R., & Vasarhelyi, M. (2022). The future of audit: The impact of artificial
intelligence. International Journal of Accounting Information Systems, 45, 100573.
https://doi.org/10.1016/j.accinf.2022.100573
18. Luo, Q., Meng, Q., Cai, D., & Gan, H. (2022). Artificial intelligence in auditing: Ethical challenges and
governance frameworks. Journal of Business Ethics, 178(3), 665–682.
https://doi.org/10.1007/s10551-021-04862-w
19. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms:
Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
20. NITI Aayog. (2023). Responsible AI for all: Indian strategy on ethical artificial intelligence. Government
of India.
21. OECD. (2021). OECD principles on artificial intelligence. Organisation for Economic Co-operation and
Development. https://oecd.ai
22. OECD. (2023). AI principles and policy frameworks for ethical governance. Organisation for Economic
Co-operation and Development.
23. Rest, J. R. (1986). Moral development: Advances in research and theory. Praeger.
24. Sutton, S. G., Holt, M., & Arnold, V. (2016). The role of AIS in organizational learning and ethics:
Toward a smart audit environment. International Journal of Accounting Information Systems, 21(3),
1–15. https://doi.org/10.1016/j.accinf.2016.06.002
25. Tschakert, N., Kokina, J., Kozlowski, S., & Vasarhelyi, M. (2021). The next frontier in auditing:
Artificial intelligence and machine learning. Accounting Horizons, 35(4), 135–156.
https://doi.org/10.2308/AH-2020-051
26. Vasarhelyi, M. A., Kogan, A., & Tuttle, B. (2015). Big data in accounting: An overview. Accounting
Horizons, 29(2), 381–396.
27. Weill, P., & Ross, J. W. (2004). IT governance: How top performers manage IT decision rights for
superior results. Harvard Business School Press.
28. World Economic Forum. (2024). Ethical AI in assurance and financial governance. World Economic
Forum.
29. Yoon, K. P., Hoogduin, L., & Zhang, L. (2015). Big data as complementary audit evidence. Accounting
Horizons, 29(2), 431–438. https://doi.org/10.2308/acch-51076