Page 596
www.rsisinternational.org
INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue IV, April 2026
An Integrated Ethical Governance Framework for AI-Driven
Business Decision-Making: AIIA, Explainable AI Contracts, Ethics-
By-Design, and Algorithmic Sustainability Indices
Chinoso Job, Chukwudi Jeremiah Paul, Ifesinachi Ignatius Nwankwo, Chukwu Nelson Okwudi
Department of Information Technology, University of Port Harcourt
DOI: https://doi.org/10.51583/IJLTEMAS.2026.150400055
Received: 11 April 2026; Accepted: 16 April 2026; Published: 07 May 2026
ABSTRACT
Existing AI regulatory frameworks, including the EU AI Act, the General Data Protection Regulation (GDPR),
and industry standards such as IEEE Ethically Aligned Design and ISO/IEC 42001, have demonstrated structural
inadequacy in preventing ethical failures arising from AI-driven business decision-making. Responding to these
documented deficiencies, this paper proposes and evaluates an Integrated Ethical AI Governance Framework
(IEAGF) comprising four novel, complementary mechanisms: (1) Pre-Deployment AI Impact Assessments
(AIIA), which mandate bias auditing, fairness evaluation, and stakeholder impact mapping before system
deployment; (2) Explainable AI with Algorithmic Contracts (XAI-AC), which legally bind AI systems to defined
behavioural parameters and transparency obligations; (3) Ethics-by-Design (EbD) Frameworks, which embed
ethical principles, fairness constraints, and stakeholder inclusivity into AI development lifecycles; and (4)
Algorithmic Sustainability Indices (ASI), which introduce standardised metrics for quantifying the energy
consumption, socioeconomic impact, and renewable infrastructure usage of AI deployments. The IEAGF is
evaluated against established practicability criteria across sectors including finance, healthcare, and logistics.
Feasibility analysis demonstrates that the framework is implementable across organisational scales, aligns with
existing ESG disclosure obligations, and provides regulators with enforceable technical benchmarks absent from
current frameworks. The IEAGF represents a shift from reactive compliance to preventive ethical governance,
grounded in both technical operationalisability and institutional accountability.
Keywords: AI governance, ethics-by-design, explainable AI, AI impact assessment, algorithmic sustainability, GDPR, ESG, ethical AI, algorithmic contracts, responsible AI deployment
INTRODUCTION
The proliferation of AI-driven decision-making systems in commercial, financial, and public-sector contexts has
exposed a critical governance gap: the absence of preventive ethical infrastructure operating at the design and
pre-deployment stages of AI development. A companion analysis [1] establishes, through rigorous case-study examination of Clearview AI, Facebook's advertising algorithm, and Uber's management system, that documented harms from these systems were structurally enabled by reactive governance mechanisms that activate only after deployment, once harm has already occurred.
This reactive-compliance paradigm, embodying the principle of "deploy first, regulate later", is insufficient for AI systems that can generate harm at scale before any human review is triggered. The GDPR's "right to explanation" (Article 22), the EU AI Act's risk-classification requirements, and the IEEE's advisory guidelines all operate as post-facto corrections to systems that have already been designed, trained, and deployed. None mandates that ethical considerations be integrated into the algorithmic development process itself.
This paper addresses this gap by proposing the Integrated Ethical AI Governance Framework (IEAGF), a four-component architecture designed to:
Intercept ethical failures at the design and pre-deployment stages;
Provide legally enforceable mechanisms for algorithmic accountability and transparency;
Embed fairness and inclusivity into AI development practices through structured ethical design;
Quantify and regulate the environmental and social sustainability impacts of AI systems.
Each component is grounded in documented pilot implementations and evaluated for practicability, scalability,
and regulatory complementarity.
Framework Architecture: IEAGF Overview
The IEAGF is structured as a lifecycle-spanning governance architecture operating across four functional stages: pre-development, development, pre-deployment, and post-deployment monitoring. Fig. 1 presents the complete framework architecture.
Fig. 1. Integrated Ethical AI Governance Framework (IEAGF) four-component lifecycle architecture.
Component 1: Pre-Deployment AI Impact Assessments (AIIA)
A. Rationale
Environmental Impact Assessments (EIAs) have been a prerequisite for major infrastructure projects in most
jurisdictions for decades. The AI Impact Assessment (AIIA) applies this proven preventive governance logic to
AI systems, mandating structured evaluation of potential harms before deployment authorisation [2].
B. AIIA Process Architecture
The AIIA comprises three mandatory evaluation modules, as illustrated in Fig. 2: Module 1, algorithmic bias testing (disparate impact analysis; AUC by subgroup); Module 2, a fairness audit; and Module 3, stakeholder impact mapping. The modules are connected by a feedback loop to the development stage.
Fig. 2. AIIA three-module evaluation process.
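The bias-testing module's two quantitative checks can be sketched in code. The following is a minimal, illustrative implementation, not part of the AIIA specification: the function names, the toy data, and the 0.8 "four-fifths" threshold commonly used in disparate impact analysis are all assumptions.

```python
from itertools import product

def selection_rate(decisions):
    """Fraction of positive (e.g. approve/hire) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group, reference):
    """Ratio of a group's selection rate to a reference group's.
    Values below 0.8 fail the common 'four-fifths' rule of thumb."""
    return selection_rate(group) / selection_rate(reference)

def auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive example
    is scored above a randomly chosen negative one (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos_scores, neg_scores))
    return wins / (len(pos_scores) * len(neg_scores))

# Toy audit: decisions and model scores split by demographic subgroup.
decisions_a = [1, 1, 1, 0]   # 75% selected
decisions_b = [1, 0, 0, 0]   # 25% selected
di = disparate_impact_ratio(decisions_b, decisions_a)
print(f"disparate impact ratio: {di:.2f}")   # 0.33 -> fails the 0.8 rule

auc_b = auc(pos_scores=[0.9, 0.8], neg_scores=[0.3, 0.7])
print(f"subgroup AUC: {auc_b:.2f}")          # 1.00 on this toy subgroup
```

Computing these statistics per protected subgroup, rather than only in aggregate, is what distinguishes the AIIA's bias-testing module from conventional model validation.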
C. Implementation Evidence and Feasibility
AIIA protocols have been piloted in financial services (Goldman Sachs applied bias testing to its Apple Card
credit algorithm following gender discrimination complaints) and healthcare AI deployments under NHS
governance requirements. Feasibility challenges for small and medium enterprises (SMEs) include technical
capacity constraints and audit costs. The paper recommends government-subsidised AIIA support programmes
for SMEs operating in high-risk AI categories, analogous to existing regulatory compliance support schemes [3].
D. Regulatory Complementarity
The AIIA directly addresses the EU AI Act's limitation of not mandating pre-deployment bias audits for high-risk systems. By formalising the AIIA as a regulatory prerequisite, it converts the Act's risk-classification framework from a descriptive taxonomy into an actionable governance instrument [12].
Component 2: Explainable AI with Algorithmic Contracts (XAI-AC)
A. Rationale
GDPR Article 22 establishes the right of individuals to contest automated decisions but provides no mechanism
for enforcing meaningful explanation. Black-box models—particularly deep neural networks operating in
financial credit, employment screening, and insurance pricing—routinely make consequential decisions that
affected individuals cannot understand or contest [4].
B. Algorithmic Contract Architecture
An Algorithmic Contract (AC) is a legally binding instrument specifying: (i) the model’s intended scope and
prohibited applications; (ii) minimum interpretability standards (e.g., SHAP value disclosure, feature importance
thresholds); (iii) performance parameters (accuracy, false positive rate by demographic subgroup); and (iv) audit
rights for regulatory bodies and affected individuals [5].
C. XAI Technical Requirements
The XAI-AC framework requires that AI systems deployed in contexts with legal or financial consequences
prioritise interpretable model architectures (decision trees, generalised additive models, logistic regression)
where feasible, or provide post-hoc explanations (SHAP, LIME, ANCHOR) where deep learning is operationally
necessary. Model cards (structured documentation of model performance, limitations, and intended use) become legally required rather than optional disclosures [6].
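The model-card requirement can be enforced programmatically: a deployment gate refuses any model whose documentation is missing a mandated section. A minimal sketch, assuming only the three section names drawn from the model-card description above; the gate logic and data layout are illustrative:

```python
# Mandated model-card sections, per the XAI-AC documentation requirement.
REQUIRED_SECTIONS = ("performance", "limitations", "intended_use")

def validate_model_card(card: dict) -> list:
    """Return the mandated sections that are missing or empty."""
    return [s for s in REQUIRED_SECTIONS if not card.get(s)]

card = {
    "performance": {"auc": 0.87, "fpr_by_group": {"a": 0.06, "b": 0.09}},
    "intended_use": "pre-screening support, with human review of rejections",
    # "limitations" omitted, so the deployment gate must fail this card
}
missing = validate_model_card(card)
print(missing)   # ['limitations']
if missing:
    print("deployment blocked: incomplete model card")
```

Because the check is mechanical, it can run in a CI/CD pipeline, turning the legal obligation into a release-blocking test rather than a manual review step.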
TABLE I. Algorithmic Contract: Mandatory Specification Fields

Field | Specification Requirement
Scope | Defined use cases; prohibited applications
Interpretability | SHAP / LIME explanations; minimum threshold
Fairness | Accuracy and false positive rate by demographic group
Audit rights | Regulator access; individual right to explanation
Review cycle | Minimum annual performance and fairness re-certification
Liability | Named responsible officer; escalation procedures
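One way to make the fields of Table I machine-checkable is to encode the contract as a structured object and compare monitored metrics against it at each review cycle. The following is a sketch under stated assumptions: the field names, thresholds, and metric keys are illustrative, not a standardised schema.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicContract:
    scope: list                     # permitted use cases
    prohibited: list                # forbidden applications
    min_explained_decisions: float  # share of decisions with SHAP/LIME output
    max_subgroup_fpr_gap: float     # fairness: allowed FPR gap across groups
    review_months: int = 12         # re-certification cycle
    responsible_officer: str = ""   # named liability holder

    def check(self, metrics: dict) -> list:
        """Return a list of violation messages (empty list = compliant)."""
        violations = []
        if metrics["explained_share"] < self.min_explained_decisions:
            violations.append("interpretability below contracted minimum")
        rates = metrics["fpr_by_group"].values()
        gap = max(rates) - min(rates)
        if gap > self.max_subgroup_fpr_gap:
            violations.append(f"subgroup FPR gap {gap:.2f} exceeds contract")
        if metrics["use_case"] in self.prohibited:
            violations.append("deployed in prohibited application")
        return violations

contract = AlgorithmicContract(
    scope=["consumer credit scoring"],
    prohibited=["employment screening"],
    min_explained_decisions=0.95,
    max_subgroup_fpr_gap=0.05,
    responsible_officer="Chief AI Risk Officer",
)
audit = contract.check({
    "use_case": "consumer credit scoring",
    "explained_share": 0.97,
    "fpr_by_group": {"group_a": 0.06, "group_b": 0.14},  # gap of 0.08
})
print(audit)   # ['subgroup FPR gap 0.08 exceeds contract']
```

An auditor or regulator exercising the contract's audit rights could run exactly this comparison against the operator's monitoring logs, which is what gives the instrument its enforceability.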
Component 3: Ethics-by-Design (EbD) Frameworks
A. Rationale
The prevailing compliance-first paradigm treats ethical requirements as constraints added to AI systems after
development, typically in response to regulatory requirements or public controversy [7]. Ethics-by-Design (EbD)
inverts this sequence, embedding ethical principles, fairness constraints, and stakeholder considerations into the
earliest stages of AI system specification and design [8].
B. EbD Implementation Architecture
Fig. 3 presents the EbD implementation architecture, which comprises five sequential stages:
1. Diverse Design Team (technical + legal + ethical + affected communities);
2. Fairness Constraints Specification (before model training);
3. Counterfactual Reasoning (simulate unintended outcomes; iterate);
4. Multi-Stakeholder Review (before development completion);
5. Continuous Ethical Monitoring (post-deployment drift detection).
Fig. 3. Ethics-by-Design implementation architecture.
EbD operationalises three key design principles: (i) Value Alignment: ensuring the AI system's objective function reflects the values of all affected stakeholders, not only the deploying organisation; (ii) Inclusivity: ensuring training data and system design processes actively include underrepresented communities; and (iii) Robustness: ensuring the system behaves ethically under distributional shift and adversarial conditions [8].
C. Professional Standards Alignment
EbD is directly supported by three major professional standards: IEEE Ethically Aligned Design (2019), which advocates for human wellbeing as a primary design criterion; the ACM Code of Ethics and Professional Conduct; and ISO/IEC 42001 (AI Management Systems) [9]. The IEAGF elevates these advisory standards into required design documentation, creating an audit trail that regulatory bodies can verify.
Component 4: Algorithmic Sustainability Indices (ASI)
A. Rationale
The training and inference operations of large-scale AI systems impose significant environmental costs that current governance frameworks universally ignore. Training GPT-3 consumed an estimated 1,287 MWh of electricity, generating approximately 552 tonnes of CO2 equivalent [10]. At enterprise scale, the cumulative environmental impact of commercial AI deployment represents a material sustainability risk that is absent from corporate ESG disclosures [11].
B. ASI Metric Architecture
The ASI defines three quantifiable indices:
TABLE II. Algorithmic Sustainability Index Metric Definitions

Index | Definition | Measurement Protocol
Energy Efficiency Index (EEI) | Measures the computational performance of an AI system relative to its energy consumption. | Calculated as performance output per kWh; assessed during both training and inference stages; results benchmarked against established industry baselines.
Socioeconomic Impact Index (SII) | Evaluates the economic and social outcomes of AI deployment on stakeholders and communities. | Measured through indicators such as net employment changes, wage equity, and local economic impact; data collected using structured stakeholder economic surveys.
Renewable Infrastructure Index (RII) | Measures the extent to which AI compute runs on renewable energy. | Percentage of AI compute infrastructure powered by certified renewable energy sources.

C. ESG Integration and Regulatory Case
ASI metrics map directly onto existing ESG disclosure frameworks (GRI Standards, SASB, TCFD), enabling integration into annual corporate sustainability reports. The business case for ASI adoption is strengthened by investor demand for AI-related ESG disclosure, emerging mandatory climate risk reporting requirements, and reputational risk management [11]. Regulators can incorporate ASI thresholds into AI system authorisation criteria, creating incentives for energy-efficient model architectures and green compute infrastructure investments.
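The EEI and RII of Table II reduce to simple ratios once energy and output are metered. A minimal illustration follows; the function names and sample figures are assumptions for demonstration, and a real deployment would substitute metered values and industry baselines:

```python
def energy_efficiency_index(useful_outputs: float, kwh: float) -> float:
    """EEI: performance output (e.g. inferences served) per kWh consumed."""
    return useful_outputs / kwh

def renewable_infrastructure_index(renewable_kwh: float,
                                   total_kwh: float) -> float:
    """RII: share of compute energy from certified renewable sources."""
    return renewable_kwh / total_kwh

# Toy figures for one reporting period of a deployed model.
inferences = 12_000_000   # inferences served over the period
energy_kwh = 4_000        # metered training + inference energy
renewable_kwh = 3_000     # portion matched by certified renewables

eei = energy_efficiency_index(inferences, energy_kwh)
rii = renewable_infrastructure_index(renewable_kwh, energy_kwh)
print(f"EEI: {eei:,.0f} inferences/kWh")   # EEI: 3,000 inferences/kWh
print(f"RII: {rii:.0%}")                   # RII: 75%
```

Because both indices are dimensionless or per-kWh ratios, they can be benchmarked across organisations and reported period-over-period in the ESG disclosures described above; the SII, by contrast, depends on survey instruments and cannot be reduced to a single metered ratio.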
Framework Evaluation
A. Practicability Assessment
Component-level feasibility by sector and organisation scale is summarised in Table III.
B. Comparison with Current Frameworks
As shown in Fig. 4 and Table III, the IEAGF substantially improves governance coverage across all four identified failure dimensions relative to current regulatory and industry frameworks. The most significant improvement is in sustainability governance (0 to 5), where existing frameworks offer no coverage. Regulatory fragmentation is addressed through the AIIA's cross-jurisdictional applicability and the Algorithmic Contract's legal enforceability, though full resolution requires international treaty-level coordination beyond any single national framework.
Fig. 4. Comparative coverage of current frameworks versus IEAGF across four failure dimensions (coverage score, 0 to 5).
TABLE III. IEAGF Component Feasibility Assessment by Sector and Organisation Scale

Component | Large Enterprises | SMEs | Healthcare | Finance
AIIA | Highly feasible; existing risk teams | Requires support; cost barrier | Feasible under NHS governance | Partially deployed (credit bias audit)
XAI-AC | Feasible; legal infrastructure exists | Moderate; requires legal support | Highly feasible (clinical explainability demands) | Feasible; regulatory precedent
EbD | Feasible; integrates into SDLC | Moderate; methodology support needed | Highly feasible; patient safety culture | Feasible; compliance culture
ASI | Feasible; aligns with ESG reporting | Moderate; measurement tools required | Low priority currently | Growing relevance (emissions targets)
DISCUSSION
A. Addressing the Reactive-Preventive Governance Gap
The IEAGF represents a fundamental architectural shift in AI governance philosophy. By mandating ethical
considerations at the design stage (EbD), enforcing pre-deployment evaluation (AIIA), creating legally binding
behavioural contracts (XAI-AC), and establishing continuous sustainability accountability (ASI), the framework
operationalises preventive governance as a structural requirement rather than an aspirational principle.
This approach is analogous to the shift in pharmaceutical regulation from post-market surveillance to pre-market
clinical trial requirements. The pharmaceutical analogy is instructive: AI systems that make consequential
decisions about human life outcomes (credit allocation, medical diagnosis, employment screening) warrant a
comparable standard of pre-deployment evidence.
B. Limitations
The IEAGF faces three implementation challenges. First, its technical requirements demand expertise that many
regulatory bodies and SMEs currently lack. Second, AIIA certification processes introduce development timeline
extensions that may disadvantage regulated organisations relative to entities operating in laxer jurisdictions.
Third, algorithmic contracts require legal precedents that do not yet exist in most jurisdictions, necessitating
legislative action before enforcement is possible.
C. Future Research Directions
Priority research directions include: development of standardised AIIA methodologies for specific sectors
(healthcare, finance, legal); design of open-source toolkits for EbD implementation in Agile development
environments; creation of ASI benchmarking databases enabling cross-organisation sustainability comparisons;
and exploration of international treaty mechanisms for cross-border IEAGF enforcement.
CONCLUSION
This paper has proposed, detailed, and evaluated the Integrated Ethical AI Governance Framework (IEAGF) as
a response to the structural inadequacies of existing AI governance identified in the companion analysis. The
four components—AIIA, XAI-AC, EbD, and ASI—collectively address regulatory fragmentation, algorithmic
bias, transparency deficits, and sustainability failures through preventive mechanisms that operate across the AI
development lifecycle.
Feasibility analysis confirms that each component is implementable within existing legal, technical, and
organisational infrastructures, with regulatory and professional standards support already in place for EbD and
XAI-AC. The IEAGF provides governments, regulatory bodies, and AI-deploying organisations with a practical,
evidence-based blueprint for governing AI systems in a manner that is technically rigorous, ethically
comprehensive, and legally enforceable.
REFERENCES
1. M. N. Ibe, "Ethical and regulatory failures in AI-driven business decision-making: A critical case-study analysis," in Proc. IEEE Conf., 2025.
2. B. C. Stahl et al., "A systematic review of artificial intelligence impact assessments," Artif. Intell. Rev., vol. 56, no. 11, pp. 12799–12831, 2023.
3. E. Bogucka et al., "Co-designing an AI impact assessment report template with AI practitioners and compliance experts," in Proc. AAAI/ACM Conf. AI, Ethics, Soc., 2024, vol. 7, pp. 168–180.
4. E. Bayamlioglu, "The right to contest automated decisions under the GDPR: Beyond the so-called right to explanation," Regul. Gov., vol. 16, no. 4, pp. 1058–1078, 2022.
5. Z. Zodi, "Algorithmic explainability and legal reasoning," Theory Pract. Legis., vol. 10, no. 1, pp. 67–92, 2022.
7. G. Chaudhary, "Unveiling the black box: Bringing algorithmic transparency to AI," Masaryk Univ. J. Law Technol., vol. 18, no. 1, pp. 93–122, 2024.
8. E. Thomann and F. Sager, Innovative Approaches to EU Multilevel Implementation: Moving Beyond Legal Compliance. New York: Routledge, 2019.
9. P. Brey and B. Dainow, "Ethics by design for artificial intelligence," AI Ethics, vol. 4, no. 4, pp. 1265–1277, 2024.
10. V. Sridharan, "Ethical AI integration in enterprise resource planning systems: A framework for balancing innovation and responsibility in B2B environments," J. Comput. Sci. Technol. Stud., vol. 7, no. 5, pp. 489–504, 2025.
11. O. Campesato, Large Language Models: An Introduction, 1st ed. Boston: David Pallai, 2024.
12. I. Khan and F. Hou, "The impact of socio-economic and environmental sustainability on CO2 emissions," Soc. Indic. Res., vol. 155, no. 3, pp. 1045–1076, 2021.
13. P. De Almeida, C. dos Santos, and J. Farias, "Artificial intelligence regulation: A framework for governance," Ethics Inf. Technol., vol. 23, no. 3, pp. 505–525, 2021.
14. I. D. Raji et al., "Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing," in Proc. ACM Conf. Fairness, Accountability, Transparency, New York: ACM, 2020.
15. A. Jobin, M. Ienca, and E. Vayena, "The global landscape of AI ethics guidelines," Nat. Mach. Intell., vol. 1, no. 9, pp. 389–399, 2019.