www.rsisinternational.org
INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue IV, April 2026
Faculty Readiness for AI-Supported Teaching and Scalable Online
Program Delivery in Higher Education: The EPIQ-AI Framework
for Epistemic Integrity
Sixbert Sangwa¹*, Claver Ndahayo², Fabrice Dusengumuremyi³, Placide Mutabazi⁴
¹Department of International Business and Trade, African Leadership University, Kigali, Rwanda
²Office of the Deputy Vice-Chancellor, Adventist University of Central Africa, Kigali, Rwanda
³Department of Entrepreneurial Leadership, African Leadership University, Kigali, Rwanda
⁴Office of the Executive Chancellor, Open Christian University, California, United States
*Corresponding Author
DOI: https://doi.org/10.51583/IJLTEMAS.2026.150400012
Received: 08 April 2026; Accepted: 13 April 2026; Published: 02 May 2026
ABSTRACT
Background: Higher education institutions are expanding online delivery and integrating generative artificial
intelligence (GenAI), yet faculty readiness remains uneven, raising concerns about assessment validity,
academic integrity, institutional legitimacy, and the quality of scalable online provision.
Objective: This study develops the EPIQ-AI Readiness Framework, a multidimensional model that defines
readiness for AI-supported teaching and online higher education across four aligned domains: epistemic,
pedagogical, institutional, and quality-and-compliance readiness.
Methods: Using an integrative secondary evidence synthesis, the study triangulates recent official statistics,
large-scale faculty and institutional surveys, peer-reviewed studies, and policy frameworks published between
2020 and 2025. The analysis is organized across four readiness domains: epistemic, pedagogical, institutional,
and quality-and-compliance readiness.
Results: The evidence converges on four main findings. First, faculty adoption of AI is increasingly widespread,
but confidence, pedagogical clarity, and depth of use remain limited. Second, institutional ambitions for online
scale and AI integration are advancing faster than policy maturity, professional development, and support
capacity. Third, assessment has become the central pressure point, with growing evidence that detection-centered
academic integrity regimes are unreliable, potentially biased, and insufficient for high-stakes decisions. Fourth,
faculty readiness is best understood not as an individual skills deficit but as a sociotechnical alignment problem
shaped by governance, incentives, workload, literacy, course design support, and equity-sensitive
implementation.
Conclusions: The EPIQ-AI framework reframes readiness as a multidimensional condition for credible AI-
enabled and online higher education by aligning epistemic judgment, pedagogical competence, institutional
support, and quality-and-compliance safeguards. It offers a theoretically grounded and operationally actionable
model for institutions seeking to strengthen AI literacy, redesign assessment, improve governance, and sustain
epistemic integrity while advancing scalable, policy-compliant online delivery.
Keywords: generative artificial intelligence; faculty readiness; AI-supported teaching; online program delivery;
higher education; assessment redesign; academic integrity
INTRODUCTION
Background Context: Distance education has shifted from peripheral provision to a mainstream modality, with
U.S. national enrollment data indicating that 54.2% of students at degree-granting postsecondary institutions
took at least one distance education course in fall 2023 and 26.1% were enrolled exclusively in distance
education (National Center for Education Statistics, 2025). In parallel, online learning leaders report institutional
prioritization of online versions of on-campus courses (69%) and degrees (65%), indicating that online expansion
is not merely a pandemic residue but an ongoing strategic orientation (Simunich et al., 2024). This broader
reorientation is consistent with post-COVID analyses of higher education digital transformation that continue to
foreground adoption, quality assurance, and governance challenges rather than mere technological substitution
(Sangwa, Butera, & Mutabazi, 2025). Meanwhile, GenAI has moved from novelty to infrastructural presence in
educational work, prompting policy bodies to emphasize governance, risk management, and educator capacity
development rather than ad hoc reactions (U.S. Department of Education, Office of Educational Technology,
2023; UNESCO, 2023).
Problem Statement: Higher education is expanding online delivery and experimenting with AI, but the
evidentiary landscape reveals a mismatch between strategic ambition and faculty readiness. Global faculty
survey evidence indicates that a majority of faculty have used AI in teaching (61%), yet barriers center on
time/resource scarcity (40%) and uncertainty about how to use AI pedagogically (38%) (Digital Education
Council, 2025). In U.S. national instructor evidence, large majorities have experimented with GenAI (72%),
while only small minorities report confidence in using it instructionally (14%) or understanding teaching
applications (18%) (Ruediger et al., 2024). Simultaneously, institutions report uneven policy maturity for student
AI use (only 35% with institution-wide policies; 40% still discussing) and persistent cultural and operational
barriers, with faculty autonomy identified as a primary constraint on online initiatives (Simunich et al., 2024).
The core problem is therefore not whether AI and online delivery will expand, but whether expansion can occur
without degrading epistemic standards, assessment validity, and institutional legitimacy.
Research Objective: The overarching objective is to develop the EPIQ-AI Readiness Framework, a
theoretically grounded and operationally actionable model that conceptualizes readiness across four
interdependent domains: epistemic readiness, pedagogical readiness, institutional readiness, and quality-and-
compliance readiness for AI-supported teaching and online program delivery. In this framework, readiness is
treated not as a narrow matter of individual technical skill, but as the institutional and pedagogical capacity to
sustain credible knowledge production, valid assessment, and scalable online provision in an AI-impacted higher
education environment.
Specific objectives are to (a) synthesize the best available multi-source evidence on readiness distributions and
determinants; (b) identify capacity-building priorities with immediate operational value; (c) derive propositions
linking institutional supports to faculty adoption and assessment integrity; and (d) propose an impact assessment
framework to guide staged implementation.
Research Questions: RQ1: What does recent robust secondary evidence indicate about the distribution of
faculty readiness for AI-supported teaching and online program delivery (confidence, competence, and adoption
depth)? RQ2: Which institutional supports (policy clarity, training, tools, infrastructure, incentives) most
consistently align with readiness gaps and adoption barriers? RQ3: How is GenAI reshaping assessment and
academic integrity risks, and which governance responses are supported by converging evidence?
Significance and Contribution of the Study: The principal contribution is the EPIQ-AI Readiness Framework,
which reframes readiness as a sociotechnical and epistemic condition required for credible online scale and AI-
enabled pedagogy, integrating technology adoption theory, teacher knowledge theory, and organizational
readiness for change (Davis, 1989; Mishra & Koehler, 2006; Venkatesh et al., 2003; Weiner, 2009).
Operationally, the framework specifies measurable readiness domains and capacity-building priorities that
institutional leadership can implement prior to large-scale rollout, directly addressing policy and practice gaps
identified in the latest sector evidence (Digital Education Council, 2025; Robert & McCormack, 2024; Simunich
et al., 2024).
LITERATURE REVIEW AND ANALYTICAL FRAMEWORK
Conceptual Clarification and Scope of the Field: “Online program delivery” is not merely a technological
format but a regulated educational condition. Under U.S. federal regulation, distance education must support
“regular and substantive interaction” between students and instructors, distinguishing it from correspondence
education (eCFR, n.d.; U.S. Department of Education, 2020). “AI-supported teaching” is treated here as the use
of AI systems (including GenAI and related analytics) to design learning activities, generate or critique content,
provide feedback, or support instructional decisions. A policy-relevant definition frames AI as “automation based
on associations” that shifts edtech from content access toward pattern detection and automation of decisions,
thereby amplifying risks of bias and unfairness and necessitating governance (U.S. Department of Education,
Office of Educational Technology, 2023). “Faculty readiness” is conceptualized as a multidimensional construct
spanning epistemic understanding, pedagogical competence, motivational orientation, and perceived and actual
institutional support, consistent with empirical work showing that readiness for online teaching is multifaceted
and heterogeneous rather than uniform across faculty populations (Scherer et al., 2021).
State of Scholarship and Major Empirical Debates: Two debates dominate the recent scholarship (Scherer et
al., 2021; Xia et al., 2024). The first concerns whether readiness is primarily an individual attribute (skills,
confidence) or an emergent organizational condition (support, culture, governance). Online teaching readiness
research shows distinct teacher readiness profiles (high, low, and inconsistent readiness) and identifies
institutional preparation and contextual factors as significant determinants, undermining “one-size-fits-all”
training assumptions (Scherer et al., 2021). The second debate concerns whether GenAI’s primary challenge is
detectability (policing student work) or assessability (redesigning tasks to preserve validity). Recent evidence
increasingly favors the assessment-redesign view: a scoping review of GenAI and higher education assessment
(969 records screened; 32 empirical studies included) argues for transforming assessment toward self-regulated
learning and integrity, supported by teacher professional development and institutional policy rethinking (Xia et
al., 2024). This aligns with global faculty perceptions that current evaluation methods require significant change,
including calls for urgent revamps (Digital Education Council, 2025).
Comparative Review of the Most Relevant Theoretical Traditions:
Technology Acceptance Traditions (TAM/UTAUT): Technology acceptance models emphasize perceived
usefulness and ease of use (TAM) and extend toward social influence and facilitating conditions (UTAUT)
(Davis, 1989; Venkatesh et al., 2003). These theories explain why “tool exposure” does not automatically
translate into adoption. Yet, for AI-supported teaching, acceptance models are incomplete: they do not
adequately theorize epistemic stakes (truth, justification, authorship) or assessment legitimacy, which are central
to higher education’s social function.
Teacher Knowledge Traditions (TPACK): Technological Pedagogical Content Knowledge (TPACK)
conceptualizes the situated integration of technological, pedagogical, and content knowledge (Mishra & Koehler,
2006). Online teaching readiness research operationalizes readiness partly through TPACK self-efficacy and
shows that readiness comprises instructional presence, institutional support perceptions, and TPACK-related
competence (Scherer et al., 2023). However, GenAI introduces qualitatively new issues (e.g., synthetic text that
is plausible but potentially untrue), pushing beyond “technology integration” toward epistemic governance.
Organizational Readiness for Change: Organizational readiness for change frames readiness as a collective
state shaped by change commitment and change efficacy, predicting implementation quality and sustainability
(Weiner, 2009). This tradition better fits AI and online scale because readiness depends on governance,
resources, workflow redesign, and shared norms. Yet, it still requires an education-specific articulation of
readiness that includes assessment validity and epistemic integrity as core institutional goods.
Synthesis of the Literature and Derivation of the Analytical Framework: The evidence suggests that AI-
supported teaching readiness cannot be fully explained by acceptance (motivation) or knowledge (competence)
alone, nor by organizational capacity in abstraction from epistemic and assessment stakes. UNESCO’s global
guidance explicitly frames GenAI as raising profound implications for how knowledge is understood and how
learning is assessed and validated, grounding governance in human agency, inclusion, and accountability
(UNESCO, 2023). In parallel, sector-level risk governance norms (e.g., NIST AI RMF) emphasize practical risk
management for AI deployment and the need to benefit from AI while protecting against harms (National
Institute of Standards and Technology, 2023). These converging strands motivate an integrated framework:
readiness must be treated as EPIQ alignment: Epistemic, Pedagogical, Institutional, and Quality-and-compliance.
Conceptual Model and Propositions: The EPIQ-AI Readiness Framework posits that credible AI-supported
teaching and scalable online delivery require alignment across micro-level faculty capacities and macro-level
institutional systems, mediated by pedagogical and assessment design capacity. Figure 1 presents the conceptual
architecture of the EPIQ-AI readiness framework.
Figure 1. EPIQ-AI Readiness Framework for AI-Supported Teaching and Scalable Online Program
Delivery. A conceptual framework integrating technology acceptance, teacher knowledge, and organizational
readiness traditions into four aligned readiness domains: epistemic, pedagogical, institutional, and quality-and-
compliance readiness.
METHODOLOGY
Research Design: The study uses an integrative secondary evidence synthesis design, appropriate for policy-
relevant questions spanning heterogeneous evidence types (official statistics, sector surveys, peer-reviewed
empirical studies, and governance frameworks). This design is appropriate for questions that require the
integration of heterogeneous empirical, policy, and conceptual materials in order to produce an analytically
coherent synthesis and a new conceptual framework rather than a pooled statistical estimate (Torraco, 2005;
Whittemore & Knafl, 2005).
Study Scope and Review Boundaries: The thematic scope covers (a) faculty readiness for online teaching and
online program delivery, (b) faculty readiness for GenAI-supported teaching, and (c) assessment and academic
integrity in AI-impacted higher education. The temporal focus prioritizes 2020–2025 evidence for modality
expansion and GenAI, while incorporating seminal theoretical works where necessary for framework
construction.
Data Sources: Evidence draws from four source classes: (1) official datasets and definitions (NCES IPEDS
distance education participation; U.S. federal definitions for distance education and RSI); (2) multi-institutional
sector surveys (Digital Education Council global faculty survey; Ithaka S+R U.S. instructor survey analysis;
CHLOE chief online learning officer survey); (3) peer-reviewed studies and reviews (online teaching readiness
profiling; GenAI assessment scoping review; AI integrity systematic review; AI detection evaluations and bias
studies); and (4) authoritative policy frameworks such as UNESCO guidance; NIST AI RMF; U.S. Department
of Education OET guidance; EU-level regulatory summaries via European Commission and Council press
releases (National Center for Education Statistics, 2025; Ruediger et al., 2024; Simunich et al., 2024; Balalle &
Pannilage, 2025; Elkhatat et al., 2023; Liang et al., 2023; UNESCO, 2023; U.S. Department of Education, 2020;
U.S. Department of Education, Office of Educational Technology, 2023).
Search Strategy and Source Identification: Source identification combined targeted retrieval of (a) latest
sector surveys with explicit sample descriptions, (b) open-access peer-reviewed articles in established publishers
(Elsevier, Springer Nature), and (c) official government or intergovernmental publications and regulations.
Search terms centered on “faculty readiness,” “online teaching readiness,” “generative AI teaching,”
“assessment redesign,” “academic integrity,” “regular and substantive interaction,” and “AI detection bias.”
Eligibility, Quality Appraisal, and Evidence Selection: Inclusion prioritized (a) transparent methodology and
sampling, (b) direct relevance to higher education, and (c) authoritative provenance (government statistics,
intergovernmental guidance, or peer-reviewed outlets). AI detection evidence was included only where
limitations and bias were documented by primary or peer-reviewed sources, given the high-stakes nature of
academic misconduct adjudication (Elkhatat et al., 2023; Liang et al., 2023; OpenAI, 2023).
Data Extraction and Organization: Extracted elements included sample frames, adoption prevalence,
readiness indicators (confidence, clarity, training adequacy), institutional strategy signals (online priorities,
barriers, budgets), and assessment/integrity concerns. These elements were organized using the EPIQ domains
(Epistemic, Pedagogical, Institutional, Quality-and-compliance).
Analytical Techniques: Analysis proceeded through cross-source triangulation and theory-driven thematic
synthesis, mapping convergent findings into the EPIQ domains and using propositions to link determinants and
outcomes. Findings were also tested for cross-source consistency (e.g., whether policy immaturity appears
simultaneously in faculty perceptions and institutional leader reports) (Digital Education Council, 2025;
Simunich et al., 2024).
Validity, Reliability, and Reproducibility: Validity was strengthened through convergence requirements:
principal claims were supported by at least two independent evidence classes (e.g., faculty surveys plus
institutional leader surveys; peer-reviewed studies plus policy frameworks) (Robert & McCormack, 2025). This
logic is consistent with triangulation approaches that treat convergence across independent data sources as a
means of strengthening interpretive validity and reducing overreliance on any single evidence stream (Carter et
al., 2014). Reliability was addressed by privileging sources with explicit samples and stable reporting (official
statistics, peer-reviewed designs, or formal sector surveys).
Ethical Considerations: As secondary research, the study emphasizes responsible interpretation, avoidance of
performative techno-optimism, and attention to equity risks (e.g., potential harms of unreliable AI detection for
multilingual writers) (Liang et al., 2023).
Methodological Limitations and Mitigation: Sector surveys are not uniformly comparable across countries
and roles (faculty vs. chief online officers vs. IT leaders), and some high-quality proprietary reports (e.g.,
member-only publications) limit extractable detail. Mitigation relied on triangulating multiple open and
authoritative sources and focusing on robust directional patterns rather than overfitting fine-grained cross-survey
comparisons (Digital Education Council, 2025; Robert, 2024; Ruediger et al., 2024; Simunich et al., 2024). Table
1 maps the principal evidence sources onto the four EPIQ-AI readiness domains and clarifies how each source
informs the analytical framework used in the synthesis.
Table 1. Mapping Major Evidence Sources onto the EPIQ-AI Readiness Domains

Digital Education Council (2025)
Epistemic readiness: Uncertainty about meaningful AI use and the implications for student evaluation.
Pedagogical readiness: Use is widespread but cautious; adoption depth remains uneven.
Institutional readiness: Major gaps are reported in guidelines, training, best-practice examples, and enabling conditions.
Quality-and-compliance readiness: Disclosure-based permission regimes are preferred over blanket mandates or simple bans.

Ruediger et al. (2024)
Epistemic readiness: Experimentation exceeds confidence, indicating unresolved judgment about value and fit.
Pedagogical readiness: Defensive restrictions reflect weak confidence in valid course-level integration.
Institutional readiness: Training and policy clarity are indirectly implicated by faculty uncertainty and prohibition.
Quality-and-compliance readiness: High prohibition rates signal unresolved integrity and assessment concerns.

Scherer et al. (2021, 2023)
Epistemic readiness: Readiness varies by prior experience, preparation, and self-efficacy rather than role alone.
Pedagogical readiness: TPACK-related readiness is heterogeneous, with different support needs across faculty profiles.
Institutional readiness: Institutions cannot assume readiness will accumulate automatically with time or seniority.
Quality-and-compliance readiness: Not a primary focus of these studies.

Robert and McCormack (2024)
Epistemic readiness: Responsible institutional AI use requires critical literacy rather than simple tool access.
Pedagogical readiness: Best practices and curated use cases are core enablers of adoption.
Institutional readiness: Governance, operations, infrastructure, and pedagogy must be aligned in policy development.
Quality-and-compliance readiness: Assessment and integrity considerations should be built into institutional guidance.

Simunich et al. (2024)
Epistemic readiness: Strategic online expansion increases the need for sound faculty judgment under changing conditions.
Pedagogical readiness: Scalable online quality depends on faculty buy-in and continuing design support.
Institutional readiness: Institution-wide policy maturity remains uneven, and reported resource sufficiency varies.
Quality-and-compliance readiness: Weak support capacity creates downstream risk for online quality assurance.

Xia et al. (2024)
Epistemic readiness: Integrity and responsibility should be cultivated through redesigned assessment practice.
Pedagogical readiness: Assessment should shift toward authentic, process-rich, and self-regulated learning tasks.
Institutional readiness: Professional development and policy review are necessary for sustainable implementation.
Quality-and-compliance readiness: Assessment reform is central to durable academic integrity protection.

OpenAI (2023); Elkhatat et al. (2023); Liang et al. (2023)
Epistemic readiness: Judgment about authorship and originality cannot be outsourced to unreliable detectors.
Pedagogical readiness: Course design should reduce dependence on product-only evaluation and detector-based control.
Institutional readiness: Institutions need due-process safeguards before acting on detector outputs.
Quality-and-compliance readiness: False positives and bias make sole reliance on detection unsuitable in high-stakes settings.

Note. Cells summarize the principal contribution of each source to the four EPIQ-AI readiness domains. “Not a
primary focus” indicates that the source was not used in this study as a main basis for claims in that domain.
FINDINGS AND DISCUSSION
Cross-source Overview of Readiness Signals
Table 2 consolidates the highest-salience readiness signals drawn from the major secondary sources reviewed in
this study and provides the empirical bridge into the research-question-driven findings that follow.
Table 2. High-Salience Readiness Signals from Major Secondary Sources

Digital Education Council Global AI Faculty Survey (2025)
Core readiness signal: 61% report having used AI in teaching; among non-users, top barriers include lack of time/resources (40%) and uncertainty about how to use AI in teaching (38%) (Digital Education Council, 2025).
Operational implication: Readiness barriers are primarily capacity and pedagogical translation, not mere access.

Digital Education Council Global AI Faculty Survey (2025)
Core readiness signal: 80% do not find institutional AI guidelines comprehensive; only 6% are fully satisfied with institutional AI literacy resources; top enablers emphasize access to tools (65%), AI literacy training (64%), best-practice collections (60%), and clear guidelines (50%) (Digital Education Council, 2025).
Operational implication: Governance and faculty development are viewed as prerequisites for scaled AI integration.

Ithaka S+R national instructor survey analysis (2024)
Core readiness signal: 72% experimented with GenAI as an instructional tool, yet only 14% report confidence using it instructionally; 38% report little/no confidence (Ruediger et al., 2024).
Operational implication: Adoption prevalence should not be misread as instructional readiness.

Ithaka S+R national instructor survey analysis (2024)
Core readiness signal: 42% completely prohibit student GenAI use; only 19% agree GenAI benefits teaching in their field; 56% remain uncertain (Ruediger et al., 2024).
Operational implication: Policies and support must address uncertainty and disciplinary divergence.

CHLOE 9 (2024) chief online learning officer survey
Core readiness signal: Online priorities: 69% prioritize online versions of on-campus courses, 65% online versions of on-campus degrees (Simunich et al., 2024).
Operational implication: Online scale is an institutional strategy, increasing the stakes of faculty readiness.

CHLOE 9 (2024) chief online learning officer survey
Core readiness signal: Faculty autonomy is the most frequently cited primary barrier to online initiatives; lack of buy-in and support staffing remain prominent barriers (Simunich et al., 2024).
Operational implication: Rollout must be negotiated as shared governance, not imposed as an IT project.

CHLOE 9 (2024) chief online learning officer survey
Core readiness signal: AI policy maturity: 35% report institution-wide AI policies/guidelines for student use; 40% are discussing policies but none published (Simunich et al., 2024).
Operational implication: Institutional policy formation lags behind practical AI presence in coursework.

NCES IPEDS (fall 2023)
Core readiness signal: 54.2% of students took at least one distance education course (National Center for Education Statistics, 2025).
Operational implication: Policy-compliant, high-quality online teaching is a mass-scale requirement, not niche expertise.

Note. Percentages are reproduced from the cited sources; samples and question wording vary by study.
Faculty Readiness, Confidence, and Adoption Depth
Evidence converges on a central pattern: readiness is uneven and adoption is not synonymous with preparedness.
Globally, a majority of faculty report AI use in teaching (61%), but the distribution of use intensity is cautious,
and non-users cite time/resource scarcity and uncertainty about instructional application as dominant barriers
(Digital Education Council, 2025). In the United States, instructors report substantial experimentation (72%),
yet confidence is strikingly low (14% confident), implying that “trial” often occurs without stable pedagogical
integration (Ruediger et al., 2024). This pattern is consistent with broader online teaching readiness research:
faculty are not homogeneous in readiness, and readiness profiles are shaped by prior experience and preparation
rather than position alone (Scherer et al., 2021).
A further implication is that experience does not linearly accumulate into readiness. Evidence indicates a
curvilinear relationship between online teaching experience and readiness (especially TPACK self-efficacy),
suggesting that both novice and experienced faculty may require different kinds of support (Scherer et al., 2023).
This challenges a common institutional assumption: that readiness gaps can be closed primarily by “more
time using the tools,” rather than by targeted professional development and structural support.
The EPIQ model clarifies why readiness appears uneven. Technology acceptance logics predict that adoption
depends on perceptions of value and ease, but the survey evidence shows that faculty are often not stalled by
ideological hostility; rather, they report an absence of time, resources, and pedagogically meaningful guidance
(Digital Education Council, 2025). In TPACK terms, exposure to tools does not equal the situated knowledge
needed to integrate them into disciplinary teaching with valid assessment (Mishra & Koehler, 2006; Scherer et
al., 2023). Organizational readiness theory further implies that unevenness is expected when institutions vary in
change commitment and change efficacy as expressed in training capacity, policy clarity, and governance
arrangements (Weiner, 2009).
Institutional Supports and Readiness Gaps
Across evidence sources, five institutional supports repeatedly emerge as readiness enablers: (1) access to
appropriate AI tools and resources; (2) AI literacy and skills training; (3) curated best practices and use cases;
(4) clear guidelines for AI in teaching; and (5) an environment that encourages innovation and tolerates failure
(Digital Education Council, 2025; Robert & McCormack, 2024). The salience of guidelines is reinforced by the
inverse: 80% of faculty do not find institutional AI guidelines comprehensive, suggesting that policy ambiguity
functions as a readiness inhibitor (Digital Education Council, 2025).
Institutional leader evidence further demonstrates that governance maturity is still developing. CHLOE reports
that 35% of institutions have institution-wide AI policies, while 40% are still discussing but have not published
policies, implying that many faculty are operating in policy vacuums or fragmented departmental regimes
(Simunich et al., 2024). Complementarily, EDUCAUSE frames AI policy development as a multi-domain
institutional project spanning governance, operations (including professional development and infrastructure),
and pedagogy (including integrity and assessment) (Robert & McCormack, 2024).
Online expansion also increases the support burden: chief online officers report widespread strategic
prioritization of online course and degree conversions (Simunich et al., 2024). Yet they identify faculty
autonomy and buy-in as persistent barriers and tensions, meaning that even robust technical infrastructure will
not yield scalable online quality if faculty agency is not institutionally respected and operationally supported
(Simunich et al., 2024).
The evidence indicates that readiness is structurally constrained by the institution’s support ecology. If policy
clarity and training are missing, faculty may rationally restrict student AI use (as reflected by high prohibition
rates) to preserve assessment credibility (Robert & McCormack, 2024; Ruediger et al., 2024). From an
organizational readiness lens, policy immaturity and insufficient resources reduce collective change efficacy,
lowering implementation quality even where change commitment exists among some faculty (Weiner, 2009).
Importantly, CHLOE’s reporting that resource sufficiency varies and that sizeable shares disagree that resources
are adequate underscores that “online scale” often advances faster than capacity-building for teaching and course
design support roles (e.g., instructional design and learning support staffing) (Simunich et al., 2024).
Assessment, Integrity, and Governance Responses
Assessment and academic integrity are the decisive readiness frontier (Digital Education Council, 2025; Xia et
al., 2024). Global faculty evidence shows that 54% believe student evaluation methods require either significant
changes soon (41%) or an urgent complete revamp (13%), indicating that marginal policy tweaks are unlikely
to suffice (Digital Education Council, 2025). Preferences also tilt toward structured permission regimes: faculty
most often prefer AI-permitted-with-disclosure-and-instructor-instructions approaches, rather than blanket bans
or mandatory AI use (Digital Education Council, 2025). In the U.S. instructor survey evidence, 42% completely
prohibit student GenAI use, and majorities remain uncertain about net benefits, reflecting epistemic and
evaluative insecurity rather than settled consensus (Ruediger et al., 2024).
Peer-reviewed synthesis strengthens these survey signals. A scoping review of GenAI and assessment (32
empirical studies) argues that higher education assessment should shift toward self-regulated learning,
responsibility, and integrity, recommending professional development in assessment, AI, and digital literacy,
alongside institutional policy review and interdisciplinary approaches (Xia et al., 2024). A systematic review on
AI and academic integrity similarly emphasizes a balanced approach that leverages AI benefits while sustaining
ethical standards and integrity culture (Balalle & Pannilage, 2025).
Crucially, detection-centric integrity strategies face empirical and ethical constraints. OpenAI’s own classifier
documentation reported meaningful false positives (9% on its challenge set) and noted that the tool was
discontinued due to low accuracy, undermining any presumption that detection can serve as a primary integrity
adjudication mechanism (OpenAI, 2023). Peer-reviewed research evaluating AI content detection tools
documents inconsistent performance and false positives on human-written controls, warning against sole reliance
in high-stakes contexts (Elkhatat et al., 2023). Further, an open-access analysis argues that GPT detectors can be
biased against non-native English writers, raising equity and due-process concerns for multilingual and
international students (Liang et al., 2023).
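A short base-rate calculation clarifies why even a single-digit false-positive rate undermines detection-first adjudication. The sketch below is illustrative only: the 9% false-positive rate and 26% true-positive rate come from OpenAI's reported classifier figures, while the assumed 10% prevalence of AI-generated submissions is a hypothetical input, not a measured value from any cited study.

```python
# Illustrative base-rate calculation: positive predictive value (PPV) of an
# AI-text detector used for misconduct screening. The prevalence figure is
# hypothetical; sensitivity and false-positive rate follow OpenAI's reported
# classifier performance (26% true positives, 9% false positives).

def detector_ppv(prevalence: float, sensitivity: float,
                 false_positive_rate: float) -> float:
    """Fraction of flagged submissions that are actually AI-generated."""
    true_flags = prevalence * sensitivity
    false_flags = (1 - prevalence) * false_positive_rate
    return true_flags / (true_flags + false_flags)

# Suppose 10% of submissions are AI-generated.
ppv = detector_ppv(prevalence=0.10, sensitivity=0.26, false_positive_rate=0.09)
print(f"Share of flags that are correct: {ppv:.0%}")  # prints "Share of flags that are correct: 24%"
```

Under these assumptions, roughly three out of four flagged students would be wrongly accused, which is why the text above argues that detection cannot serve as a primary adjudication mechanism.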
Integrative Cross-Disciplinary Discussion
From a learning sciences perspective, GenAI collapses a long-standing assessment assumption: that presented
fluency plausibly indicates individual mastery (Xia et al., 2024). GenAI can produce plausible text at scale,
meaning assessment validity must increasingly depend on task design, process evidence, and demonstration of
judgment, rather than product-only evaluation (Xia et al., 2024). From organizational studies, the evidence
indicates that readiness is a distributed institutional capacity: online scale and AI integration require coordinated
governance, resourcing, and cultural alignment, as shown by the persistence of faculty autonomy and buy-in as
major barriers to online initiatives (Simunich et al., 2024). From ethics and policy, UNESCO’s framing of GenAI
as implicating human agency and the meaning of knowledge, coupled with risk-governance frameworks
emphasizing trustworthy AI, implies that readiness must include the capacity to deliberate about what should
not be automated in pedagogy and evaluation (National Institute of Standards and Technology, 2023; UNESCO,
2023).
The EPIQ-AI framework thus interprets "readiness unevenness" as a predictable consequence of misalignment:
faculty willingness to experiment coexists with institutional underinvestment in guidelines, training, and
assessment redesign capacity, producing uncertainty and prohibition as rational defensive responses (Digital
Education Council, 2025; Ruediger et al., 2024; Simunich et al., 2024). Figure 2 visualizes this synthesized
misalignment pathway.
Figure 2. Readiness Misalignment Pathway in AI-Supported Teaching and Online Program Delivery. An
analytical pathway showing how institutional expansion pressures, support deficits, uneven faculty response,
and assessment stress combine to create pressure for governance redesign. Arrows indicate escalating pressure
and interaction across stages, not deterministic causation.
Theoretical, Ethical, and Policy Implications
Theoretically, EPIQ-AI extends TAM/UTAUT by positing that perceived usefulness is insufficient in higher
education unless coupled with epistemic justification capacity (the ability to evaluate AI outputs, manage
hallucination risk, and design tasks that assess reasoned understanding) (Davis, 1989; Venkatesh et al., 2003). It
extends TPACK by treating GenAI not only as a teaching tool but as a knowledge-production technology that
alters the epistemic environment of assessment and scholarly writing (Mishra & Koehler, 2006). It also reframes
organizational readiness for change by specifying that “change efficacy” in higher education includes assessment
legitimacy and integrity protection mechanisms as core outputs (Weiner, 2009).
Ethically, the evidence supports shifting away from punitive, detection-first regimes toward transparent policy,
student AI literacy, and assessment redesign, partly because detection tools can be unreliable and discriminatory
in effect, particularly for multilingual writers (Elkhatat et al., 2023; Liang et al., 2023; OpenAI, 2023). Policy-
wise, external governance regimes increasingly encode risk-based approaches to AI oversight. The EU AI Act
establishes a risk-based framework, specific transparency obligations for systems such as chatbots, and stricter
requirements for high-risk systems, underscoring that higher education institutions must treat AI governance as
a compliance-relevant domain rather than as a mere teaching tip sheet (European Commission, 2024; European
Parliament & Council of the European Union, 2024).
CONCLUSION AND RECOMMENDATIONS
Conclusion: Converging evidence indicates that faculty readiness for AI-supported teaching and scalable online
program delivery is uneven, characterized by widespread experimentation but limited confidence, significant
policy ambiguity, and acute pressure on assessment and integrity regimes (Digital Education Council, 2025;
Ruediger et al., 2024; Simunich et al., 2024; Xia et al., 2024). The EPIQ-AI framework explains this pattern by
treating readiness as an alignment problem across epistemic literacy, pedagogical competence, institutional
governance and resourcing, and quality-and-compliance obligations.
Theoretical and Scholarly Contribution: The EPIQ-AI framework advances scholarship by integrating three
classical traditions (technology acceptance, TPACK, and organizational readiness) into a distinctly higher-
education-specific readiness theory centered on epistemic integrity and assessment legitimacy, dimensions
increasingly foregrounded by global policy guidance on GenAI (Davis, 1989; Mishra & Koehler, 2006;
UNESCO, 2023; Venkatesh et al., 2003; Weiner, 2009).
Recommendations: For academic institutions and governance bodies, the evidence supports a staged readiness
strategy that begins with policy clarity and faculty capacity-building before ambitious AI-enabled or online
scale-up. Institutional leaders should establish institution-wide AI policies that explicitly address permitted uses,
disclosure norms, assessment design principles, data privacy, intellectual property, accessibility, and due process
in misconduct allegations, because both faculty and chief online officers report that policy formation is
incomplete and uneven (Robert & McCormack, 2024; Simunich et al., 2024; UNESCO, 2023; U.S. Department
of Education, Office of Educational Technology, 2023). For faculty development and teaching and learning
centers, professional learning should be redesigned around assessment transformation and epistemic evaluation
(how to assess reasoning and process in an AI-saturated environment), consistent with evidence that many
instructors lack confidence and that the research literature calls for professional development in assessment, AI,
and digital literacy (Ruediger et al., 2024; Xia et al., 2024). For private-sector partners and edtech providers,
procurement and integration should be conditioned on transparent model limitations, bias risk documentation,
and secure workflows, aligning institutional practice with risk management norms that emphasize trustworthy
AI and harm mitigation (National Institute of Standards and Technology, 2023). For philanthropic and civil
society organizations, support should prioritize open educational resources for AI literacy and assessment
redesign rather than surveillance tooling, given documented limitations of AI detection and bias risk (Elkhatat
et al., 2023; Liang et al., 2023; OpenAI, 2023).
Impact Assessment Framework: The EPIQ-AI model implies that institutions can operationalize readiness
through a dashboard of threshold indicators. This operationalization logic is also consistent with EDUCAUSE’s
Higher Education Generative AI Readiness Assessment, which frames AI preparedness as an institution-wide,
cross-functional evaluative process (EDUCAUSE, 2025). Table 3 translates the framework into an operational
dashboard linking each readiness domain to illustrative indicators and corresponding institutional action
priorities.
Table 3. Operational EPIQ-AI Readiness Dashboard

Epistemic readiness
Illustrative indicators: Faculty completion of AI literacy modules on limitations, bias, hallucination risk, and responsible use; measured confidence shifts; evidence of judgment about acceptable and unacceptable AI-supported work.
Institutional action priority: Establish structured literacy pathways, applied workshops, and discipline-sensitive guidance on evaluation of AI outputs.

Pedagogical readiness
Illustrative indicators: Adoption of AI-aware assessment patterns such as authentic tasks, process artifacts, oral defenses, iterative drafts, and reflective disclosure; evidence of alignment between tool use and learning outcomes.
Institutional action priority: Support assessment redesign, model valid disciplinary use cases, and strengthen instructional-design partnership for course revision.

Institutional readiness
Illustrative indicators: Published institution-wide guidance; availability of training and consultation; instructional-design and learning-support capacity; faculty workload recognition; evidence of coherent governance across units.
Institutional action priority: Strengthen policy maturity, resource support roles adequately, and align incentives, governance, and faculty autonomy in implementation.

Quality-and-compliance readiness
Illustrative indicators: Demonstrated alignment with distance-education expectations for regular and substantive interaction; documented integrity due-process workflows; reduced reliance on detector-only decision making.
Institutional action priority: Embed compliance review, due-process safeguards, and integrity procedures that privilege evidence-rich adjudication over detection alone.

Note. Indicators are illustrative operational measures implied by the EPIQ-AI model. Institutions should adapt
thresholds, evidence sources, and review cycles to local context, modality mix, and governance structure.
As shown in Table 3, epistemic readiness can be tracked via the proportion of faculty completing AI literacy
modules covering limitations, bias, and responsible use, paired with measured confidence shifts (not merely
attendance) (Ruediger et al., 2024; UNESCO, 2023; U.S. Department of Education, Office of Educational
Technology, 2023). Pedagogical readiness can be tracked through adoption of AI-aware assessment patterns
(e.g., authentic tasks, process artifacts, oral defenses, iterative drafts) aligned with scoping-review
recommendations and faculty calls for assessment transformation (Xia et al., 2024). Institutional readiness can
be assessed through policy maturity (published institution-wide guidance), support capacity (instructional design
staffing and training availability), and faculty workload recognition, consistent with reported barriers and
resource sufficiency variance (Digital Education Council, 2025; Robert & McCormack, 2024; Simunich et al.,
2024). Quality-and-compliance readiness should be assessed through demonstrated alignment with distance
education requirements for regular and substantive interaction in online courses (eCFR, n.d.; U.S. Department
of Education, 2020) and the presence of documented integrity due-process workflows that do not rely on
detection as a sole determinant (Elkhatat et al., 2023; Liang et al., 2023; OpenAI, 2023).
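The threshold-indicator logic of such a dashboard can be sketched in a few lines of code. The domain names follow the EPIQ-AI model; every score, threshold, and the scoring rule itself are hypothetical placeholders that an institution would replace with locally defined indicators and evidence sources.

```python
# Minimal sketch of an EPIQ-AI readiness dashboard: each domain carries a
# normalized indicator score and a locally chosen threshold; readiness is the
# set of domains meeting their threshold. All numeric values are illustrative.
from dataclasses import dataclass

@dataclass
class DomainReading:
    name: str
    score: float      # aggregated indicator value, normalized to [0, 1]
    threshold: float  # institutionally defined minimum

def readiness_report(readings):
    """Map each domain to True (meets threshold) or False (readiness gap)."""
    return {r.name: r.score >= r.threshold for r in readings}

readings = [
    DomainReading("epistemic", score=0.62, threshold=0.70),
    DomainReading("pedagogical", score=0.55, threshold=0.60),
    DomainReading("institutional", score=0.74, threshold=0.70),
    DomainReading("quality-and-compliance", score=0.81, threshold=0.75),
]

report = readiness_report(readings)
gaps = [name for name, ready in report.items() if not ready]
print("Domains below threshold:", gaps)  # prints ['epistemic', 'pedagogical']
```

The design choice here mirrors the framework's alignment logic: readiness is not a single average but a profile, so a high institutional score cannot compensate for an epistemic or pedagogical gap.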
Study Limitations: The synthesis relies on secondary sources with different populations and instruments (global
faculty, U.S. instructors, U.S. chief online officers), limiting direct statistical comparability across estimates, and
it privileges open and authoritative sources rather than paywalled sector reports. These constraints were
mitigated through cross-source convergence checks and by focusing on robust directional findings that recur
across independent evidence classes (Digital Education Council, 2025; Robert, 2024; Ruediger et al., 2024;
Simunich et al., 2024).
Future Research Directions: Future research should move the EPIQ-AI framework from theoretically
grounded synthesis toward empirical validation. First, multi-site institutional case studies could examine how
the four readiness domains interact across different governance structures, disciplinary contexts, and online
delivery models. Second, researchers could develop and validate an EPIQ-AI survey instrument by generating
domain-specific items, establishing content validity, testing factor structure, and assessing reliability and
construct validity (Boateng et al., 2018; Shea et al., 2014). Third, longitudinal studies could track whether
interventions such as AI literacy training, assessment redesign support, and policy implementation produce
measurable changes in faculty readiness over time. Such work would clarify the framework’s explanatory power,
practical measurability, and value for institutional decision-making.
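As one concrete step in the instrument-validation agenda above, internal-consistency reliability is commonly estimated with Cronbach's alpha. The sketch below uses fabricated Likert responses purely for illustration; the item block and response data are assumptions, not output of a validated EPIQ-AI instrument.

```python
# Cronbach's alpha for a hypothetical block of EPIQ-AI survey items:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total scores)
from statistics import variance

def cronbach_alpha(responses):
    """responses: list of per-respondent lists of item scores (k items each)."""
    k = len(responses[0])
    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1-5 Likert responses from eight respondents on four items.
responses = [
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
]
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # prints 0.95
```

In a real validation study, alpha would be computed per readiness domain on pilot data, alongside the factor-structure and construct-validity checks cited above (Boateng et al., 2018).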
Declarations
Conflict of interest. The authors declare that there are no financial, institutional, professional, or personal
relationships that could reasonably be perceived as having influenced the conception, evidence selection,
synthesis, interpretation, or writing of this manuscript.
Funding. This research received no external funding. The study was completed through the authors’ own
scholarly work, with only routine institutional support from their affiliated institution(s).
Ethical approval. This study is based exclusively on secondary sources, including peer-reviewed literature,
official datasets, institutional frameworks, and public policy and regulatory documents. It did not involve human
participants, interviews, surveys conducted by the authors, experiments, or access to identifiable personal data.
Formal ethical approval and informed consent were therefore not required.
Data availability. No original dataset was generated for this study. The materials supporting the analysis consist
of publicly accessible sources cited throughout the manuscript, including peer-reviewed publications, official
datasets, institutional reports, and policy or regulatory documents. All evidence used in the study can be traced
through the methodology section and the reference list.
Use of AI tools. During manuscript preparation, the author(s) used OpenAI’s GPT-5.4 Thinking model in a
limited support role for editorial and research-assistance tasks, including language refinement, structural
organization, and phrasing improvement. All substantive decisions concerning argument development, source
selection, verification, interpretation, and final revision were made by the author(s), who bear full responsibility
for the content of the manuscript.
REFERENCES
1. Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic
literature review on AI and academic integrity. Social Sciences & Humanities Open, 11, 101299.
https://doi.org/10.1016/j.ssaho.2025.101299
2. Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best
practices for developing and validating scales for health, social, and behavioral research: A primer.
Frontiers in Public Health, 6, Article 149. https://doi.org/10.3389/fpubh.2018.00149
3. Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation
in qualitative research. Oncology Nursing Forum, 41(5), 545–547. https://doi.org/10.1188/14.ONF.545-
547
4. Council of the European Union. (2024, May 21). Artificial intelligence (AI) act: Council gives final green
light to the first worldwide rules on AI (Press release 409/24).
https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-
council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/pdf/
5. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
6. Digital Education Council. (2025). AI meets academia: What faculty think: Global AI Faculty Survey
2025. https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey
7. eCFR. (n.d.). 34 CFR § 600.2 Definitions. https://www.ecfr.gov/current/title-34/subtitle-B/chapter-
VI/part-600/subpart-A/section-600.2
8. EDUCAUSE. (2025, March 10). Higher education generative AI readiness assessment.
https://library.educause.edu/resources/2024/4/higher-education-generative-ai-readiness-assessment
9. Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). Evaluating the efficacy of AI content detection tools in
differentiating between human and AI-generated text. International Journal for Educational Integrity, 19,
Article 17. https://doi.org/10.1007/s40979-023-00140-5
10. European Commission. (2024, August 1). European Artificial Intelligence Act comes into force (Press
release IP/24/4123).
https://ec.europa.eu/commission/presscorner/api/files/document/print/ov/ip_24_4123/IP_24_4123_EN.
pdf
11. European Commission. (2024, August 1). AI Act enters into force. https://commission.europa.eu/news-
and-media/news/ai-act-enters-force-2024-08-01_en
12. European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 of the
European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial
intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689.
http://data.europa.eu/eli/reg/2024/1689/oj
13. Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-
native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779
14. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for
teacher knowledge. Teachers College Record, 108(6), 1017–1054. https://doi.org/10.1111/j.1467-
9620.2006.00684.x
15. National Center for Education Statistics. (2025). Digest of education statistics, Table 311.15: Number
and percentage of students enrolled in degree-granting postsecondary institutions, by distance education
participation, location of student, level of enrollment, and control and level of institution: Fall 2022 and
fall 2023. https://nces.ed.gov/programs/digest/d25/tables/dt25_311.15.asp
16. National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management
Framework (AI RMF 1.0) (NIST AI 100-1). https://doi.org/10.6028/NIST.AI.100-1
17. OpenAI. (2023, July 20). New AI classifier for indicating AI-written text. https://openai.com/index/new-
ai-classifier-for-indicating-ai-written-text/
18. Robert, J. (2024). 2024 EDUCAUSE AI Landscape Study. EDUCAUSE.
https://library.educause.edu/resources/2024/2/2024-educause-ai-landscape-study
19. Robert, J., & McCormack, M. (2025, February 17). 2025 EDUCAUSE AI landscape study: Into the
digital AI divide. EDUCAUSE. https://library.educause.edu/resources/2025/2/2025-educause-ai-
landscape-study
20. Robert, J., & McCormack, M. (2024). 2024 EDUCAUSE Action Plan: AI Policies and Guidelines.
EDUCAUSE. https://library.educause.edu/resources/2024/5/2024-educause-action-plan-ai-policies-
and-guidelines
21. Ruediger, D., Blankstein, M., & Love, S. (2024). Generative AI and postsecondary instructional
practices: Findings from a national survey of instructors. Ithaka S+R.
https://sr.ithaka.org/publications/generative-ai-and-postsecondary-instructional-practices
22. Sangwa, D., Butera, A., & Mutabazi, P. (2025). Digital Transformation of Higher Education: A Post-
COVID Review of Adoption, Quality Assurance, and Governance Challenges. Current Research
Bulletin. https://doi.org/10.55677/CRB/I07-07-CRB2025
23. Scherer, R., Howard, S. K., Tondeur, J., & Siddiq, F. (2021). Profiling teachers’ readiness for online
teaching and learning in higher education: Who’s ready? Computers in Human Behavior, 118, 106675.
https://doi.org/10.1016/j.chb.2020.106675
24. Scherer, R., Siddiq, F., Howard, S. K., & Tondeur, J. (2023). The more experienced, the better prepared?
New evidence on the relation between teachers’ experience and their readiness for online teaching and
learning. Computers in Human Behavior, 139, 107530. https://doi.org/10.1016/j.chb.2022.107530
25. Shea, C. M., Jacobs, S. R., Esserman, D. A., Bruce, K., & Weiner, B. J. (2014). Organizational readiness
for implementing change: A psychometric assessment of a new measure. Implementation Science, 9,
Article 7. https://doi.org/10.1186/1748-5908-9-7
26. Simunich, B., Garrett, R., Fredericksen, E. E., McCormack, M., Robert, J., & Ubell, R. (2024). CHLOE
9: Strategy shift: Institutions respond to sustained online demand (The Changing Landscape of Online
Education, 2024). Quality Matters, Eduventures Research, & EDUCAUSE.
https://www.qualitymatters.org/sites/default/files/research-docs-pdfs/QM-Eduventures-EDUCAUSE-
CHLOE%209-Report-2024.pdf
27. Torraco, R. J. (2005). Writing integrative literature reviews: Guidelines and examples. Human Resource
Development Review, 4(3), 356–367. https://doi.org/10.1177/1534484305278283
28. UNESCO. (2023). Guidance for generative AI in education and research.
https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
29. U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the
future of teaching and learning: Insights and recommendations.
https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf
30. U.S. Department of Education. (2020). Distance education and innovation. Federal Register, 85(171),
54742–54818. https://fsapartners.ed.gov/sites/default/files/attachments/2020-09/FR090220_0.pdf
31. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information
technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
32. Weiner, B. J. (2009). A theory of organizational readiness for change. Implementation Science, 4, 67.
https://doi.org/10.1186/1748-5908-4-67
33. Whittemore, R., & Knafl, K. (2005). The integrative review: Updated methodology. Journal of Advanced
Nursing, 52(5), 546–553. https://doi.org/10.1111/j.1365-2648.2005.03621.x
34. Xia, Q., Weng, X., Ouyang, F., Lin, T. J., & Chiu, T. K. F. (2024). A scoping review on how generative
artificial intelligence transforms assessment in higher education. International Journal of Educational
Technology in Higher Education, 21, Article 40. https://doi.org/10.1186/s41239-024-00468-z