Page 55
www.rsisinternational.org
INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue III, March 2026
Multi-Criteria Evaluation of AI-Based Adaptive Learning Platforms
in Global Higher Education: A Fuzzy AHP Perspective
Ram Singh
University of Lucknow
DOI: https://doi.org/10.51583/IJLTEMAS.2026.150300005
Received: 12 March 2026; Accepted: 17 March 2026; Published: 31 March 2026
ABSTRACT
The effectiveness of adaptive learning platforms in higher education is shaped by multiple interacting factors
that involve subjective judgment and uncertainty. This study employs the Fuzzy Analytic Hierarchy Process
(Fuzzy AHP) to examine the relative importance of key evaluation criteria and sub-criteria based on the
perceptions of higher education users with prior experience in technology-enabled learning systems. Data were
collected through a structured survey of 150 respondents using fuzzy pairwise comparisons. The evaluation
framework was structured around four main criteria—Technological Factors, Content Quality, User Factors, and
Institutional Support—supported by twelve sub-criteria.
The findings show a clear and consistent emphasis on learner-oriented and practical considerations. Within the
technological dimension, system usability is ranked as the most influential factor, surpassing platform reliability
and internet accessibility. For content quality, content relevance is prioritized over interactivity and multimedia
features, highlighting the importance of meaningful and well-structured instructional material. Among user-
related factors, learner motivation emerges as the dominant determinant, followed by digital literacy, while
engagement level carries comparatively less weight. In the institutional support category, technical support is
identified as the most critical element, reflecting the need for timely assistance to ensure uninterrupted platform
use.
Overall, the results indicate that effective adaptive learning platforms are driven primarily by ease of use, high-
quality instructional content, and motivated learners, supported by responsive technical services. The study
provides a structured evaluation framework that can assist educators, institutions, and system designers in
making informed decisions regarding the development and adoption of learning platforms in higher education
contexts.
Keywords: Adaptive learning platforms; Fuzzy AHP; multi-criteria decision making; higher education; e-
learning evaluation
INTRODUCTION
In the last decade, artificial intelligence (AI) has steadily shifted from an emerging technology to a central
catalyst for transforming higher education. Traditional instructional models, once defined by static curricula and
uniform pacing, are increasingly being challenged by data-driven innovations aimed at personalizing learning
experiences. Among these innovations, AI-based adaptive learning platforms have emerged as a promising
approach to tailor instructional content in real time according to individual learner needs, preferences,
performance, and engagement patterns. These platforms leverage machine learning, natural language processing,
and recommender-like systems to dynamically adjust pedagogical pathways, thereby promising more effective,
efficient, and inclusive learning outcomes for diverse student populations (Tan et al., 2025). Adaptive learning is
not a new concept in educational research, but the integration of AI significantly enhances its capability by
enabling continuous analysis of large volumes of learner interaction data and adjusting content delivery
responsively. Early studies in e-learning have documented that AI and machine learning algorithms can
personalize content sequencing, optimize resource recommendations, and improve overall learner engagement
(Gligorea, 2023). Recent initiatives such as personalized adaptive programs in primary and secondary education
have already demonstrated meaningful gains. However, the adoption and evaluation of AI-based adaptive
learning platforms in higher education remain uneven and subject to contextual influences. Global comparisons
reveal that institutions in countries such as the United States, United Kingdom, Australia, China, and Singapore
are experimenting with diverse strategies for implementation, policy support, and technological investment, yet
these efforts vary widely in scope, scale, and success metrics (Pinela-Cárdenas et al., 2025). Moreover, research
on these systems often focuses narrowly on technical performance metrics or descriptive case studies, leaving a
gap in systematic frameworks for evaluating platforms based on multiple interrelated criteria that matter to
stakeholders such as faculty, administrators, instructional designers, and learners themselves.
A critical driver of this evaluation challenge is the multi-dimensional nature of platform performance and value.
Beyond measurable metrics like completion rates or test scores, educators and decision-makers must assess
factors such as personalization accuracy, pedagogical alignment, ethical use of data, technical integration,
scalability, user experience, cost, and institutional fit. These criteria are inherently subjective and often expressed
in linguistic terms that resist crisp numerical quantification. As a result, traditional decision-making methods
like simple weighted scoring or classical AHP (analytic hierarchy process) can fall short in capturing the
uncertainty and ambiguity inherent in stakeholder judgments. Multi-criteria decision analysis (MCDA) methods
have been increasingly used in educational research to address complex decision problems, but the need to
incorporate subjective judgments without forcing unwarranted precision calls for approaches that can model
fuzzy human reasoning (Alshakhatreh, 2024). The Fuzzy Analytic Hierarchy Process (Fuzzy AHP) has emerged
as a powerful extension of classical AHP precisely because it accommodates subjective judgments through fuzzy
scales. By allowing decision makers to express preferences with linguistic variables (such as “high,” “medium,”
or “low”), Fuzzy AHP captures the vagueness and uncertainty typical of expert consensus in educational contexts
(Karnavas et al., 2025). Fuzzy AHP has already been applied successfully in several educational decision
problems, for example, evaluating e-learning platforms amid the uncertainty of the COVID-19 pandemic (Xu, 2023).
2023). These studies demonstrate that fuzzy logic can produce more nuanced and robust weightings of evaluation
criteria, enabling decision makers to derive actionable insights even when precise measurement is elusive.
In the context of adaptive learning technologies, applying Fuzzy AHP becomes particularly valuable because the
trade-offs between competing criteria cannot be resolved by objective measures alone. For example, a platform
with outstanding personalization capability may be costly and difficult to integrate with existing infrastructure;
another platform might be more affordable but offer weaker adaptive features. Reconciling such trade-offs
requires a decision framework that can handle both quantitative and qualitative considerations and synthesize
expert judgments into coherent priorities. Fuzzy AHP helps structure this complexity by decomposing the
evaluation into a hierarchy of criteria and sub-criteria uniquely tailored to the policy and pedagogical concerns
of higher education institutions.
Several recent reviews document the rapid evolution and diverse applications of AI in adaptive education (Holmes, 2022; Williamson et al., 2020), highlighting both opportunities and challenges. For instance, systematic
reviews of AI-driven adaptive learning research identify ongoing concerns related to model interpretability,
ethical considerations such as bias and data privacy, and the need for culturally responsive and scalable designs
(Hariyanto, 2025). Likewise, inclusive AI research emphasizes the potential of adaptive platforms to support
learners with special needs by offering tailored interventions that promote engagement and understanding,
reducing educational gaps (Ayeni et al., 2025). These findings reinforce the urgency of developing robust
evaluation frameworks that simultaneously respect technological performance and educational values.
From a global perspective, higher education is also navigating the broader implications of AI adoption beyond
adaptive learning alone. Institutions such as Purdue University and Northeastern University are embedding AI
competency and partnerships with AI system providers into curricula and campus strategies, reflecting a
recognition that AI is reshaping not just tools but the core of higher education. In this environment, adaptive
learning platforms represent one of the most visible intersections of AI, pedagogy, and institutional strategy,
and evaluating them systematically is essential for evidence-based decision making.
Yet few studies have integrated multi-criteria evaluation frameworks with fuzzy logic in the specific domain of
AI-based adaptive learning platforms in higher education. Most existing research focuses on technology
development, algorithmic performance, or descriptive assessments of use cases. There remains a gap in the literature
for systematic, comparative, and stakeholder-sensitive evaluation frameworks that are theoretically grounded,
methodologically rigorous, and practically relevant for institutional decision makers. This paper aims to fill that
gap by proposing a Fuzzy AHP-based multi-criteria evaluation model tailored to higher education contexts. The
framework is designed to prioritize critical criteria, capture expert uncertainty in linguistic judgments, and
produce a ranked assessment of alternative adaptive learning platforms that can inform institutional choices.
The present study makes three contributions. First, it advances the application of fuzzy multi-criteria decision
analysis within the context of AI-enhanced educational technologies by demonstrating how uncertainty and
subjective expert judgment can be systematically incorporated into the evaluation of adaptive learning platforms.
Second, it offers a structured and practical evaluation framework that higher education leaders, policymakers,
and institutional decision makers can employ to compare and select AI-based adaptive learning platforms in a
transparent and methodologically sound manner. Third, it sheds light on the relative importance of pedagogical,
technical, ethical, and institutional factors that collectively shape the perceived value and effectiveness of
adaptive learning systems.
By integrating expert input, linguistic weighting, and fuzzy logic aggregation, the proposed framework provides
a more comprehensive and realistic assessment of adaptive learning platforms than the evaluation approaches
commonly reported in existing literature.
The objectives of this study are threefold. First, it aims to identify and structure the key criteria and sub-criteria
relevant to the evaluation of AI-based adaptive learning platforms in the context of global higher education.
Second, the study seeks to determine the relative importance of these criteria by capturing expert judgments
under conditions of uncertainty using a fuzzy analytic hierarchy process. Third, it aims to apply the proposed
fuzzy AHP framework to systematically evaluate and rank alternative adaptive learning platforms, thereby
supporting informed and evidence-based decision making for higher education institutions.
The remainder of this paper is organized as follows. The next section reviews the relevant literature on AI-based
adaptive learning systems and multi-criteria decision-making methods, with particular emphasis on fuzzy AHP
applications in educational contexts. This is followed by a detailed description of the research methodology,
including the development of the evaluation hierarchy, expert selection, data collection process, and the fuzzy
AHP computation steps. Subsequently, the results of the analysis are presented and discussed, highlighting the
relative importance of evaluation criteria and the comparative performance of the selected platforms. Finally,
the paper concludes with key theoretical and practical implications, limitations of the study, and directions for
future research.
LITERATURE REVIEW
Technological factors (C1) constitute the core operational foundation of AI-based adaptive learning platforms
in higher education. Regardless of advances in adaptive algorithms or instructional design, platform
effectiveness ultimately depends on learners' ability to access, navigate, and use the system without friction. In
adaptive environments, technical disruptions are particularly consequential because usability failures or system
interruptions can break personalized learning trajectories and undermine learners' trust in the platform. System
usability, platform reliability, and internet accessibility therefore function as interdependent determinants of
whether adaptive learning technologies can deliver their intended pedagogical value.
System usability (SC11) refers to the degree to which learners can interact with the platform intuitively,
understand system feedback, and engage with adaptive features with minimal cognitive effort. High usability is
typically reflected in clear navigation structures, consistent interface design, readable layouts, and transparent
adaptive recommendations. Prior studies indicate that usable systems reduce extraneous cognitive load, allowing
learners to focus on learning tasks rather than technological management (Davis, 1989; Sun et al., 2008). In AI-
driven environments, usability also encompasses the intelligibility of adaptive decisions, as learners are more
likely to trust and accept personalization when the rationale behind content sequencing or feedback is apparent.
Platform reliability (SC12) complements usability by addressing the technical stability and dependability of the
system, including uptime, response speed, data accuracy, and consistent functioning of adaptive mechanisms.
Reliable platforms ensure uninterrupted access to learning activities, assessments, and feedback, which is
particularly critical in time-constrained higher education contexts. Evidence suggests that technical instability
diminishes learner confidence and reduces engagement, even when instructional design is pedagogically sound
(Bond et al., 2020). Internet accessibility (SC13) further extends the technological dimension by accounting for
variations in connectivity quality and device availability. In diverse higher education settings, students' access
to high-speed internet and advanced hardware is uneven. Platforms optimized for low-bandwidth conditions,
mobile use, and asynchronous participation are therefore better positioned to support inclusive and sustained
learning experiences (UNESCO, 2021). Together, these technological sub-criteria form the backbone upon
which higher-level pedagogical and institutional factors depend.
Content quality (C2) represents the pedagogical substance of AI-based adaptive learning platforms and plays a
decisive role in shaping learning outcomes. While technological infrastructure enables system functionality, it is
the quality of instructional content that determines whether learning experiences are meaningful, engaging, and
aligned with academic objectives. Content relevance, interactivity, and multimedia support collectively define
the extent to which adaptive platforms can translate personalization into effective learning. Content relevance
(SC21) refers to the alignment of learning materials with course objectives, disciplinary standards, and
individual learner needs. Adaptive systems enhance relevance by dynamically adjusting content sequencing,
difficulty levels, and learning pathways based on learner performance and preferences. Research consistently
shows that relevant content improves engagement and learning efficiency by reducing redundancy and directing
attention toward areas requiring improvement (Pane et al., 2017). Interactivity (SC22) captures the degree to
which learners actively engage with content through quizzes, simulations, problem-solving tasks, and adaptive
exercises. Interactive environments promote deeper cognitive processing and have been shown to improve
comprehension and retention compared to passive content delivery (Dede, 2014). In adaptive platforms,
interactivity is further strengthened by the system’s ability to tailor task complexity and feedback in response to
learner progress. Multimedia support (SC23) refers to the integration of multiple instructional formats, including
text, video, audio, animations, and visual representations. When designed in line with cognitive principles,
multimedia content can enhance understanding and manage cognitive load by distributing information across
complementary channels (Mayer, 2020). Adaptive platforms can selectively deploy multimedia elements based
on learner data, reinforcing both personalization and instructional effectiveness. Collectively, these dimensions
position content quality as a dynamic, learner-centered construct within adaptive learning environments.
User factors (C3) encompass learner-related characteristics that shape how individuals interact with AI-based
adaptive learning platforms. These factors recognize that system effectiveness is influenced not only by
technological and content attributes but also by learners' motivation, skills, and patterns of engagement. Learner
motivation (SC31) reflects the willingness to invest effort, persist through challenges, and pursue academic
goals. Adaptive platforms can support motivation by offering personalized challenges, timely feedback, and
visible progress indicators that enhance learners' sense of competence and autonomy (Ryan & Deci, 2020).
Motivation is especially critical in online and self-directed learning contexts, where external regulation is limited.
Digital literacy (SC32) refers to learners' ability to navigate digital environments, interpret system feedback, and
manage learning tasks effectively. Learners with higher digital literacy are more likely to benefit from adaptive
features, whereas limited skills can lead to frustration and disengagement (Hatlevik et al., 2018). Engagement
level (SC33) represents learners' behavioral, cognitive, and emotional involvement in learning activities.
Adaptive platforms promote engagement by aligning instructional difficulty with learner ability and by
delivering responsive, interactive experiences. Sustained engagement has been consistently linked to academic
success and is widely used as an indicator of adaptive system effectiveness (Bond et al., 2020).
Institutional support (C4) reflects the organizational conditions that enable the successful adoption and sustained
use of AI-based adaptive learning platforms. Even highly advanced systems may fail to generate educational
value in the absence of adequate institutional backing, which encompasses technical support, training
availability, and policy and management support.
Technical support (SC41) involves providing timely assistance to resolve system issues encountered by learners
and instructors. Responsive support minimizes disruptions, reduces frustration, and reinforces confidence in
platform use. Training availability (SC42) refers to structured programs that build users' capacity to understand
and apply adaptive features effectively. Such training reduces resistance to technological change and promotes
pedagogically informed use of AI-based systems (Al-Azawei et al., 2019). Policy and management support
(SC43) captures institutional commitment through strategic planning, resource allocation, and governance
structures. Clear policies related to platform adoption, data governance, and pedagogical integration create an
enabling environment in which adaptive learning initiatives can be scaled and sustained. Collectively,
institutional support acts as a critical enabler that amplifies the impact of technological, content-related, and
user-level factors within adaptive learning ecosystems.
RESEARCH METHODOLOGY
This study adopts a quantitative, multi-criteria decision-making approach using the Fuzzy Analytic Hierarchy
Process (Fuzzy AHP) to evaluate AI-based adaptive learning platforms from the perspective of experienced
higher education users. A survey-based research design is employed to capture respondents' perceptions and
judgments, which are inherently subjective and uncertain, making fuzzy logic an appropriate analytical tool.
Data Collection
Primary data were collected through a structured questionnaire administered to 150 faculty members teaching in
undergraduate and postgraduate programs across multiple disciplines in higher education institutions. The sample
size is considered adequate for fuzzy AHP–based perception studies, as the method emphasizes the quality and
consistency of judgments rather than large-scale statistical generalization. Respondents were selected using a
purposive sampling technique to ensure that all participants had prior experience using at least one AI-based
adaptive learning platform, such as intelligent tutoring systems, adaptive LMS modules, or AI-driven learning
applications (Van, 2011). This criterion ensured that respondents were capable of providing informed evaluations
rather than speculative opinions.
Development of Survey Instrument
The questionnaire was designed in two sections. The first section captured demographic and contextual
information, including level of study, field of specialization, and prior exposure to AI-based learning platforms.
The second section focused on pairwise comparisons of evaluation criteria using linguistic terms (e.g., “equally
important,” “moderately more important,” “strongly more important”). These linguistic judgments were later
converted into triangular fuzzy numbers, following established fuzzy AHP conventions. Prior to full deployment,
the questionnaire was reviewed by academic experts and pilot-tested with a small group of students to ensure
clarity, relevance, and reliability of the items.
Data Collection Procedure
The survey was administered to faculty members working in higher education institutions. Participation was
voluntary, and respondents were informed about the academic purpose of the study. Data collection was
conducted from July 2025 to November 2025. Completed questionnaires were screened for completeness and
logical consistency before being included in the analysis. Inconsistent or incomplete responses were excluded to
maintain data quality. Finally, 150 questionnaires were retained for analysis. The demographic profiles of the
respondents are shown in Table 1:
Table 1: Demographic Profile of the Respondents

Variable                           | Category                  | Frequency (n) | Percentage (%)
Type of Institution                | Government                | 70            | 46.7
                                   | Private                   | 80            | 53.3
Gender                             | Male                      | 95            | 63.3
                                   | Female                    | 55            | 36.7
Age Group (Years)                  | Below 30                  | 15            | 10.0
                                   | 30–39                     | 45            | 30.0
                                   | 40–49                     | 50            | 33.3
                                   | 50–59                     | 30            | 20.0
                                   | 60 and above              | 10            | 6.7
Academic Rank                      | Assistant Professor       | 80            | 53.3
                                   | Associate Professor       | 40            | 26.7
                                   | Professor                 | 20            | 13.3
                                   | Adjunct/Other             | 10            | 6.7
Highest Qualification              | PhD                       | 70            | 46.7
                                   | MPhil / Master’s          | 60            | 40.0
                                   | PG Diploma / Other        | 15            | 10.0
                                   | Bachelor’s                | 5             | 3.3
Teaching Experience                | Less than 5 years         | 30            | 20.0
                                   | 5–10 years                | 45            | 30.0
                                   | 11–20 years               | 50            | 33.3
                                   | More than 20 years        | 25            | 16.7
Discipline                         | Sciences                  | 40            | 26.7
                                   | Engineering & Technology  | 30            | 20.0
                                   | Humanities                | 25            | 16.7
                                   | Social Sciences           | 20            | 13.3
                                   | Management                | 20            | 13.3
                                   | Education                 | 10            | 6.7
                                   | Others                    | 5             | 3.3
Digital Literacy Level             | High                      | 60            | 40.0
                                   | Medium                    | 65            | 43.3
                                   | Low                       | 25            | 16.7
E-learning Training (Last 2 Years) | Yes                       | 55            | 36.7
                                   | No                        | 95            | 63.3
Employment Status                  | Full-time                 | 130           | 86.7
                                   | Part-time / Adjunct       | 20            | 13.3

Source: Authors' Own Work
The four main criteria, Technological Factors (C1), Content Quality (C2), User Factors (C3), and Institutional
Support (C4), are described in Table 2.
Table 2: Criteria Used in the Fuzzy AHP Analysis

Criterion                  | Description
Technological Factors (C1) | The core operational foundation of AI-based adaptive learning platforms in higher education.
Content Quality (C2)       | The pedagogical substance of adaptive learning platforms, which plays a decisive role in shaping learning outcomes.
User Factors (C3)          | Learner-related characteristics, including motivation, digital literacy, and engagement, that shape how individuals interact with the platform.
Institutional Support (C4) | The organizational conditions that enable the successful adoption and sustained use of AI-based adaptive learning platforms.

Source: Authors' compilation
Fuzzy AHP Analysis Framework and Hierarchical Model
This study employs the Fuzzy Analytic Hierarchy Process (FAHP) to evaluate the factors influencing the
effectiveness of e-learning platforms. FAHP is selected due to its capability to handle the uncertainty and
vagueness inherent in expert judgments. The decision problem is structured hierarchically into three levels. The
first level represents the overall goal, which is to determine the relative importance of factors affecting e-learning
platforms. The second level consists of four main criteria: Technological Factors (C1), Content Quality (C2),
User Factors (C3), and Institutional Support (C4). The third level comprises twelve sub-criteria associated with
these four criteria.
Linguistic Scale and Triangular Fuzzy Numbers
Expert judgments were collected using linguistic terms, which were transformed into triangular fuzzy numbers
(TFNs) to capture uncertainty. The linguistic variables and their corresponding TFNs used in this study are
adopted from Chen (2000) and Kahraman et al. (2004), as shown in Table 3.
Table 3: Linguistic scale and corresponding TFNs

Linguistic term              | TFN (l, m, u)
Equal importance             | (1, 1, 1)
Slightly more important      | (1, 3, 5)
Moderately more important    | (3, 5, 7)
Strongly more important      | (5, 7, 9)
Very strongly more important | (7, 9, 9)
Reciprocal                   | (1/u, 1/m, 1/l)
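The scale in Table 3 can be encoded directly. The sketch below (names of our own choosing, for illustration only) maps the linguistic terms to TFNs and derives reciprocals:

```python
# Triangular fuzzy numbers (l, m, u) for the linguistic scale of Table 3.
TFN_SCALE = {
    "equal importance": (1, 1, 1),
    "slightly more important": (1, 3, 5),
    "moderately more important": (3, 5, 7),
    "strongly more important": (5, 7, 9),
    "very strongly more important": (7, 9, 9),
}

def reciprocal(tfn):
    """Reciprocal of a TFN: (l, m, u) -> (1/u, 1/m, 1/l)."""
    l, m, u = tfn
    return (1 / u, 1 / m, 1 / l)
```

For example, `reciprocal((3, 5, 7))` returns `(1/7, 1/5, 1/3)`, which is exactly how the lower-triangular entries of the comparison matrices below are obtained.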
Criteria-Level Analysis
Let Ã = [ãij] denote the fuzzy pairwise comparison matrix at the criteria level, where each element
ãij = (lij, mij, uij) is a triangular fuzzy number (TFN) representing the relative importance of criterion Ci
over criterion Cj. Here, lij, mij, and uij correspond to the lower, most likely, and upper values of expert
judgment, respectively. The diagonal elements of the matrix are equal to (1, 1, 1), indicating equal importance
of a criterion with itself, while the off-diagonal elements and their reciprocals capture asymmetric preferences
between criteria under uncertainty.
Based on expert evaluations and the predefined fuzzy linguistic scale, the fuzzy pairwise comparison matrix of
the four main criteria—Technological Factors (C1), Content Quality (C2), User Factors (C3), and Institutional
Support (C4)—is constructed as shown in Table 4.
Table 4: Fuzzy pairwise comparison matrix of criteria

   | C1            | C2            | C3            | C4
C1 | (1,1,1)       | (3,5,7)       | (5,7,9)       | (3,5,7)
C2 | (1/7,1/5,1/3) | (1,1,1)       | (3,5,7)       | (3,5,7)
C3 | (1/9,1/7,1/5) | (1/7,1/5,1/3) | (1,1,1)       | (3,5,7)
C4 | (1/7,1/5,1/3) | (1/7,1/5,1/3) | (1/7,1/5,1/3) | (1,1,1)
The reciprocal fuzzy numbers ensure logical consistency within the matrix, such that
ãji = (1/uij, 1/mij, 1/lij) for all i, j.
This fuzzy pairwise comparison matrix serves as the foundational input for the subsequent FAHP computations,
including fuzzy row summation, synthetic extent calculation, degree of possibility assessment, and normalization
of criteria weights.
Step 1: Row Sum Computation (Σ̃Ci)
After constructing the fuzzy pairwise comparison matrix of criteria (Table 4), the first computational step
involves calculating the fuzzy row sum for each criterion. The fuzzy row sum represents the aggregated
importance of a given criterion relative to all other criteria under consideration.
Mathematically, the fuzzy row sum for criterion Ci is computed as:

Σ̃Ci = ãi1 ⊕ ãi2 ⊕ … ⊕ ãin = (Σj lij, Σj mij, Σj uij)

where ãij = (lij, mij, uij) is a triangular fuzzy number. The summation is performed component-wise for the
lower, middle, and upper values.
The resulting fuzzy row sums for all criteria are presented in Table 5. These values summarize the cumulative
dominance of each criterion across all pairwise comparisons.

Table 5: Fuzzy row sums of criteria

Criterion | Σ̃Ci (l, m, u)
C1        | (12.00, 18.00, 24.00)
C2        | (7.14, 11.20, 15.33)
C3        | (4.25, 6.34, 8.53)
C4        | (1.43, 1.60, 2.00)
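The component-wise row sums can be checked in a few lines of Python (variable names are ours; the matrix is the one in Table 4). Note that the lower bound of the C4 row sum computes to 1.43 (= 1 + 3 × 1/7):

```python
# Fuzzy pairwise comparison matrix of Table 4, one list of TFNs per criterion.
A = [
    [(1, 1, 1), (3, 5, 7), (5, 7, 9), (3, 5, 7)],                        # C1
    [(1/7, 1/5, 1/3), (1, 1, 1), (3, 5, 7), (3, 5, 7)],                  # C2
    [(1/9, 1/7, 1/5), (1/7, 1/5, 1/3), (1, 1, 1), (3, 5, 7)],            # C3
    [(1/7, 1/5, 1/3), (1/7, 1/5, 1/3), (1/7, 1/5, 1/3), (1, 1, 1)],      # C4
]

def fuzzy_row_sum(row):
    """Component-wise sum of a list of TFNs: (sum of l, sum of m, sum of u)."""
    return tuple(sum(t[k] for t in row) for k in range(3))

row_sums = [fuzzy_row_sum(row) for row in A]
for name, rs in zip(["C1", "C2", "C3", "C4"], row_sums):
    print(name, tuple(round(x, 2) for x in rs))
```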
The synthetic extent values S̃i = Σ̃Ci ⊗ (Σ̃C1 ⊕ Σ̃C2 ⊕ … ⊕ Σ̃Cn)⁻¹ integrate each criterion's fuzzy row sum
with the total fuzzy importance of all criteria. This normalization allows for meaningful comparison across
criteria by accounting for their relative proportions within the overall decision space.
The degree of possibility that S̃2 = (l2, m2, u2) is greater than S̃1 = (l1, m1, u1) is defined as:

V(S̃2 ≥ S̃1) = 1, if m2 ≥ m1
V(S̃2 ≥ S̃1) = 0, if l1 ≥ u2
V(S̃2 ≥ S̃1) = (l1 − u2) / ((m2 − u2) − (m1 − l1)), otherwise
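The piecewise definition above translates directly into code. This is a minimal sketch; the function name `v` is our own:

```python
def v(s2, s1):
    """Degree of possibility V(S2 >= S1) for TFNs s2 = (l2, m2, u2) and
    s1 = (l1, m1, u1), following Chang's (1996) extent analysis."""
    l1, m1, u1 = s1
    l2, m2, u2 = s2
    if m2 >= m1:
        return 1.0          # S2's peak is at or beyond S1's peak
    if l1 >= u2:
        return 0.0          # the supports do not overlap; S1 fully dominates
    # Ordinate of the intersection point of the two membership functions.
    return (l1 - u2) / ((m2 - u2) - (m1 - l1))

print(v((3, 5, 7), (1, 2, 3)))  # 1.0: clearly to the right, complete dominance
print(v((1, 2, 4), (3, 5, 7)))  # partial overlap gives a value in (0, 1)
```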
Criteria with larger synthetic extent values possess a higher degree of dominance in the decision hierarchy. In
this study, C1 demonstrates the greatest synthetic extent, reinforcing its leading role, followed by C2, C3, and
C4. The decreasing trend in synthetic extent values confirms a clear prioritization structure among the criteria.
The degree of possibility analysis quantifies the likelihood that one criterion is more important than another. A
higher degree of possibility indicates stronger dominance. The results show that Technological Factors (C1) have
a high possibility degree over all other criteria, while Institutional Support (C4) exhibits the weakest dominance.
This outcome reflects a consensus among experts that technical infrastructure and platform functionality
outweigh institutional and policy-related aspects in determining e-learning effectiveness.
The normalized criteria weights presented in Table 6 provide the final relative importance of each criterion.
Following Chang's extent analysis, each criterion first receives the non-fuzzy score
d′(Ci) = min j≠i V(S̃i ≥ S̃j), and the normalized weight vector is calculated as:

wi = d′(Ci) / Σk d′(Ck)
Table 6: Final weights of criteria

Criterion | Weight
C1        | 0.47
C2        | 0.32
C3        | 0.15
C4        | 0.06
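The whole criteria-level computation (row sums, synthetic extents, degrees of possibility, normalization) can be sketched end to end. This is an illustrative reimplementation of Chang's extent analysis, not the authors' exact computation: the weights in Table 6 also reflect judgment aggregation and rounding, so the sketch is expected to reproduce the ranking C1 > C2 > C3 > C4 rather than the exact published values (strict extent analysis drives the weakest criterion toward zero).

```python
def chang_weights(matrix):
    """Normalized weights from a fuzzy pairwise comparison matrix (lists of
    TFN rows), via Chang's (1996) extent analysis."""
    n = len(matrix)
    # Component-wise fuzzy row sums and their grand total.
    row_sums = [tuple(sum(t[k] for t in row) for k in range(3)) for row in matrix]
    total = tuple(sum(rs[k] for rs in row_sums) for k in range(3))
    # Synthetic extent S_i = RS_i (x) total^(-1), i.e. (l_i/U, m_i/M, u_i/L).
    s = [(rs[0] / total[2], rs[1] / total[1], rs[2] / total[0]) for rs in row_sums]

    def v(s2, s1):  # degree of possibility V(S2 >= S1)
        l1, m1, u1 = s1
        l2, m2, u2 = s2
        if m2 >= m1:
            return 1.0
        if l1 >= u2:
            return 0.0
        return (l1 - u2) / ((m2 - u2) - (m1 - l1))

    # d'(C_i) = min over j != i of V(S_i >= S_j), then normalize.
    d = [min(v(s[i], s[j]) for j in range(n) if j != i) for i in range(n)]
    return [di / sum(d) for di in d]

A = [
    [(1, 1, 1), (3, 5, 7), (5, 7, 9), (3, 5, 7)],
    [(1/7, 1/5, 1/3), (1, 1, 1), (3, 5, 7), (3, 5, 7)],
    [(1/9, 1/7, 1/5), (1/7, 1/5, 1/3), (1, 1, 1), (3, 5, 7)],
    [(1/7, 1/5, 1/3), (1/7, 1/5, 1/3), (1/7, 1/5, 1/3), (1, 1, 1)],
]
print([round(w, 3) for w in chang_weights(A)])  # decreasing: C1 > C2 > C3 > C4
```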
The fuzzy row sums in Table 5 indicate that Technological Factors (C1) possess the highest aggregated
importance, followed by Content Quality (C2), User Factors (C3), and Institutional Support (C4). This
hierarchy is further confirmed by the normalized criteria weights shown in Table 6, where C1 attains the highest
weight (0.47). These findings suggest that system-related and technological considerations play a dominant role
in influencing the effectiveness of e-learning platforms, while institutional and administrative factors exert
comparatively less influence.
Sub-criteria analysis
Tables 7–10 present the fuzzy pairwise comparison matrices of sub-criteria under each main criterion. All matrices were constructed using triangular fuzzy numbers (TFNs) based on expert judgments and the predefined linguistic scale, with reciprocal values applied to maintain logical consistency.
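The reciprocity rule behind these matrices is that the reciprocal of a TFN (l, m, u) is (1/u, 1/m, 1/l). A minimal helper illustrating this (the function name is hypothetical):

```python
def tfn_reciprocal(tfn):
    """Reciprocal of a triangular fuzzy number (l, m, u) -> (1/u, 1/m, 1/l)."""
    l, m, u = tfn
    return (1 / u, 1 / m, 1 / l)

# The lower-triangle entries of Tables 7-10 are reciprocals of the upper triangle,
# e.g. the reciprocal of (3,5,7) is (1/7, 1/5, 1/3):
print(tfn_reciprocal((3, 5, 7)))
```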
Table 7: Fuzzy Pairwise Comparison Matrix for Technological Factors (C1)

                                 SC11             SC12             SC13
SC11 – System usability          (1,1,1)          (3,5,7)          (5,7,9)
SC12 – Platform reliability      (1/7,1/5,1/3)    (1,1,1)          (3,5,7)
SC13 – Internet accessibility    (1/9,1/7,1/5)    (1/7,1/5,1/3)    (1,1,1)
Table 8: Fuzzy Pairwise Comparison Matrix for Content Quality (C2)

                                 SC21             SC22             SC23
SC21 – Content relevance         (1,1,1)          (3,5,7)          (5,7,9)
SC22 – Interactivity             (1/7,1/5,1/3)    (1,1,1)          (3,5,7)
SC23 – Multimedia support        (1/9,1/7,1/5)    (1/7,1/5,1/3)    (1,1,1)
Table 9: Fuzzy Pairwise Comparison Matrix for User Factors (C3)

                                 SC31             SC32             SC33
SC31 – Learner motivation        (1,1,1)          (3,5,7)          (5,7,9)
SC32 – Digital literacy          (1/7,1/5,1/3)    (1,1,1)          (3,5,7)
SC33 – Engagement level          (1/9,1/7,1/5)    (1/7,1/5,1/3)    (1,1,1)
Table 10: Fuzzy Pairwise Comparison Matrix for Institutional Support (C4)

                                      SC41             SC42             SC43
SC41 – Technical support              (1,1,1)          (3,5,7)          (5,7,9)
SC42 – Training availability          (1/7,1/5,1/3)    (1,1,1)          (3,5,7)
SC43 – Policy & management support    (1/9,1/7,1/5)    (1/7,1/5,1/3)    (1,1,1)
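Since Tables 7–10 share an identical 3×3 structure, the local-weight calculation can be sketched once. The following Python sketch implements Chang's extent analysis as described above; it is an illustration under my own implementation choices, not the authors' code, and a straightforward implementation yields weights of roughly 0.70/0.30/0.00 rather than the paper's reported 0.79/0.21/0.00, a difference that can arise from alternative aggregation or defuzzification details:

```python
def extent_weights(matrix):
    """Chang's extent analysis: TFN pairwise matrix -> crisp priority weights."""
    n = len(matrix)
    # Fuzzy synthetic extent S_i = (row sum) (x) (1/u_total, 1/m_total, 1/l_total)
    row_sums = [tuple(sum(t[k] for t in row) for k in range(3)) for row in matrix]
    total = tuple(sum(r[k] for r in row_sums) for k in range(3))
    S = [(r[0] / total[2], r[1] / total[1], r[2] / total[0]) for r in row_sums]

    def V(m2, m1):  # degree of possibility V(M2 >= M1)
        l1, mid1, u1 = m1
        l2, mid2, u2 = m2
        if mid2 >= mid1:
            return 1.0
        if l1 >= u2:
            return 0.0
        return (l1 - u2) / ((mid2 - u2) - (mid1 - l1))

    # d'(A_i) = min over j != i of V(S_i >= S_j), then normalize
    d = [min(V(S[i], S[j]) for j in range(n) if j != i) for i in range(n)]
    return [di / sum(d) for di in d]

# Shared 3x3 matrix from Tables 7-10
M = [[(1, 1, 1),       (3, 5, 7),       (5, 7, 9)],
     [(1/7, 1/5, 1/3), (1, 1, 1),       (3, 5, 7)],
     [(1/9, 1/7, 1/5), (1/7, 1/5, 1/3), (1, 1, 1)]]
weights = extent_weights(M)
print([round(w, 3) for w in weights])  # → [0.702, 0.298, 0.0]
```

Note that the least important sub-criterion receives a weight of exactly zero, a known property of Chang's minimum operator when one synthetic extent lies entirely below the others.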
Further, Tables 11–14 present the degree of possibility values, local weights, and global weights of the sub-criteria under each criterion, obtained using Chang's extent analysis method (Chang, 1996).
Table 11: Degree of Possibility and Weights for Sub-Criteria under Technological Factors (C1)

Sub-criterion                    Degree of Possibility (dᵢ)    Local Weight    Global Weight
SC11 – System usability          1.00                          0.79            0.37
SC12 – Platform reliability      0.26                          0.21            0.10
SC13 – Internet accessibility    0.00                          0.00            0.00

Note: Global weights are computed using the criterion weight of C1 (0.47) from Table 6.
Table 12: Degree of Possibility and Weights for Sub-Criteria under Content Quality (C2)

Sub-criterion                    Degree of Possibility (dᵢ)    Local Weight    Global Weight
SC21 – Content relevance         1.00                          0.79            0.25
SC22 – Interactivity             0.26                          0.21            0.07
SC23 – Multimedia support        0.00                          0.00            0.00

Note: Global weights are computed using the criterion weight of C2 (0.32) from Table 6.
Table 13: Degree of Possibility and Weights for Sub-Criteria under User Factors (C3)

Sub-criterion                    Degree of Possibility (dᵢ)    Local Weight    Global Weight
SC31 – Learner motivation        1.00                          0.79            0.12
SC32 – Digital literacy          0.26                          0.21            0.03
SC33 – Engagement level          0.00                          0.00            0.00

Note: Global weights are computed using the criterion weight of C3 (0.15) from Table 6.
Table 14: Degree of Possibility and Weights for Sub-Criteria under Institutional Support (C4)

Sub-criterion                         Degree of Possibility (dᵢ)    Local Weight    Global Weight
SC41 – Technical support              1.00                          0.79            0.047
SC42 – Training availability          0.26                          0.21            0.013
SC43 – Policy & management support    0.00                          0.00            0.00

Note: Global weights are computed using the criterion weight of C4 (0.06) from Table 6.
FINDINGS AND DISCUSSION
Within the Technological Factors category, the comparison matrix (Table 7) indicates that System Usability (SC11) is consistently judged to be more important than both Platform Reliability (SC12) and Internet Accessibility (SC13). The strong fuzzy dominance of SC11 over SC13 reflects the experts' emphasis on intuitive interface design and ease of navigation as critical determinants of effective e-learning platforms. Platform Reliability (SC12) is considered moderately more important than Internet Accessibility, highlighting the necessity of stable system performance once basic connectivity requirements are met. Overall, the matrix
suggests that usability and reliability are prioritized over infrastructure-related concerns within the technological
dimension.
Within the Content Quality category, the fuzzy comparison matrix (Table 8) demonstrates a clear preference for Content Relevance (SC21) over both Interactivity (SC22) and Multimedia Support (SC23).
Experts perceive meaningful, well-structured content as the foundational element of effective e-learning, while
interactive features are viewed as supportive but secondary. Multimedia support receives the lowest relative
importance, indicating that technological enhancements are valued only when they contribute directly to content
clarity and learning outcomes. This pattern underscores the primacy of pedagogical value over presentation
features.
As shown in Table 13, under User Factors (C3), Learner Motivation (SC31) dominates the user-related sub-
criteria, reflecting its central role in sustaining engagement and learning persistence in online environments.
Digital Literacy (SC32) is considered moderately important, suggesting that basic technical competence is
necessary but not sufficient to ensure successful learning. Engagement Level (SC33) receives comparatively
lower importance, implying that engagement is often a consequence of motivation and platform design rather
than an independent driver.
The comparison matrix for Institutional Support (Table 10) indicates that Technical Support (SC41) is the most influential sub-criterion within this category. This highlights the importance of timely assistance and
troubleshooting services in maintaining uninterrupted access to e-learning platforms. Training Availability
(SC42) is viewed as moderately important, reflecting its role in facilitating effective system use. Policy and
Management Support (SC43) exhibits the lowest relative importance, suggesting that strategic and
administrative backing, while necessary, has a less direct impact on learners' immediate experiences.
Across all criteria, then, the sub-criteria comparison matrices reveal a consistent pattern:
practical, learner-centric factors such as usability, content relevance, and learner motivation are prioritized over
structural or administrative considerations. The dominance relationships observed in the fuzzy matrices align
with the global weight results, reinforcing the conclusion that effective e-learning platforms depend primarily
on usable technology and high-quality instructional content, supported by motivated users and responsive
technical assistance.
Academic and managerial implications
This study advances the e-learning literature by offering a structured fuzzy AHP–based prioritization of criteria
and sub-criteria that govern platform effectiveness under decision uncertainty. By integrating triangular fuzzy
numbers with Chang’s extent analysis, the proposed framework overcomes the limitations of conventional AHP
models that assume precise judgments, thereby providing a more realistic representation of expert cognition.
The findings empirically reinforce the dominance of technological usability, content relevance, and learner
motivation, contributing quantitative evidence to ongoing theoretical debates on technology-driven learning
effectiveness. Moreover, the hierarchical decomposition and global weighting of sub-criteria extend existing
conceptual models by revealing how micro-level factors collectively shape macro-level platform performance.
As such, the study offers a replicable multi-criteria decision-making (MCDM) framework that future researchers
can adapt to different educational contexts, emerging digital learning technologies, or hybrid learning
environments.
From a practical perspective, the results provide actionable insights for e-learning designers, institutional
decision-makers, and policy planners seeking to optimize digital learning systems. The high global weights
assigned to system usability and content relevance indicate that investments should prioritize intuitive interface
design and pedagogically aligned instructional materials over peripheral technological features. Platform
developers can use the derived weight rankings to guide resource allocation, feature prioritization, and system
upgrades, while educational institutions may leverage the results to inform procurement and evaluation of e-
learning solutions. Furthermore, the comparatively lower influence of institutional policy factors suggests that
operational support and learner-centric design yield more immediate benefits than administrative interventions
alone. Overall, the proposed FAHP framework serves as a decision-support tool that enables stakeholders to
align strategic planning with empirically grounded priority structures, thereby enhancing the effectiveness and
sustainability of e-learning platforms.
REFERENCES
1. Ayeni, A. O., Ovbiye, R. E., Onayemi, A. S., & Ojedele, K. E. (2025). AI-driven adaptive learning
platforms: Enhancing educational outcomes. World Journal of Advanced Research and Reviews.
2. Bond, M., Bedenlier, S., Marín, V. I., & Händel, M. (2020). Emergency remote teaching in higher
education: Mapping the first global online semester. International Journal of Educational Technology in
Higher Education, 17(1), 1–24.
3. Chang, D. Y. (1996). Applications of the extent analysis method on fuzzy AHP. European Journal of
Operational Research, 95(3), 649–655.
4. Chen, C.T. (2000). Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy
Sets and Systems, 114(1), 1–9. https://doi.org/10.1016/S0165-0114(97)00377-1
5. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Quarterly, 13(3), 319–340.
6. Dede, C. (2014). Digital tools for deeper learning. Educational Leadership, 72(4), 30–38.
7. Gligorea, I. (2023). Adaptive Learning Using Artificial Intelligence in e-Learning. Education Sciences.
8. Hariyanto (2025). Artificial intelligence in adaptive education: A systematic review. Springer.
9. Hatlevik, O. E., Gudmundsdottir, G. B., & Loi, M. (2018). Digital diversity among upper secondary
students. Computers & Education, 120, 137–149.
10. Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial intelligence in education: Promise and implications
for teaching and learning. Center for Curriculum Redesign.
11. Karnavas, S. I., Peteinatos, I., Kyriazis, A., & Barbounaki, S. G. (2025). Fuzzy Multi-Criteria Decision-
Making as a Human-Centered AI Approach. Information.
12. Kahraman, C., Cebeci, U., & Ruan, D. (2004). Multi-attribute comparison of catering service companies
using fuzzy AHP: The case of Turkey. International Journal of Production Economics, 87(2), 171–184.
https://doi.org/10.1016/S0925-5273(03)00099-9
13. Al-Azawei, A., Serenelli, F., & Lundqvist, K. (2019). Universal design for learning (UDL): A content
analysis of peer-reviewed journal papers from 2012 to 2018. Journal of the Scholarship of Teaching and
Learning, 19(3), 1–25.
14. Pane, J. F., Steiner, E. D., Baird, M. D., & Hamilton, L. S. (2017). Informing progress: Insights on
personalized learning implementation and effects. Educational Evaluation and Policy Analysis, 39(2),
165–189.
15. Pinela-Cárdenas, R. A., Echeverría-Vásquez, H. E., Peralta-Gamboa, D. A., et al. (2025). Exploring AI-
powered adaptive learning systems and their implementation in educational settings. International Journal
of Innovative Research and Scientific Studies.
16. Ryan, R. M., & Deci, E. L. (2020). Intrinsic and extrinsic motivation: Classic definitions and new
directions. Academic Press.
17. Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning?
Computers & Education, 50(4), 1183–1202.
18. Tan, L. Y., Hu, S., Yeo, D. J., & Cheong, K. H. (2025). Artificial intelligence-enabled adaptive learning
platforms: A review. Computers & Education: Artificial Intelligence.
19. VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other
tutoring systems. Educational Psychologist, 46(4), 197–221.
20. Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in
education. Learning, Media and Technology, 45(3), 223–235.
21. Xu, S. (2023). Multi-criteria decision making for determining best teaching methods using Fuzzy AHP.
Soft Computing.