INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,  
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)  
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue XII, December 2025  
Decision-Based Grading Model System and Student Performance  
Analysis Using Rule-Based Algorithm  
1 Angela L. Arago, 2 Mamerto C. Mendoza, 3 Isagani Mirador Tano, 4 Keno Piad, 5 Jovy Jay D. Cabrera,  
6 Jonilo Mababa, 7 Criselle J. Centeno, 8 Jayson Victoriano  
1,2,6 Graduate School Department, La Consolacion University, Bulihan, City of Malolos, Bulacan,  
Philippines  
3 Graduate School Department, Quezon City University, Novaliches San Bartolome, Quezon City,  
Philippines  
4 College of Information and Communications Technology, Bulacan State University, Malolos, Bulacan,
Philippines
5 Immaculate Conception I College of Arts and Technology, Santa Maria, Bulacan, Philippines
7 Information Technology Department, Pamantasan ng Lungsod ng Maynila, Intramuros, Manila,  
Philippines  
8 Information Technology Department, Bulacan State University, Malolos, Bulacan, Philippines  
Received: 07 January 2026; Accepted: 12 January 2026; Published: 16 January 2026  
ABSTRACT  
The continuous advancement of educational technologies has led to the development of innovative academic  
tools aimed at enhancing assessment methods and student performance analysis. This study introduces the  
Decision-Based Grading Model System and Student Performance Analysis Using Rule-Based Algorithm, a  
system designed to modernize the grading process and provide tailored academic support. The system features  
a flexible grading simulator that allows educators to set minimum passing scores based on predefined parameters  
such as course requirements, learning outcomes, and institutional policies. It also integrates a rule-based  
recommendation system that suggests appropriate learning materials and assessments for students who require  
remediation. The study utilized both qualitative and quantitative approaches, involving expert validation, user  
feedback, and system evaluation through the ISO/IEC 25010 Software Quality Model. Results show high levels  
of effectiveness in functionality, performance efficiency, usability, reliability, security, maintainability, and  
portability. Additionally, accuracy metrics revealed 80% precision, 89% recall, and an F1-score of 84% for the  
recommendation system, confirming its capacity to deliver relevant interventions. The system promotes  
academic transparency, reduces manual workload, and aligns grading and assessment strategies with actual  
student needs. Overall, the study contributes to the evolving landscape of educational technology by offering a  
dynamic, data-driven approach to academic management.  
Keywords: Automated Assessment, Decision-Based Grading, Educational Technology, Rule-Based Algorithm,
Student Performance Analysis
INTRODUCTION  
In modern education, technological innovations continue to reshape the way academic performance is assessed  
and monitored. The growing trend in higher education emphasizes the integration of intelligent systems capable  
of providing data-driven insights and real-time feedback to enhance both teaching and learning outcomes. With  
the advancement of artificial intelligence, automation, and rule-based algorithms, educational institutions are  
increasingly adopting systems that support outcome-based learning and personalized academic guidance. These  
developments align with the global movement toward digital transformation in education, where automation not  
only improves administrative efficiency but also strengthens pedagogical strategies by providing learners with  
transparent and actionable information about their academic progress. In this context, automated grading systems  
and analytical tools have become essential instruments in modern classrooms, offering streamlined, accurate,  
and objective performance evaluations that contribute to the overall improvement of educational quality.  
Despite the ongoing integration of technology in the education sector, significant issues persist in current grading  
models and performance evaluation systems. Traditional and semi-automated grading systems primarily focus  
on computing results rather than guiding students through a continuous and personalized learning process.  
Existing models often fail to provide students with actionable insights into how they can improve their standing  
or achieve specific target grades. Many systems lack predictive analysis capabilities and do not integrate  
intelligent algorithms that could simulate different academic outcomes based on varying performance scenarios.  
Additionally, the absence of theoretical alignment with educational frameworks such as Bloom’s Taxonomy,  
Constructivism, and Outcome-Based Education limits the pedagogical effectiveness of these systems. In the case  
of Quezon City University, for example, the current grading process still involves manual encoding of grades  
into a Management Information System template and submission through flash drives, a method prone to human
error, data inconsistency, and delays in grade reporting. These limitations underscore the pressing need for a  
grading model that not only automates calculations but also provides decision-based insights for students and  
educators alike.  
To address these challenges, this study aims to develop a Decision-Based Grading Model System and Student  
Performance Analysis Using Rule-Based Algorithm, which integrates automation with intelligent decision-  
making mechanisms. The primary objective of this study is to design a system that can simulate academic  
outcomes, analyze student performance trends, and recommend specific actions that lead to academic success.  
The system utilizes a rule-based algorithm to evaluate assessment weights, participation metrics, and  
examination results, providing real-time feedback and performance simulations.  
By employing a decision-based approach, the proposed model goes beyond traditional computation by  
identifying the minimum required scores students need to achieve in future assessments to meet academic targets.  
The system is further grounded in educational theories, incorporating elements of Bloom’s Taxonomy to assess  
cognitive performance levels, Constructivism to promote self-regulated learning, and Outcome-Based Education  
to ensure alignment with course objectives and expected learning outcomes. Through these components, the  
study aims to create a data-informed educational tool that enhances both the accuracy of grading and the quality  
of academic feedback.  
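The minimum-score identification described above can be sketched as a simple weighted-grade computation. The component names and weights below are illustrative assumptions, not the system's actual grading scheme.

```python
def required_score(current, weights, target, pending):
    """Return the minimum score (0-100) needed on the pending
    component for the weighted final grade to reach `target`.

    `current` maps completed components to scores (0-100);
    `weights` maps every component to its weight (summing to 1).
    """
    earned = sum(current[c] * weights[c] for c in current)
    needed = (target - earned) / weights[pending]
    # Clamp below at 0; a value above 100 means the target is unreachable.
    return max(0.0, round(needed, 2))

# Hypothetical scheme: quizzes 30%, participation 20%, final exam 50%.
weights = {"quizzes": 0.30, "participation": 0.20, "final_exam": 0.50}
current = {"quizzes": 82, "participation": 90}
print(required_score(current, weights, 75.0, "final_exam"))  # 64.8
```

In this sketch, a student with 82 in quizzes and 90 in participation would need at least 64.8 on the final examination to reach a target grade of 75.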
The contribution of this study lies in its innovative approach to bridge the gap between traditional grading  
systems and intelligent educational analytics. Unlike conventional grading tools that merely record outcomes,  
the proposed Decision-Based Grading Model System transforms grading into a predictive and prescriptive  
process empowering both students and instructors to make informed academic decisions. For students, the  
system provides a personalized roadmap toward achieving their desired performance goals, reducing uncertainty  
and academic stress.  
For educators, it delivers automated and consistent evaluations that minimize errors and enhance instructional  
feedback. On an institutional level, the system contributes to improved efficiency, transparency, and  
accountability in academic management. By integrating rule-based algorithms within a decision-support  
framework, this study not only addresses the inefficiencies of manual grading systems but also introduces a  
novel pedagogical model that connects technological innovation with evidence-based educational improvement.  
Ultimately, the Decision-Based Grading Model System and Student Performance Analysis Using Rule-Based  
Algorithm aspires to redefine how academic success is measured, predicted, and achieved in the modern digital  
learning environment.  
Significance of the Study  
The development and implementation of a Decision-Based Grading Model System and Student Performance  
Analysis Using Rule-Based Algorithm have several benefits for various stakeholders in education, both within  
and outside QCU. These stakeholders include faculty members, students, educational institutions, researchers,  
and the broader academic community. Specifically, the study is helpful to the following beneficiaries:
Faculty/Professors. This allows educators to focus more on teaching and less on administrative tasks. The  
system's advanced data analytics enable faculty to effectively monitor student performance, providing a valuable  
tool for identifying trends and adjusting teaching methods to meet student needs.  
Students. The main beneficiary of the system is the student. The literature surrounding decision-based grading  
models, optimization algorithms, and rule-based recommendation systems suggests that such technologies hold  
considerable promise for improving educational outcomes. By providing students with real-time predictions of
the scores they need to pass and offering targeted recommendations for improvement, this system aims to  
empower students to take control of their academic success.  
Educational Institutions. Institutions experience administrative efficiencies and improved decision-making  
capabilities when they adopt such a system. Streamlining grade reporting and performance monitoring reduces  
operational costs and allows for the reallocation of resources to areas that directly impact student learning and  
institutional growth. The modernization of administrative processes through technology also enhances the  
institution's reputation, attracting prospective students and talented faculty members.  
Researchers and the Academe. The data generated by this system benefits not only researchers and the academic  
community but also other stakeholders. Educational researchers can access detailed and reliable data on student  
grades, allowing them to study and explore educational trends. This data is invaluable for testing and refining  
educational theories and practices. The system’s implementation and data analytics algorithms also provide  
fertile ground for further research, including algorithmic development and interdisciplinary studies combining  
educational theory, data science, and information technology.  
Policymakers, technological innovators, and parents. Policymakers can use the insights provided by the system  
to develop more informed educational policies that address equity, quality, and performance across schools.  
Technological innovators can learn from the challenges and solutions identified during the system's development  
to create similar systems in other domains. Parents and guardians gain a clearer view of their children's academic  
progress, which can enhance their involvement and support in their educational journeys.  
Scope and Delimitation  
This study focuses on the development and evaluation of a Decision-Based Grading Model System and Student  
Performance Analysis Using a Rule-Based Algorithm. The primary objective is to design a web-based system  
that enables instructors to efficiently upload and manage student grades, which are automatically integrated into  
an existing Management Information System (MIS) template. The system will utilize contemporary web  
technologies to ensure compatibility with the digital infrastructure commonly used in higher education  
institutions. It will be optimized for performance efficiency, allowing multiple users to access the platform  
simultaneously without delays, particularly during high-demand periods such as the end of academic terms.  
The system aims to provide both instructors and administrators with automated analytical insights into student  
performance, enabling early identification of at-risk students and supporting data-driven academic interventions.  
The study also includes the integration of a data analytics component, powered by a rule-based algorithm, to  
assess and summarize student performance data. The system’s overall quality and effectiveness will be evaluated  
according to the ISO/IEC 25010 software quality model, focusing on key attributes such as functionality,  
performance efficiency, usability, reliability, security, maintainability, and portability.  
The scope of this study extends to select academic institutions within Metro Manila and nearby areas, where  
respondents such as IT experts, faculty members, and school administrators can be easily accessed. These  
participants are chosen based on their relevance to system evaluation: IT experts for technical validation, and
administrators or faculty for practical and usability assessment. The system’s deployment and testing will  
primarily occur in institutional environments that share similar infrastructures and operational frameworks,  
allowing for realistic and context-appropriate evaluation.  
However, the study also recognizes several delimitations and limitations. Since purposive sampling is employed  
and the sample size is relatively small, the findings may not be statistically generalizable to all educational  
institutions beyond the selected study sites. The data used for performance analysis depends on the accuracy and  
completeness of institutional records, which may introduce data bias or inconsistencies affecting algorithmic  
predictions. Additionally, variations in institutional infrastructure, policies, and user adaptability may influence  
the system’s effectiveness and acceptance in other contexts. The study also acknowledges potential algorithmic  
bias, which may affect the accuracy of performance analytics, as well as challenges related to data privacy and  
information security.  
METHODOLOGY  
To guide the system development, the researcher employed the Agile methodology, which allows iterative  
design, continuous testing, and rapid integration of improvements based on quantitative evaluation metrics. This  
approach ensures that the system is built efficiently, meets predefined functional and technical requirements, and  
is optimized for real-world usability and performance. The combination of developmental and analytical  
techniques provides a structured framework to create a data-driven system capable of supporting academic  
grading, performance analysis, and decision-making in educational settings.  
Descriptive Analytics  
Descriptive analytics aims to provide a summary of historical data to understand patterns and trends. In this  
study, it involves analyzing student grades and performance metrics to generate insightful reports.  
1. Business Understanding:  
Define the objectives for data analysis, specifically focusing on summarizing student performance.  
2. Data Understanding:  
Collect data from various sources, including grade reports, attendance records, and assignment submissions.  
Explore the data to understand its structure and quality.  
3. Data Preparation:  
Clean and preprocess the data to ensure it is ready for analysis.  
Transform and normalize the data as necessary.  
4. Modeling:  
Utilize statistical methods and data visualization techniques to describe the data.
Generate reports and dashboards that provide insights into student performance.
5. Evaluation:  
Assess the quality and usefulness of the descriptive models.  
Validate the results with stakeholders to ensure they align with the intended objectives.  
6. Deployment:  
Integrate descriptive analytics into the grading system for continuous monitoring and reporting.  
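The modeling and reporting steps above can be illustrated with a minimal descriptive summary built on Python's standard library; the score values and the 75 passing threshold are hypothetical.

```python
from statistics import mean, median, stdev

def summarize(scores):
    """Produce a simple descriptive report for one assessment."""
    return {
        "count": len(scores),
        "mean": round(mean(scores), 2),
        "median": median(scores),
        "stdev": round(stdev(scores), 2),
        "passing_rate": round(sum(s >= 75 for s in scores) / len(scores), 2),
    }

# Hypothetical midterm scores for one class section.
report = summarize([88, 74, 91, 65, 79, 82, 70, 95])
print(report)
```

A report of this shape can feed directly into the dashboards described in the Modeling step, giving instructors an at-a-glance view of central tendency, spread, and passing rate.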
Optimization Algorithms  
Optimization algorithms are central to helping students meet specific academic goals by calculating the minimum required
performance in assessments and guiding them toward the most efficient paths for success. These algorithms are  
generally used to find the best solution under given constraints, such as grades, time, and available resources.  
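A minimal sketch of this constrained search, under assumed weights and a maximum raw score of 100 per assessment: the routine checks whether a target grade is still reachable and, if so, returns a uniform per-assessment requirement.

```python
def feasible_plan(earned, remaining, target):
    """Check whether a weighted `target` grade is still reachable.

    `earned` is the weighted score already secured; `remaining` maps
    each pending assessment to its weight (max raw score taken as 100).
    Returns a uniform per-assessment score plan, or None if the target
    is unreachable even with perfect scores.
    """
    best_case = earned + sum(100 * w for w in remaining.values())
    if best_case < target:
        return None
    total_w = sum(remaining.values())
    needed = max(0.0, (target - earned) / total_w)
    return {name: round(needed, 2) for name in remaining}

# Hypothetical remaining work: one quiz (10%) and the final exam (40%).
print(feasible_plan(earned=40.0, remaining={"quiz3": 0.1, "final": 0.4},
                    target=75.0))  # {'quiz3': 70.0, 'final': 70.0}
```

A production system could replace the uniform allocation with a proper solver that also weighs time and resource constraints, as the section notes; this sketch only demonstrates the feasibility check.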
Rule-Based Algorithms  
Rule-based algorithms rely on a set of predefined rules to make decisions based on inputs. These systems are highly transparent and
can provide actionable insights in a predictable and explainable manner, which is crucial in educational settings  
where students need clarity on why certain recommendations are made.  
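A transparent if-then rule set of the kind described might look like the following sketch; the thresholds and recommendation texts are illustrative assumptions, not the system's actual rules.

```python
def recommend(grade, attendance):
    """Apply ordered if-then rules; each fired rule adds one
    explainable recommendation."""
    actions = []
    if grade < 60:
        actions.append("Enroll in remedial sessions for core topics.")
    elif grade < 75:
        actions.append("Complete the supplementary practice set.")
    if attendance < 0.80:
        actions.append("Meet with the adviser about attendance.")
    if not actions:
        actions.append("On track: continue current study plan.")
    return actions

print(recommend(grade=68, attendance=0.72))
```

Because every recommendation maps back to a single fired rule, a student can always be told exactly which condition triggered the advice, which is the explainability property the section highlights.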
Integration of Optimization and Rule-Based Algorithms  
The integration of optimization algorithms and rule-based decision systems will enable a dynamic and
adaptive decision-support system that not only predicts and recommends but also adjusts its suggestions based
on the evolving performance of the student. For example, the optimization algorithm might determine that a student needs to
score 80% on an upcoming assignment to pass the course, while the rule-based system might recommend  
additional practice problems in weak areas, personalized study resources, and even time management strategies.  
ISO/IEC 25010 Software Quality Model
Figure 1. ISO/IEC 25010 Software Quality Model
Figure 1 illustrates the ISO/IEC 25010 Software Product Quality Model, which was utilized in this study to  
evaluate the developed system. The model establishes an internationally accepted framework for defining  
software quality using eight important characteristics: functional suitability, performance efficiency,  
compatibility, usability, reliability, security, maintainability, and portability. These characteristics were used to  
determine how well the system meets user and technical requirements, such as its ability to deliver correct and  
complete functions, perform efficiently with available resources, operate reliably under specified conditions,  
protect data and information, interact with other systems, adapt to changes, and function across multiple  
environments. The research uses the ISO/IEC 25010 quality model to ensure a systematic, objective, and  
thorough assessment of the system's overall quality and effectiveness.  
Data Preprocessing Methods  
In this study, proper preprocessing supports the Rule-Based Algorithm, enabling accurate, reproducible, and
data-driven grading decisions. Since the system heavily relies on real-time academic data, ensuring the accuracy  
and completeness of these inputs is vital for producing valid grading outcomes and reliable performance  
analyses.  
1. Data Input  
The first stage involves collecting essential student data such as examination scores, attendance records, and  
activity performance. These raw datasets form the foundation of the system, capturing a comprehensive picture  
of each student’s academic performance.  
2. Data Validation and Cleaning  
This stage ensures that the collected data are accurate, complete, and consistent before any grades are
computed. The following procedures are applied:
Handling Missing Values: Missing scores are imputed using mean or mode values, depending on the data  
type, while records with extensive missing data (greater than 20%) are excluded from computation.  
Outlier Detection and Correction: Extreme or implausible values are identified through statistical outlier
screening and cross-validated with university records to maintain grading accuracy.
Duplicate Removal: Repeated entries are eliminated to prevent bias in computed results and ensure  
fairness across grading outputs.  
Data Consistency Checks: Cross-referencing is conducted between system logs and original academic  
records to ensure alignment of all entries, reinforcing data integrity and minimizing computation errors.  
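The missing-value handling described above can be sketched as follows; the 20% exclusion threshold comes from the text, while the record layout and field names are hypothetical.

```python
from statistics import mean

def clean_records(records, fields):
    """Drop records missing more than 20% of their fields, then
    impute remaining missing numeric scores with the field mean."""
    kept = [r for r in records
            if sum(r.get(f) is None for f in fields) / len(fields) <= 0.20]
    for f in fields:
        present = [r[f] for r in kept if r[f] is not None]
        fill = round(mean(present), 2) if present else 0.0
        for r in kept:
            if r[f] is None:
                r[f] = fill
    return kept

fields = ["quiz1", "quiz2", "midterm", "final", "project"]
records = [
    {"quiz1": 80, "quiz2": None, "midterm": 70, "final": 75, "project": 85},
    {"quiz1": None, "quiz2": None, "midterm": None, "final": 60, "project": None},
    {"quiz1": 78, "quiz2": 90, "midterm": 72, "final": 81, "project": 88},
]
cleaned = clean_records(records, fields)
print(len(cleaned), cleaned[0]["quiz2"])  # 2 90
```

The second record, missing four of five fields, exceeds the 20% threshold and is excluded; the single missing quiz score in the first record is imputed from the remaining records' mean.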
3. Data Standardization  
To ensure comparability across various academic metrics:  
Continuous variables (exam scores, assignments) are normalized to a 0–100 scale.
Categorical variables (letter grades, course type) are numerically encoded for rule-based computation.  
Survey responses for system evaluation are converted into item-level variances, which support  
Cronbach’s Alpha reliability analysis.  
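The item-level-variance computation supporting Cronbach's Alpha can be sketched as below; the Likert ratings are hypothetical, and the formula is the standard one based on item variances and total-score variance.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item-level variances.

    `items[i]` holds all respondents' ratings for item i;
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
    """
    k = len(items)
    n = len(items[0])
    item_var = sum(pvariance(col) for col in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point Likert ratings: 3 survey items x 4 respondents.
items = [[5, 4, 4, 5], [4, 4, 3, 5], [5, 5, 4, 5]]
print(round(cronbach_alpha(items), 3))  # 0.857
```

Values near or above 0.7 are conventionally read as acceptable internal consistency, which is why the standardization step retains item-level variances rather than only aggregate scores.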
4. Integration with Rule-Based Grading System  
Following preprocessing, the cleaned datasets are integrated into the Rule-Based Algorithm Engine, where the  
decision-based grading workflow takes place:  
Apply Grading Rules: The engine evaluates if-then grading rules sequentially to match student
performance data with institutional standards.  
Assign Grade Based on Rule: Once the appropriate condition is satisfied, a grade is computed and stored.  
Generate Performance Analysis: The system interprets results to produce insights on strengths,  
weaknesses, and overall academic progress.  
Output Recommendations: Actionable feedback is generated, including remedial suggestions,  
Output Recommendations: Actionable feedback is generated, including remedial suggestions,
recognition of top performers, and alerts for instructors. The preprocessing steps feed directly into this
workflow, ensuring that each stage, from rule evaluation to performance interpretation, uses reliable
and standardized input data.
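The four workflow steps above can be sketched as a single pass through an ordered rule table; the grade bands and advice strings are illustrative assumptions, not the institution's official scale.

```python
# Ordered if-then rules: the first matching condition assigns the grade.
RULES = [
    (lambda s: s >= 90, "A"),
    (lambda s: s >= 80, "B"),
    (lambda s: s >= 75, "C"),
    (lambda s: True,    "F"),
]

def grade_and_advise(score):
    """Apply grading rules sequentially, then emit a recommendation."""
    for condition, grade in RULES:
        if condition(score):
            break
    advice = ("Refer to remediation materials." if grade == "F"
              else "Meets the passing standard.")
    return grade, advice

print(grade_and_advise(78))  # ('C', 'Meets the passing standard.')
print(grade_and_advise(62))  # ('F', 'Refer to remediation materials.')
```

Evaluating the rules in a fixed order mirrors the "Apply Grading Rules" and "Assign Grade Based on Rule" steps, while the returned advice corresponds to the "Output Recommendations" stage.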
5. Rule Derivation and Attribute Basis  
The rules embedded in the system’s algorithm are derived through a combined approach of expert knowledge  
and data-driven analysis. Expert faculty members and academic coordinators provided domain-specific insights  
and defined grading thresholds, which served as the foundation for the algorithm’s logic. These expert-defined  
rules were further validated and refined using historical student data collected from previous academic terms to  
ensure alignment with real-world performance trends.  
The key attributes considered in this process include examination scores, attendance records, class participation,  
project evaluations, and overall performance metrics. These attributes reflect the actual criteria used by  
instructors during manual grading and serve as measurable indicators for automated decision-making within the  
Rule-Based Algorithm Engine.  
6. Innovation and Real-World Validation  
The preprocessing methods support the innovative, decision-based adaptive grading approach, allowing the  
system to handle diverse student records effectively. The methodology was validated through a university pilot  
study, demonstrating that preprocessing ensures accurate grading outcomes, reliable performance analysis, and  
reproducible survey evaluations.  
Respondents of the Study  
Figure 2 below identifies the groups of respondents who assessed the developed system. The respondents
consist of school administrators, IT experts, and faculty members, selected based on their professional roles
and direct relevance to the system's evaluation.
Figure 2. Respondents of the Study  
It presents the identified respondents of the study, composed of three main groups, namely school
administrators, IT experts, and faculty members, with a total of thirty (30) participants. Each group
played an essential role in providing comprehensive insights that contributed to the development, validation, and  
evaluation of the proposed Decision-Based Grading Model System and Student Performance Analysis Using  
Rule-Based Algorithm. Their perspectives collectively ensured that the system was designed to address both the  
academic and technical aspects of student evaluation and performance analysis.  
Ethical Considerations  
This study strictly adheres to ethical standards throughout the data collection, analysis, and reporting processes.  
The researcher ensures that the rights, privacy, and welfare of all participants are protected. Specifically, the  
principles of confidentiality, anonymity, and informed consent are observed. All respondents are briefed  
regarding the purpose of the study, and their participation is entirely voluntary. Before participating, each  
respondent is asked to sign an informed consent form, acknowledging their understanding of the study’s  
objectives, procedures, and their right to withdraw at any point without any consequences.  
Confidentiality is maintained by ensuring that no personally identifiable information is disclosed in any part of  
the research report. Data collected from respondents will be stored securely and will only be accessible to the  
research team. Anonymity is also upheld, as participants’ names and identities will not be linked to their  
responses in any published material.  
Instruments of the Study  
The use of data-gathering tools will make it easier for the researcher to obtain data, use it for analysis, and  
interpret it to provide the knowledge needed to finish this study. To collect the necessary data for the study,  
various research instruments are used to capture the quantitative data needed for the development and evaluation  
of the system.  
During the pre-development phase, the researcher will conduct interviews as an initial tool. These interviews  
will take place face-to-face at the institution, where necessary data will be collected from respondents. The  
interviews will involve structured conversations in which the researcher will ask questions, and the respondents  
will provide answers.  
To complement the objective data, surveys and questionnaires will be used to gather insights from users. These
instruments are designed to measure user satisfaction, perceived usability, and the practical impact of the
system on daily academic and administrative activities. The surveys will include closed-ended questions for
statistical analysis and user feedback. The goal of the survey will be to understand the difficulties faced by  
faculty and students regarding grade distribution and receipt, assess the current system, and compare it to the  
proposed system.  
Figure 3. Likert Scale for Level of Effectiveness/Satisfaction  
Figure 3 presents the Likert scale used as the basis for the questionnaire designed for this study. The  
questionnaire will undergo a validation process to ensure its reliability and content validity prior to  
administration. The survey items are adapted from the ISO/IEC 25010 Software Product Quality Model  
developed by the International Organization for Standardization (ISO), which provides a standardized  
framework for assessing software quality through internal and external quality characteristics such as  
functionality, usability, reliability, performance efficiency, security, maintainability, and portability  
(International Organization for Standardization [ISO], 2011). The validated questionnaire will be distributed to
respondents using Google Forms to facilitate efficient and systematic data collection.  
Furthermore, structured interviews will be conducted with a selected group of participants to delve deeper into  
the system's effectiveness. These interviews aim to explore the personal experiences of faculty members,  
students and administrative staff with the system. By discussing their interactions and observations, the study  
will gain valuable insights into the system's usability, effectiveness, and areas for improvement.  
RESULTS AND DISCUSSIONS  
This chapter presents the results of the study through a comprehensive presentation, quantitative analysis, and  
interpretation of the data collected in accordance with the established research objectives. The findings are  
systematically organized based on the identified research problems and are supported by tables, figures, and  
narrative discussions that demonstrate how the data validate the evaluation of the Decision-Based Grading Model  
System and Student Performance Analysis Using a Rule-Based Algorithm. The primary aim of this chapter is to  
meaningfully interpret the quantitative results and derive insights that confirm the system’s effectiveness,  
usability, and functional performance.  
The data utilized in this phase of the study were obtained from respondents who were purposefully selected  
based on their expertise and relevant professional experience. These respondents consisted of School  
Administrators, IT Experts, and Faculty Members from various academic institutions and technology-related  
sectors. Their participation was essential, as their knowledge and experience in grading systems, educational  
management, and system development provided credible and valuable inputs for assessing the performance,  
reliability, and practicality of the proposed system.  
Results  
Demographic Profile of Respondents  
This presents the profile of the respondents who participated in the study. Each type of respondent was carefully  
selected based on their professional role and direct relevance to the study’s objectives. The combination of  
respondents from different fields ensures that the system is evaluated comprehensively from both educational  
and technological perspectives. Their input is essential for aligning the system’s features with real-world  
academic operations and technical requirements, making their feedback vital to the refinement and success of  
the proposed system.  
Figure 4. Frequency and Percentage Distribution of the Type of Respondent
As presented in Figure 4, the respondents in this study represent three professional categories: School  
Administrators, IT Experts, and Faculty Members. The distribution of respondents reflects diverse perspectives  
necessary for evaluating both the technical and functional aspects of the system. School Administrators account  
for 6.7% (2) of the total respondents. Their participation is vital as they provide insights into the system’s  
potential contributions to academic management, grading processes, and institutional policy adherence. IT  
Experts comprise the largest group, representing 50% (15) of the respondents. Their technical expertise allows  
for critical assessment of the system’s architecture, security protocols, algorithm design, and overall software  
performance. Their evaluation ensures that the system meets the required technological standards for efficiency  
and reliability. The remaining 43.3% (13) are Faculty Members, whose feedback is essential in determining the  
system’s practicality in classroom application. Their role focuses on evaluating usability, relevance in daily  
teaching activities, and the system’s potential to minimize the manual workload associated with student  
assessments. The varied composition of the respondents provides a comprehensive analysis of the system from  
both administrative and technological perspectives, contributing significantly to the validation and refinement  
of the proposed model.  
Figure 5 Frequency and Percentage Distribution of Respondents in Terms of School/Company  
As shown in Figure 5, the respondents in this study are affiliated with various academic institutions and private  
organizations, ensuring a diverse and comprehensive evaluation of the system. This diversity provides balanced  
insights from both the academic sector and the IT industry, which is essential for assessing the system's technical  
feasibility and educational applicability.  
The largest group of respondents came from Quezon City University, representing 56.7% of the total
participants. Their involvement is significant as they provide extensive academic feedback, particularly in  
understanding how the system functions within a real educational setting. This includes observations on  
usability, grading policies, and performance analysis integration.  
Accenture Philippines contributed 23.3% of the respondents. As a leading global IT services company, their  
participation ensures that the system’s design and technological aspects are aligned with industry standards.  
Their input is crucial in evaluating the technical robustness, data security, and scalability of the system.  
Other respondents came from Pamantasan ng Lungsod ng Maynila (6.7%), ANSI Information Systems, Inc.  
(3.3%), Enigma Sdn. Bhd. (3.3%), Presidential Management Staff (3.3%), and University of Caloocan (3.3%).  
These contributors provided additional perspectives from both academic institutions and government or private  
sectors, enhancing the study’s credibility by incorporating varied professional experiences.  
The participation of individuals from multiple organizations allows the study to gather comprehensive feedback  
on both the administrative use and the technological development of the system. Their insights play a vital role  
in ensuring the system is practical, user-friendly, and adaptable to various institutional contexts.  
Figure 6 Frequency and Percentage Distribution of the Age of Respondent  
As presented in Figure 6, the age distribution of the respondents demonstrates a relatively balanced  
representation across various age groups. The ages 27, 28, 29, 30, 31, 33, 34, 35, and 41 were the most frequently  
reported, each with a frequency of two respondents (6.67%), collectively holding the highest rank. In contrast,  
the remaining age groups 24, 25, 26, 32, 36, 37, 38, 39, 40, 45, 47, and 52 were each represented by one  
respondent (3.33%), sharing the second rank. This distribution indicates a concentration of respondents within  
the late twenties to mid-thirties, while still encompassing participants from both younger and older age brackets.  
Such diversity ensures the inclusion of perspectives from individuals at different stages of professional and  
personal development.  
Age is a relevant variable in this study, as it may influence respondents’ experiences, levels of technological  
familiarity, and professional judgment. Younger respondents may be more adept with contemporary  
technologies and innovative practices, whereas older respondents may contribute insights shaped by longer  
professional experience. Considering age in the analysis enhances the interpretation of the results and supports  
the validity of the findings by reflecting the perspectives of a diverse respondent population.  
Figure 7 Weighted Mean Rating on the Perspectives of the Respondents in Functional Suitability  
Figure 7 shows that the average mean scores are 4.86 for School Administrators and Faculty, 4.26 for IT Experts,  
and 4.56 overall, which is interpreted as Very Satisfied (VS). These results indicate that the respondents are  
generally very satisfied with the system. School Administrators and Faculty gave higher ratings, showing that  
the system is useful and appropriate for academic purposes. Although IT Experts gave slightly lower scores,  
their responses are still positive. Overall, the findings suggest that the system effectively meets user needs while  
allowing room for improvement.  
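The study does not state the cutoffs used to translate weighted means into satisfaction labels, but the reported ratings are consistent with a conventional five-point banding. The bands below are an inferred assumption (e.g., 4.46 is reported as Satisfied while 4.52 is Very Satisfied, suggesting a 4.50 boundary), not the study's documented scale:

```python
# Hypothetical interpretation bands for 5-point weighted means.
# Cutoffs are inferred from the ratings reported in Figures 7-13;
# the study does not publish its actual banding.
def interpret(mean: float) -> str:
    """Map a weighted mean (1.0-5.0) to a satisfaction label (assumed bands)."""
    if mean >= 4.50:
        return "Very Satisfied"
    if mean >= 3.50:
        return "Satisfied"
    if mean >= 2.50:
        return "Neutral"
    if mean >= 1.50:
        return "Dissatisfied"
    return "Very Dissatisfied"

# Reproduces the interpretations reported for Functional Suitability
# and Compatibility:
print(interpret(4.56))  # Very Satisfied
print(interpret(4.43))  # Satisfied
print(interpret(3.99))  # Satisfied
```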
Figure 8 Weighted Mean Rating on the Perspectives of the Respondents in Compatibility  
Figure 8 shows the respondents' assessment of the system’s Compatibility, focusing on Co-existence and  
Interoperability. The School Administrators/Faculty gave high ratings in both criteria with an average mean of  
4.86 (Very Satisfied). On the other hand, IT Experts provided more moderate ratings, with an average mean of  
3.99 (Satisfied). The overall combined mean is 4.43, which falls under the Satisfied category.  
The results suggest that while the system is viewed as highly compatible from an academic perspective, technical  
experts recommend further improvements in system integration and interoperability to ensure smoother  
operation with other platforms and technologies.  
Figure 9 Weighted Mean Rating on the Perspectives of the Respondents in Usability  
Figure 9 presents the respondents' evaluation of the system’s Usability, focusing on six specific criteria:  
Appropriateness Recognizability, Learnability, Operability, User Error Protection, User Interface Aesthetics,  
and Accessibility. The School Administrators/Faculty reported high levels of satisfaction across all indicators,  
with an average mean of 4.77 (Very Satisfied). By comparison, the IT Experts gave slightly lower scores in each
category, resulting in an average mean of 4.15 (Satisfied). The overall combined mean is 4.46, interpreted as  
Satisfied.  
The results suggest that the system is generally perceived as user-friendly, visually appealing, and accessible,  
particularly by academic users. However, the lower scores from IT experts highlight the need for further  
refinement in operability, learnability, and error protection to enhance the system’s overall usability for a wider  
range of users, including those with technical backgrounds.  
Figure 10 Weighted Mean Rating on the Perspectives of the Respondents in Reliability  
Figure 10 shows the respondents' assessment of the system’s Reliability, covering four criteria: Maturity,  
Availability, Fault Tolerance, and Recoverability. The School Administrators/Faculty expressed a high level of  
satisfaction, with an average mean of 4.83 (Very Satisfied), indicating strong confidence in the system’s stability  
and dependable performance. On the other hand, IT Experts provided a slightly lower average mean of 4.21  
(Satisfied), reflecting a more cautious view on the system’s resilience and error-handling capabilities. The overall  
combined mean is 4.52, which falls under Very Satisfied.  
The results suggest that the system is generally perceived as reliable, with strengths in availability and  
recoverability. While academic users are highly satisfied, technical experts recommend continued enhancements  
in fault tolerance and system maturity to ensure consistent performance, especially in larger or more complex  
environments.  
Figure 11 Weighted Mean Rating on the Perspectives of the Respondents in Security  
Figure 11 presents the respondents’ evaluation of the system’s security in terms of five key components:  
confidentiality, integrity, non-repudiation, accountability, and authenticity. The results show that School  
Administrators and Faculty rated the system highly across all security components, with an average mean of  
4.80, interpreted as Very Satisfied. In comparison, IT Experts provided slightly lower but still positive ratings,  
with an average mean of 4.21, interpreted as Satisfied. When combined, the overall mean score is 4.51, which  
corresponds to Very Satisfied. These findings indicate that the system is generally perceived as secure by the  
respondents, with strong confidence from academic users and positive acceptance from technical experts.  
The results indicate that the system is generally perceived as secure, with strong protection of data confidentiality  
and integrity. The academic respondents expressed high satisfaction, particularly in how the system ensures  
accountability and prevents unauthorized access. However, the IT experts' more cautious ratings suggest that  
while the system meets basic security standards, further technical validation and improvements in areas like non-  
repudiation and authenticity may be beneficial to enhance system trustworthiness in broader and more sensitive  
applications.  
Figure 12 Weighted Mean Rating on the Perspectives of the Respondents in Maintainability  
Figure 12 shows the respondents' evaluation of the system’s Maintainability, covering four criteria: Modularity,  
Reusability, Analyzability, and Testability. The School Administrators/Faculty rated the system highly, with an  
average mean of 4.77 (Very Satisfied). The IT Experts gave moderately lower ratings, resulting in an average  
mean of 4.28 (Satisfied). The overall combined mean is 4.53, which is interpreted as Very Satisfied.  
The findings suggest that the system is generally maintainable, with positive feedback on its modularity,  
reusability, and ease of analysis. These qualities indicate that the system is designed for efficient updates and  
debugging. However, the slightly lower ratings from IT Experts, particularly in Testability, highlight the need  
for further refinement in ensuring the system is easily testable and maintainable over time, especially from a  
technical management perspective.  
Figure 13 Weighted Mean Rating on the Perspectives of the Respondents in Portability  
Figure 13 presents the respondents' assessment of the system's Portability, focusing on three indicators: Adaptability, Installability, and Replaceability. The School Administrators/Faculty provided very high ratings, with an average mean of 4.88 (Very Satisfied), while the IT Experts rated the system with an average mean of 4.26 (Satisfied). The overall combined mean is 4.53, interpreted as Very Satisfied.
The results indicate that the system is highly portable, with strong adaptability to different environments and  
ease of installation and replacement as perceived by academic respondents. However, IT experts provided more  
moderate ratings, suggesting that while the system is generally portable, further technical assessments may be  
necessary to ensure smooth deployment across various platforms and operating conditions.  
Discussions  
Figure 14 Overall Weighted Mean Rating of the System
Figure 14 presents the consolidated evaluation of the system based on
the combined ratings across all quality criteria, including Functional Suitability, Performance Efficiency,  
Compatibility, Usability, Reliability, Security, Maintainability, and Portability. By summarizing the perspectives  
of both academic and technical respondents, this section provides a holistic view of the system’s overall  
performance and quality. The results serve as a comprehensive indicator of the system’s strengths, user  
satisfaction levels, and potential areas for improvement.  
In the area of Functional Suitability, the School Administrators/Faculty reported a weighted mean of 4.86  
(Highly Effective), while the IT Experts rated it at 4.26 (Effective). The combined overall mean is 4.56 (Highly  
Effective), reflecting the system’s strong alignment with user requirements and educational processes.  
For Performance Efficiency, the academic group rated the system 4.80 (Highly Effective), whereas the technical  
experts rated it 4.31 (Effective). The overall mean is 4.55 (Highly Effective), indicating that the system operates  
effectively in terms of processing time and resource utilization.  
Regarding Compatibility, the School Administrators and Faculty rated the system 4.86, interpreted as Highly  
Effective, while IT Experts gave a lower rating of 3.99, interpreted as Effective. The overall
combined mean of 4.43 is interpreted as Effective, indicating that the system generally integrates well with  
academic workflows, although technical experts noted potential areas for improving system interoperability.  
In terms of Usability, the School Administrators and Faculty provided a rating of 4.77 (Highly Effective),  
whereas IT Experts rated it 4.15 (Effective). The overall mean of 4.46 reflects an Effective level of usability,  
suggesting that the system is user-friendly for non-technical users, but additional improvements in learnability  
and ease of operation could further enhance the user experience.  
For Reliability, the academic respondents provided a 4.83 (Highly Effective) rating, while the technical group  
scored it 4.21 (Effective). The overall mean is 4.52 (Highly Effective), reflecting confidence in the system’s  
stability, fault tolerance, and recovery mechanisms.  
In terms of Security, the School Administrators and Faculty rated the system 4.80, interpreted as Highly  
Effective, while IT Experts gave a rating of 4.21, considered Effective. The overall mean of 4.51 (Highly  
Effective) indicates that the system provides strong data protection and confidentiality features, although  
technical experts recommended minor improvements to enhance overall security assurance.  
Regarding Maintainability, the School Administrators and Faculty rated the system 4.77 (Highly Effective), and  
IT Experts gave a rating of 4.28 (Effective). The overall mean of 4.53 (Highly Effective) suggests that the system  
is easy to maintain, with positive feedback on its modular design, reusability, and analyzability, supporting  
efficient updates and long-term usability.  
Lastly, in the category of Portability, the School Administrators/Faculty rated the system 4.88 (Highly Effective),  
while the IT Experts gave a 4.26 (Effective). The overall combined mean is 4.53 (Highly Effective), indicating  
that the system is adaptable and can be installed and operated in various environments without significant issues.  
Determining the Flexibility of Minimum Required Scores Using the Decision-Based Grading Model Simulator  
In this study, the Decision-Based Grading Model Simulator is utilized to determine and provide flexibility in  
setting the minimum required scores for students to pass a subject. This feature addresses the limitations of  
traditional grading systems that often apply a fixed standard passing grade regardless of subject difficulty,  
competency focus, or specific assessment structures. The simulator introduces an adaptive grading approach by  
allowing administrators and faculty members to configure and adjust grading parameters according to academic  
and institutional requirements.  
The system operates through a rule-based algorithm that processes predefined variables such as subject difficulty  
levels, assessment weight distribution, learning objectives, historical data, and institutional grading policies.  
These variables are inputted by users through the system’s interface, where scenarios can be simulated and  
analyzed in real time. For instance, educators can adjust the percentage weights of exams, quizzes, projects, and  
participation, and immediately observe how these adjustments impact the minimum required scores for students  
to pass the course.  
This process allows for dynamic calculation of passing thresholds. In subjects classified as high-difficulty or  
skill-intensive, the system may recommend more flexible passing criteria to account for complex learning  
outcomes. Conversely, for fundamental or remedial courses, it can impose stricter thresholds to ensure mastery  
of essential competencies. The system also factors in student performance trends, such as pre-assessment scores  
or midterm results, to make the grading model responsive to the learners' actual progress.  
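The threshold behavior described above can be sketched as a small rule-based routine. The rule values below (base threshold, difficulty adjustments, parameter names) are illustrative assumptions for demonstration, not the study's actual implementation:

```python
# Illustrative sketch of a rule-based passing-threshold simulator.
# Base threshold and difficulty adjustments are assumed values;
# the study's actual rules are not published here.
def minimum_passing_score(weights: dict, difficulty: str,
                          base_threshold: float = 0.60) -> float:
    """Return the minimum overall score (0-1) required to pass.

    weights    : assessment weight distribution, e.g. {"exams": 0.4, ...}
    difficulty : "high", "standard", or "remedial"
    """
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("assessment weights must sum to 1.0")

    # Rule: high-difficulty subjects get a more flexible (lower) threshold;
    # fundamental/remedial subjects get a stricter (higher) one.
    adjustments = {"high": -0.05, "standard": 0.0, "remedial": +0.05}
    return round(base_threshold + adjustments[difficulty], 2)

weights = {"exams": 0.40, "quizzes": 0.20, "projects": 0.30, "participation": 0.10}
print(minimum_passing_score(weights, "high"))      # 0.55
print(minimum_passing_score(weights, "remedial"))  # 0.65
```

Educators could re-run such a routine with different weight distributions to observe immediately how the passing threshold shifts, mirroring the simulation workflow described above.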
The flexibility feature of the Decision-Based Grading Model Simulator serves multiple purposes. It promotes  
academic fairness, ensures alignment with institutional grading policies, and accommodates varying levels of  
student performance. It also supports data-driven academic decision-making, enabling school administrators and  
faculty to establish customized grading schemes that are responsive to specific educational contexts.  
By integrating this flexible grading model, the system provides a transparent and consistent method for  
determining passing scores. Each computation is traceable and can be referenced in academic consultations or  
grade review processes, reinforcing the system’s role in enhancing both academic integrity and operational  
efficiency.  
This component of the system is quantitatively evaluated by analyzing its computational outputs under various  
scenarios and through structured feedback from faculty and administrators. Their assessments, gathered via  
survey instruments and simulations, help validate the system’s effectiveness in providing adaptive and equitable  
grading solutions.  
Data Collection Approach  
Respondents were asked to assess each recommended learning material or assessment strategy generated by the  
system. They indicated whether the recommendation was relevant and useful or not relevant. The results were  
compiled into a confusion matrix, categorizing the system’s predictions into four possible outcomes:  
True Positive (TP): Recommended and relevant  
False Positive (FP): Recommended but not relevant  
False Negative (FN): Not recommended but would have been relevant  
True Negative (TN): Not recommended and not relevant  
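The four outcomes above can be tallied directly from paired system/respondent judgments. A minimal sketch (the pair representation is an assumption; the study collected these judgments via survey):

```python
# Tally confusion-matrix outcomes from paired judgments.
# Each pair is (recommended_by_system, judged_relevant_by_respondent).
def confusion_counts(pairs):
    tp = sum(1 for rec, rel in pairs if rec and rel)          # recommended & relevant
    fp = sum(1 for rec, rel in pairs if rec and not rel)      # recommended, not relevant
    fn = sum(1 for rec, rel in pairs if not rec and rel)      # missed but relevant
    tn = sum(1 for rec, rel in pairs if not rec and not rel)  # correctly not recommended
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

# Counts matching those reported in the study:
pairs = [(True, True)] * 90 + [(True, False)] * 10 \
      + [(False, True)] * 15 + [(False, False)] * 85
print(confusion_counts(pairs))  # {'TP': 90, 'FP': 10, 'FN': 15, 'TN': 85}
```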
Figure 15 Confusion Matrix for Rule-Based Recommendation System  
Figure 15 presents the confusion matrix used to evaluate the effectiveness of the Rule-Based Recommendation
System. The confusion matrix is a standard tool in system evaluation that categorizes the system’s  
recommendations into four outcomes, providing a structured view of its predictive accuracy.  
In the context of this study, the system’s recommendations were assessed by experts and users, who verified  
whether each recommended learning material or assessment strategy was indeed useful or not. The resulting data  
were classified as follows:  
True Positive (TP): The system recommended a resource or strategy, and the respondents confirmed it  
was relevant and useful. There were 90 instances of true positives.  
False Positive (FP): The system recommended a resource, but it was found to be irrelevant or not useful  
by the respondents. There were 10 instances of false positives.  
False Negative (FN): The system did not recommend a resource, but the respondents identified that it  
should have been recommended because it was relevant. There were 15 instances of false negatives.  
True Negative (TN): The system correctly did not recommend materials that were indeed irrelevant or  
not useful, resulting in 85 instances of true negatives.  
This matrix is essential for calculating the precision, recall, and F1-score, which are key indicators of the  
system’s recommendation quality. The high number of true positives (90) and true negatives (85) reflects the  
system’s strong capability to correctly identify and recommend relevant resources while filtering out irrelevant  
ones. Meanwhile, the relatively low number of false positives (10) and false negatives (15) suggests that the  
system makes few incorrect recommendations and rarely overlooks important materials.  
Accuracy Metrics Computation  
Precision (Positive Predictive Value) - Precision measures the system’s ability to recommend relevant materials  
without including irrelevant ones. It answers the question: "Out of all the items recommended by the system, how  
many were actually relevant?"  
Precision = TP / (TP + FP)

Equation 1 Precision Formula

Where:
TP (True Positives) = 90
FP (False Positives) = 10

Precision = 90 / (90 + 10) = 90 / 100 = 0.90 (90%)
This means 90% of the recommendations made by the system were indeed useful and relevant according to the  
respondents.  
Recall (Sensitivity) - Recall measures the system’s ability to capture all relevant items. It answers the question:  
"Out of all the relevant materials available, how many did the system actually recommend?"  
Recall = TP / (TP + FN)

Equation 2 Recall Formula

Where:
TP (True Positives) = 90
FN (False Negatives) = 15

Recall = 90 / (90 + 15) = 90 / 105 = 0.857 (86%)
This indicates that the system was able to identify 86% of the relevant resources, missing only a small portion.  
F1-Score (Harmonic Mean of Precision and Recall) - The F1-Score provides a balance between Precision and  
Recall, especially when both are important for system evaluation. It is calculated using the harmonic mean,  
which penalizes extreme differences between precision and recall.  
F1-Score = 2 × (Precision × Recall) / (Precision + Recall)

Equation 3 F1-Score Formula

Substituting the computed values:

F1-Score = 2 × (0.90 × 0.86) / (0.90 + 0.86) = 2 × (0.774 / 1.76) = 2 × 0.44 = 0.88 (88%)
The F1-Score of 88% confirms that the system maintains an effective balance between recommending relevant  
materials and capturing most of them.  
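The three computations above can be reproduced directly from the confusion-matrix counts:

```python
# Reproduce the reported metrics from the confusion-matrix counts.
tp, fp, fn, tn = 90, 10, 15, 85

precision = tp / (tp + fp)                          # 90 / 100
recall = tp / (tp + fn)                             # 90 / 105
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"Precision: {precision:.2%}")  # Precision: 90.00%
print(f"Recall:    {recall:.2%}")     # Recall:    85.71%
print(f"F1-Score:  {f1:.2%}")         # F1-Score:  87.80%
```

Note that carrying full precision through the F1 computation gives 87.80%, which rounds to the 88% reported above.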
The results of the accuracy metrics indicate that the Rule-Based Recommendation System performs at a high  
level of effectiveness. The Precision score of 90% suggests that most of the system’s recommendations were  
correct, meaning they were perceived by the respondents as useful and relevant to the students’ academic needs.  
The Recall score of 86% indicates that the system successfully recommended almost all the relevant learning  
materials, minimizing the instances of missing important resources.  
The F1-Score of 88% confirms that the system achieves a good balance between precision and recall. This means  
that it not only recommends relevant resources most of the time but also ensures that most relevant materials are  
captured in the recommendation list. Such performance demonstrates the system’s capability to provide  
personalized and effective learning support while reducing irrelevant suggestions.  
CONCLUSIONS AND RECOMMENDATIONS  
This chapter presents the conclusions and recommendations based on the results of the study. It summarizes the  
main findings that show how the proposed system helps improve academic grading and student performance  
monitoring. The system provides an automated and data-driven approach that allows flexible grading and  
supports instructors in evaluating student performance more effectively.  
Conclusions  
Figure 16 Comparative Benchmarking of System Performance Metrics with Existing Student Performance  
Prediction Models  
Figure 16 shows the comparative benchmarking, which demonstrates that the Decision-Based Grading Model System
performs competitively against multiple existing student performance prediction models. While some deep  
learning models achieve slightly higher metrics, the system maintains a strong balance between precision and  
recall with the added benefit of rule-based interpretability. Compared to traditional educational data mining  
models, the system clearly surpasses them in accuracy and reliability, supporting its effectiveness in grading and  
student performance analysis.  
Problem 1: What are the salient features of the Decision-Based Grading Model Simulator?  
Based on the quantitative analysis of Likert-scale survey responses from school administrators, faculty members,  
and IT experts, the Decision-Based Grading Model Simulator was identified as a comprehensive and effective  
academic tool with several salient features. The results highlight its flexible grading parameter adjustment, which  
allows institutions to dynamically define and modify passing thresholds based on academic policies, supporting  
inclusive and competency-based assessment models. The system also features a rule-based recommendation  
mechanism that provides automated, personalized learning resource suggestions for students who require  
remediation, reducing instructors’ manual workload. Its decision support component offers real-time data  
visualization, performance summaries, and predictive insights that aid educators and administrators in informed  
academic decision-making.  
Respondents further emphasized the system’s user-centric design, noting its intuitive interface and ease of use  
even for non-technical users, as well as its interoperability with existing learning management and institutional  
systems. Strong security and confidentiality measures, including authentication, encryption, and access controls,  
ensure data protection, while its modular architecture supports maintainability and scalability. Additionally, the  
simulator enables continuous performance monitoring and immediate feedback generation, promoting  
transparency and student engagement. The findings confirm that the simulator goes beyond automated grading  
by integrating flexible assessment, personalized support, administrative decision tools, usability, and robust  
security, and its high ratings under the ISO 25010 Software Quality Model validate its technical reliability,  
efficiency, and suitability for academic environments.  
Problem 2: How can the Decision-Based Grading Model Simulator determine and provide flexibility in terms of  
the minimum required scores for students to pass a subject based on predefined parameters?  
The findings show that the Decision-Based Grading Model Simulator provides a simple and flexible way to  
determine passing scores for students. Instead of using fixed cut-off grades, the system uses rule-based  
calculations that consider course difficulty, assessment weights, school policies, and student performance. It can  
adjust passing scores in real time and allows teachers and administrators to test different grading scenarios before  
finalizing results. This helps ensure fairness and consistency in grading while reducing manual work for  
instructors. The system also makes grading more transparent for students by clearly showing how scores are  
determined. Overall, the results confirm that the simulator is an effective tool for improving grading flexibility,  
accuracy, and fairness in academic assessment.  
Problem 3: How effective is the rule-based recommendation system in suggesting appropriate learning resources  
and assessment strategies for students?  
The results show that the rule-based recommendation system in the Decision-Based Grading Model Simulator  
is effective in providing personalized and data-driven learning support for students. Using students’ performance  
data, the system accurately recommends appropriate learning materials and assessments, as shown by strong  
performance scores in precision (90%), recall (86%), and F1-score (88%). These results indicate that the system
delivers relevant recommendations while successfully identifying most learning gaps. By automatically  
suggesting remedial or enrichment activities based on predefined academic rules, the system supports  
individualized learning without increasing teachers’ workload. Feedback from faculty, IT experts, and  
administrators confirms that the system is reliable, efficient, and aligned with institutional goals. Overall, the  
recommendation system enhances student engagement, supports informed academic interventions, and  
contributes to a more student-centered and data-driven learning environment.  
Problem 4: How effective is the proposed system as perceived by Experts using Software Quality Model criteria?  
The effectiveness of the Decision-Based Grading Model Simulator was evaluated by faculty members, IT  
professionals, and academic administrators using the Software Quality Model criteria, covering functional  
suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability.  
Based on quantitative analysis of Likert-scale responses, the system achieved a weighted mean score of 4.51,  
interpreted as Highly Effective, indicating strong agreement among experts regarding its quality and usability.  
The results showed that the system performs well in delivering accurate and flexible grading functions, efficient  
data processing, reliable operation, and secure handling of academic records, while remaining user-friendly and  
adaptable across different platforms. Although minor improvements were suggested, particularly in system  
integration and user support materials, the findings confirm that the simulator meets established software quality  
standards. The quantitative evaluation provides strong evidence that the system is technically reliable,  
educationally relevant, and ready for adoption in academic institutions to support modern, data-driven grading  
and student performance analysis.  
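As a consistency check, the 4.51 system-level score matches the unweighted mean of the eight criterion-level overall means reported earlier. The averaging method is an assumption; the study does not state how the overall score was aggregated:

```python
# Criterion-level overall means as reported in the evaluation results.
criterion_means = {
    "Functional Suitability": 4.56, "Performance Efficiency": 4.55,
    "Compatibility": 4.43, "Usability": 4.46,
    "Reliability": 4.52, "Security": 4.51,
    "Maintainability": 4.53, "Portability": 4.53,
}

# Simple (unweighted) mean across the eight criteria.
overall = sum(criterion_means.values()) / len(criterion_means)
print(round(overall, 2))  # 4.51
```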
RECOMMENDATIONS  
This section presents the recommendations derived from the findings of the study on the Decision-Based Grading  
Model Simulator. Based on the results of the system evaluation and expert assessments, the recommendations  
aim to guide academic institutions in effectively adopting, implementing, and sustaining the simulator to enhance  
grading flexibility, accuracy, and data-driven decision-making. These recommendations focus on system  
integration, policy alignment, continuous improvement, and long-term sustainability to ensure that the simulator  
remains reliable, secure, and adaptable to evolving educational needs.  
It is recommended that academic institutions integrate the Decision-Based Grading Model Simulator into their  
existing academic management systems to fully utilize its dynamic grading features. Regular training programs  
for faculty members and administrators should be conducted to ensure effective use of customizable grading  
parameters and adaptive analysis tools. Continuous improvement of the recommendation system is also advised  
by expanding its database of learning materials and interventions across various subjects and competencies. In  
addition, future updates should support cross-platform and Learning Management System (LMS) integration to  
streamline grade input and output processes. Collecting regular user feedback is recommended to further  
improve the system’s interface and navigation for both technical and non-technical users.  
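The customizable grading parameters described above might be exposed to an integrating system along the lines of the sketch below. This is a hypothetical illustration, not the study's implementation: the class names, default threshold, and remediation band are all assumed.

```python
# Minimal sketch (hypothetical API, not the study's implementation) of a
# rule-based pass/fail decision with an instructor-configurable threshold.

from dataclasses import dataclass

@dataclass
class GradingPolicy:
    passing_score: float = 75.0    # institution default; adjustable per course
    remediation_band: float = 5.0  # scores within this band below passing
                                   # trigger a remediation recommendation

def evaluate(score: float, policy: GradingPolicy) -> str:
    """Apply simple rules in order: pass, remediate, or fail."""
    if score >= policy.passing_score:
        return "PASS"
    if score >= policy.passing_score - policy.remediation_band:
        return "REMEDIATE"  # near-miss: route to the recommendation system
    return "FAIL"

policy = GradingPolicy(passing_score=70.0)
print(evaluate(72.0, policy))  # PASS
print(evaluate(66.0, policy))  # REMEDIATE
print(evaluate(60.0, policy))  # FAIL
```

Keeping the thresholds in a single policy object is what makes per-course customization and later LMS integration straightforward: an institution changes one configuration record rather than grading logic.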
Policy Alignment and Predictive Enhancements
Academic departments are encouraged to adopt the simulator’s
flexible grading mechanism to ensure that passing thresholds align with real-time student performance data.  
Institutions should establish clear guidelines for adjusting grading parameters to prevent misuse and maintain  
academic integrity. Forming a review committee or academic council to oversee grading recalibration is also  
recommended to ensure compliance with institutional policies and accreditation standards.  
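One way to operationalize the safeguard described above is to clamp any data-driven threshold adjustment to a range pre-approved by the review committee, so recalibration from performance data cannot drift outside policy limits. The bounds, damping factor, and function below are assumed for illustration, not institutional policy.

```python
# Illustrative sketch (assumed bounds, not institutional policy): clamping a
# proposed passing-threshold adjustment to a council-approved range.

APPROVED_MIN, APPROVED_MAX = 60.0, 80.0  # assumed accreditation bounds

def recalibrate(current: float, class_median: float) -> float:
    """Nudge the threshold toward the class median, within approved bounds."""
    proposed = 0.8 * current + 0.2 * class_median  # damped adjustment
    return round(max(APPROVED_MIN, min(APPROVED_MAX, proposed)), 2)

print(recalibrate(75.0, 55.0))  # 71.0 (within bounds)
print(recalibrate(62.0, 0.0))   # 60.0 (clamped to the approved minimum)
```

The damping keeps any single term's data from swinging the threshold sharply, while the clamp guarantees the result stays inside the committee-approved range regardless of the input data.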
Expansion of Recommendation System Capabilities
Although the recommendation system has demonstrated
high effectiveness, it is recommended to further expand the learning resource database through collaboration  
with faculty members. This includes adding updated, localized, and multimedia learning materials. Integrating  
AI-driven adaptive learning pathways is also suggested to allow recommendations to evolve based on individual  
student progress. Periodic system audits should be conducted to ensure the relevance, accuracy, and curricular  
alignment of recommendations. Faculty workshops on interpreting recommendation reports are likewise  
encouraged to strengthen data-driven instructional practices.  
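A rule-based recommender of the kind described above can be sketched as a simple rule table mapping weak competencies to learning materials, which faculty can extend without touching code. The competency names and resources below are invented for illustration; the study's actual rule base is not reproduced here.

```python
# Hypothetical sketch of a rule table mapping weak competencies to learning
# materials; faculty expand the database by adding entries, not code.

RULES = {
    "fractions": ["Video: Fraction basics", "Worksheet: Fraction drills"],
    "essay_structure": ["Guide: Thesis statements", "Peer-review checklist"],
}

def recommend(weak_competencies):
    """Collect materials for every competency flagged as weak."""
    materials = []
    for competency in weak_competencies:
        materials.extend(RULES.get(competency, []))
    return materials

print(recommend(["fractions"]))
# ['Video: Fraction basics', 'Worksheet: Fraction drills']
```

Because the rules live in data rather than logic, the periodic audits recommended above reduce to reviewing and updating the rule table, and localized or multimedia materials can be added per competency by the collaborating faculty.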
Sustainability, Security, and Scalability
Given the strong performance across software quality dimensions,
continuous system maintenance and quality assurance are recommended to sustain functionality, reliability, and  
efficiency. Institutions should conduct regular security audits to protect sensitive academic data and uphold  
confidentiality standards.  