INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIV, Issue X, October 2025
www.ijltemas.in Page 712
Data Privacy Dominance: An Empirical Investigation into Nigerian
Postgraduate Students' Prioritization of AI Ethical Concerns in
Higher Education
Simeon Ayoade Adedokun, Dorcas Atinuke Adedokun, Stella Kehinde Ogunkan, Olufunke Kemi Ogunniyi
Department of Computer Science, Ladoke Akintola University of Technology, Ogbomoso, Nigeria.
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1410000088
Received: 20 October 2025; Accepted: 27 October 2025; Published: 12 November 2025
Abstract – The integration of Artificial Intelligence (AI) into Higher Education Institutions (HEIs) promises significant pedagogical and
administrative efficiencies, yet it concurrently introduces profound ethical dilemmas, particularly in data-rich environments. The study
empirically investigates the prioritization of major AI ethical concerns—Data Privacy, Algorithmic Bias/Fairness, Transparency, and
Accountability—among Nigerian postgraduate students. Utilizing a quantitative survey design, data were collected with a structured
questionnaire from 300 postgraduate students across two federal and two state universities. The study, anchored in the Ethical
Data Governance Framework (EDGF), addressed the question of what the major ethical concerns are and tested two null hypotheses on the
significant differences in awareness and the prioritization of these concerns. Descriptive statistics, non-parametric Friedman Test, and
Independent Samples t-test were employed for analyses. Although the findings revealed a high level of overall awareness regarding AI ethical
implications (X̄ = 3.12), there is a significant difference in the awareness of ethical implications between students who have encountered AI
applications (X̄ = 3.29) and those who have not (X̄ = 2.65). The Friedman Test and Independent Samples t-test unequivocally demonstrated
a statistically significant difference in prioritization (Friedman χ² = 12.34, p < 0.05; t = 4.10, p < 0.05), leading to the
rejection of Ho2. Data Privacy emerged as the overwhelmingly dominant ethical concern with a Weighted Mean Score (WMS) = 3.55,
followed by Transparency (WMS = 3.38), reflecting a deep-seated trust deficit in institutional data stewardship and a strong student demand
for Explainable AI (XAI). Students who have encountered AI applications demonstrated significantly higher awareness (X̄ = 3.29) and
significantly higher overall prioritization (X̄ = 3.33) of ethical concerns compared to those who have not. This paper recommends that Nigerian
HEIs urgently adopt the EDGF principles by implementing stringent data privacy and transparency protocols with robust procedural
tools to address data protection and enforce algorithmic accountability. This should be coupled with experiential AI training to foster trust and
ensure the responsible adoption of AI in Nigerian higher education.
Keywords – Artificial Intelligence, AI, Data Privacy, Higher Education, Ethical Concerns, EDGF, Postgraduate Students, Nigeria, Bias,
Transparency, Accountability, Prioritization
I. Introduction
The rapid advancement of Artificial Intelligence (AI) has initiated a paradigm shift across various sectors, with higher education (HE) being a
key area of transformation [1], [2]. In the Nigerian educational system, the adoption of AI technologies—ranging from intelligent tutoring systems
to automated administrative platforms—is seen as a viable solution to persistent challenges such as resource constraints and scalability [3], [4].
AI’s promise lies in its ability to personalize learning experiences, streamline assessment, and manage vast institutional data, thereby optimizing
educational delivery.
However, this increased reliance on data-driven AI systems brings to the fore significant ethical complexities that demand scrutiny. The ethical
deployment of AI fundamentally hinges on addressing core concerns, including algorithmic bias, transparency, accountability, and the
protection of sensitive personal information. In an academic context where continuous student activity generates massive data streams,
including assessment scores, learning styles, and demographic information, the issue of data privacy becomes paramount [5]. In a region
grappling with developing robust regulatory frameworks, the perceived security of student data is particularly vulnerable [6], [7].
Therefore, understanding how the end-users who are deeply engaged in research and often interact with diverse AI tools prioritize these ethical
concerns is crucial for establishing effective institutional governance and policy [8]. The failure to align institutional AI deployment strategies
with student ethical expectations risks eroding trust, stifling adoption, and potentially violating individual rights, thus hindering the responsible
integration of AI.
II. Literature Review
This study draws upon the Ethical Data Governance Framework (EDGF), a model that extends traditional IT governance to specifically
address the normative and ethical challenges inherent in data-intensive environments like AI-enabled higher education. The EDGF is chosen
over other models, such as the Technology Acceptance Model (TAM), which focuses only on user acceptance, because it encompasses the
study's four core ethical concerns: Data Privacy, Algorithmic Bias, Transparency, and Accountability [9], [10]. The EDGF establishes that
ethical AI adoption is not merely a technical task but a matter of institutional stewardship defined by three pillars, which are data protection
and privacy, algorithmic accountability and transparency, and stakeholder engagement and empowerment. The deployment of AI in
educational settings is typically evaluated against these interconnected ethical pillars [11], [12].
Data Protection and Privacy
Data Privacy refers to the governance and security of personally identifiable information (PII) collected from students, ensuring it is collected
legally, used for its intended purpose, and protected from unauthorized access or breaches [13], [14]. The EDGF mandates that institutions
must move beyond legal compliance to adopt an ethical stewardship approach, ensuring data is not just legally protected but managed in a
trustworthy, rights-respecting manner. Users often have limited trust in institutional data handling practices [11]. Given that AI systems are
fundamentally data processing tools, this pillar is often considered the foundation of ethical use.
Data privacy stands out in the literature as one of the most critical and tangible ethical concerns in AI-driven education, particularly in contexts
where digital infrastructure and regulatory enforcement are still maturing [6]. Personalized learning and predictive analytics rely
on continuous student data collection, generating vast profiles that, if compromised, pose serious risks to individual rights and security [2], [12].
Ethical stewardship requires institutions to ensure that data is protected from unauthorized access, purpose drift, and
secondary use, particularly when it feeds opaque academic decision-making algorithms [13]. This pillar, therefore, mandates the
proactive implementation of measures like Privacy by Design (PbD) and granular, explicit consent processes that uphold individual data rights
and rebuild confidence in the university's role as a reliable data custodian [11].
Research conducted by Afolabi et al. across African higher education institutions noted concerns about the lack of dedicated data protection
officers and transparent data collection notices [13]. Similarly, the study by Binitie et al. emphasized the urgent need for robust data governance
frameworks specifically to safeguard student PII from unauthorized access and reuse by third-party AI vendors [1]. This student-centered
anxiety is understandable: unlike algorithmic bias, which is often an abstract concept, data privacy breaches represent a direct, verifiable threat
[7]. The Nigerian Data Protection Regulation (NDPR) provides a legal scaffold, yet its effective implementation and enforcement within the
decentralized structure of universities remain challenging, thereby exacerbating student concerns [6]. The study posits that due to the concrete
and individual nature of the threat, data privacy is likely to be prioritized above the other, more abstract, ethical concerns by postgraduate
students.
Algorithmic Accountability and Transparency
The algorithmic accountability and transparency pillar provides the framework for understanding the secondary concerns of bias,
transparency, and accountability. Algorithmic Bias/Fairness addresses the risk that AI systems, trained on incomplete or historically skewed
data, may perpetuate or amplify existing societal inequities, leading to discriminatory outcomes in grading, resource allocation, or learning
opportunities for specific student groups [5], [14]. The EDGF requires that algorithmic systems used in academic settings must be both
transparent (allowing for auditable inspection of data inputs and logic) and accountable (designating clear human responsibility for the
outcome of AI decisions). The lack of Explainable Artificial Intelligence (XAI) mechanisms in current systems is a direct violation of this
pillar, which may lead to student concerns [15], [16].
Algorithmic bias represents a significant threat to equity and fairness within higher education. It arises when AI systems are trained on data
sets that reflect existing social, economic, or historical biases, inadvertently leading to discriminatory outcomes against certain groups of
students [14]. In the context of Nigerian universities, where diversity in student background and access to resources can be pronounced,
systems used for admission screening, grading, or predictive student success modeling are vulnerable to embedding and amplifying these
disparities [3].
The inherent risk is that biases, once codified into an algorithm, become systemic and difficult to detect, disproportionately affecting vulnerable
or underrepresented populations. For example, a predictive model trained on historical data where certain groups performed poorly due to
structural disadvantages may unfairly penalize new applicants from those same groups, regardless of their current potential [5]. Therefore, the
ethical responsibility of institutions extends beyond simply ensuring data security to actively auditing and mitigating bias to maintain fairness
in educational opportunities.
Transparency, often referred to as explainability, is crucial for maintaining student trust and ensuring the legitimate use of AI in education [4].
The "black box" problem occurs when the inner workings of complex AI models—especially deep learning networks—are opaque, making
it impossible for administrators, students, or regulators to understand why a particular decision, such as a grade, a resource recommendation,
or an academic misconduct flag, was reached [2].
A lack of transparency hinders accountability and prevents users from challenging unfair outcomes, effectively removing the human element
from critical educational processes [12]. Several researchers emphasize that educational institutions must demand not just accurate, but also
interpretable, AI tools [1]. Postgraduate students, in particular, need to understand the mechanisms of AI tools used for academic integrity
checks or literature review generation, as these systems directly impact their scholarly work and future career prospects.
Transparency requires that the processes and decisions made by AI systems are comprehensible and explainable to users and stakeholders,
moving away from the "black box" phenomenon [4]. Finally, Accountability concerns establishing clear mechanisms for attributing
responsibility when AI-driven decisions result in harm or error, ensuring human oversight and redress [6]. These four concerns are often
studied as a set, yet their relative importance can vary significantly based on cultural context and user experience [7].
Accountability requires establishing clear lines of responsibility for AI-driven outcomes. As AI systems become more autonomous,
determining who is at fault when a system malfunctions, makes a flawed decision, or causes harm becomes increasingly complex [8]. In
higher education, the failure of an AI system to accurately assess student performance or manage sensitive data necessitates a clear framework
for redress and consequence. Luwoye et al. emphasize the institutional obligation to maintain human oversight, even in highly automated
processes [6]. True accountability requires universities to ensure that human users are trained to monitor AI outputs; establish formal appeal
processes for AI decisions; and ensure that contracts with AI vendors clearly define legal liabilities [17]. This principle ensures that the
institution, as the ultimate custodian of the student experience, retains final responsibility for the ethical and reliable deployment of all
technology.
Stakeholder Engagement and Empowerment
The stakeholder engagement and empowerment pillar addresses the need for bottom-up governance to validate the ethical effectiveness of
the first two pillars. It posits that effective ethical governance requires the active participation and empowerment of key stakeholders, primarily
the students themselves, whose data is being processed and whose academic outcomes are being influenced by AI [18].
Engagement extends beyond simple consultation; it necessitates the establishment of structured feedback loops and grievance mechanisms
that are readily accessible to students and are taken seriously by institutional authorities. By understanding students'
prioritization of concerns, specifically the high demand for Data Privacy and Transparency, the EDGF guides institutions to establish policies
that are informed by the highest prioritized ethical risks, making ethical policy actionable and culturally relevant. This empowerment ensures
that AI governance is not merely bureaucratic but truly represents the values and fears of the academic community it serves, ultimately
translating theoretical ethics into practical institutional responsibility [19].
While the ethical use of AI is widely debated, the reviewed literature highlights a theoretical consensus on the importance of core ethical
pillars—privacy, fairness, transparency, accountability, and stakeholder engagement and empowerment—yet reveals a divergence in how
these concerns are prioritized by students. Regulatory and academic perspectives often treat these domains with equal weight [17], but student-
centric studies tend to emphasize more immediate, self-referential risks, particularly in environments with low digital trust. Data privacy
breaches, due to their personal and tangible impact, frequently overshadow more abstract concerns like algorithmic bias [7], [13].
Despite the recognized need for AI ethics education, few empirical studies have examined the specific hierarchy of ethical concerns among
Nigerian postgraduate students. This gap limits the ability of university administrators to prioritize effectively, leaving policy and infrastructure
development without clear direction. The study addresses the need to clarify students’ ethical priorities, guiding more responsive and focused
policy development. This gap is particularly problematic given the global rise in data breaches and the sensitivity of student data. Without
empirical evidence distinguishing the most pressing ethical concern, institutional efforts to govern AI may be misdirected, resulting in
fragmented policies that do not address the primary anxieties of the student population.
Research Questions
1. What is the level of awareness of the ethical implications of AI among postgraduate students?
2. What are the major ethical concerns regarding the use of AI as prioritized by postgraduate students?
Research Hypotheses
1. There is no significant difference in the awareness of the ethical implications of AI between postgraduate students who have encountered
AI applications and those who have not.
2. There is no significant difference in the prioritization of ethical concerns among postgraduate students.
III. Methodology
The study adopted a descriptive survey research design. This design is appropriate because the primary purpose of the investigation is to
systematically describe the characteristics of a given population, specifically, the level of awareness and the pattern of ethical concern
prioritization among Nigerian postgraduate students. The target population comprised postgraduate students from four Nigerian universities,
purposefully selected to ensure representation across different ownership and mandate types. The selected universities were the National Open
University of Nigeria (NOUN – a Federal/Open Distance Learning University), University of Ibadan (UI – A Federal/Conventional
University), Ekiti State University (EKSU - a State/Conventional University), and Ladoke Akintola University of Technology, Ogbomoso
(LAUTECH – a State/Conventional University). The diverse sample, spanning Federal and State universities, enhances the external validity
of the findings across the Nigerian higher education settings.
The study utilized stratified random sampling to draw 75 respondents from each of the four institutions. A total of 300 postgraduate
respondents, including students enrolled in master’s and PhD programmes, were selected. This ensured that the sample accurately reflects the
varying exposure and experience levels potentially linked to AI usage in different postgraduate research stages, with participants having an
equal and fair chance of being selected.
The instrument for data collection was a questionnaire structured into four sections. Section A was designated for demographic data, while
Section B comprised simple Yes/No questions to determine students' encounter with AI applications. Section C measured general awareness
using a 5-point Likert scale, and Section D was dedicated to measuring the prioritization of core ethical concerns, including data privacy,
algorithmic bias/fairness, transparency, and accountability, using a 5-point Likert prioritization scale. The survey was conducted online using
Google Forms, shared with participants electronically via email, WhatsApp, Telegram, and Facebook. To assess the internal consistency of
the questionnaire items, the reliability of the instrument was determined using the Cronbach’s Alpha (α) coefficient in SPSS. A pilot study
involving 30 non-participating postgraduate students yielded a coefficient of 0.718, indicating acceptable internal consistency. The collected
data were analyzed using descriptive statistics and inferential statistics (the Friedman Test and the Independent Samples t-test),
facilitated by SPSS software at the 0.05 level of significance.
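For readers who wish to check internal-consistency figures of this kind outside SPSS, Cronbach's alpha can be computed directly from an item-response matrix. The sketch below is a minimal illustration; the pilot matrix shown is invented for demonstration and is not the study's actual pilot data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) Likert matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from a small pilot (invented data)
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(pilot), 3))
```

Coefficients of roughly 0.70 and above are conventionally treated as acceptable internal consistency, the threshold that the reported value of 0.718 clears.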
IV. Results
The demographic data presented in Table 1 shows a total sample size of 300 respondents. The gender distribution shows that the sample
comprised a slightly higher proportion of Male participants (55.0%, n=165) compared to Female participants (45.0%, n=135). The
degree-level distribution was 66.7% Master's and 33.3% Ph.D. students, reflecting the higher enrollment rates typically observed at the Master's level
while still ensuring sufficient representation from both levels for enhanced generalizability. The data revealed a strong level of engagement
with the technology, as 75.0% (n=225) of the postgraduate students confirmed they had encountered AI applications, while 25.0% (n=75)
had not.
Table 1 Demographic Distribution of Respondents
Variable           Category    Frequency (N=300)   Percentage (%)
Gender             Male        165                 55.0
                   Female      135                 45.0
Programme Level    Master's    200                 66.7
                   Ph.D.       100                 33.3
Encountered AI     Yes         225                 75.0
                   No          75                  25.0
Research Question 1: What is the level of awareness of the ethical implications of AI among postgraduate students?
The analysis presented in Table 2 shows that the overall mean awareness score for postgraduate students on the ethical implications of AI is
3.12, with a standard deviation of 0.75. Since the mean value of 3.12 exceeds the criterion mean of 3.0, the result indicates
that postgraduate students exhibit a high level of awareness regarding the ethical implications of AI in higher education.
Table 2 Descriptive Statistics for Overall AI Ethical Awareness
Variable                  N     Mean (X̄)   Standard Deviation (SD)   Decision (X̄ > 3.00)   Interpretation
Overall Awareness Score   300   3.12        0.75                      Aware                  High Level of Awareness
Research Question 2: What are the major ethical concerns regarding the use of AI as prioritized by postgraduate students?
Table 3 reveals a clear hierarchy in the postgraduate students' prioritization of AI ethical concerns. Data Privacy was ranked highest with the
largest Weighted Mean Score (WMS = 3.55), followed closely by Transparency (WMS = 3.38) in second place. Algorithmic Bias/Fairness
was ranked third (WMS = 3.12), and Accountability received the lowest prioritization (WMS = 3.01).
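A Weighted Mean Score of this kind is obtained by weighting each Likert response option by its scale value and dividing by the total number of respondents. A minimal sketch, using invented response counts rather than the study's raw frequencies:

```python
import numpy as np

def weighted_mean_score(freqs):
    """WMS for a 5-point Likert item: freqs = counts for weights 1..5."""
    freqs = np.asarray(freqs, dtype=float)
    weights = np.arange(1, len(freqs) + 1)  # scale values 1 (lowest) .. 5 (highest)
    return (freqs * weights).sum() / freqs.sum()

# Hypothetical response counts (Strongly Disagree .. Strongly Agree)
# for N = 300 respondents; these counts are invented for illustration.
data_privacy = [10, 30, 80, 95, 85]
print(round(weighted_mean_score(data_privacy), 2))
```

On a 5-point scale the result necessarily falls between 1 and 5, so scores such as 3.55 for Data Privacy sit clearly above the scale midpoint.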
Table 3 Friedman Test Results on Prioritization of AI Ethical Concerns
Ethical Concern             Mean Rank   Weighted Mean Score (WMS)   Ranking Order   Interpretation
Data Privacy                3.55        3.55                        1st             Highly Prioritized
Transparency                2.38        3.38                        2nd             Highly Prioritized
Algorithmic Bias/Fairness   2.05        3.12                        3rd             Prioritized
Accountability              2.02        3.01                        4th             Prioritized
Friedman Test Statistic     χ² = 12.34
P-value                     p < 0.05
Effect Size                 Kendall's W = 0.15
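The mechanics of the Friedman Test and the derived Kendall's W can be illustrated with standard statistical tooling. The sketch below uses simulated ratings (the study's raw responses are not reproduced here), so its exact statistic will differ from the published χ² = 12.34:

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(42)
n = 300  # respondents

# Simulated 5-point prioritization ratings for the four concerns;
# the centre values loosely echo the reported WMS ordering (invented data).
privacy        = np.clip(rng.normal(3.55, 0.8, n).round(), 1, 5)
transparency   = np.clip(rng.normal(3.38, 0.8, n).round(), 1, 5)
bias_fairness  = np.clip(rng.normal(3.12, 0.8, n).round(), 1, 5)
accountability = np.clip(rng.normal(3.01, 0.8, n).round(), 1, 5)

stat, p = friedmanchisquare(privacy, transparency, bias_fairness, accountability)

# Kendall's W (effect size) derived from the Friedman chi-square
k = 4
w = stat / (n * (k - 1))
print(f"chi2 = {stat:.2f}, p = {p:.4f}, Kendall's W = {w:.3f}")
```

Kendall's W is derived from the Friedman statistic as W = χ²/(N(k−1)) and ranges from 0 (no agreement among respondents) to 1 (complete agreement on the ranking).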
Testing of Hypotheses
Ho1: There is no significant difference in the awareness of the ethical implications of AI between postgraduate students who have
encountered AI applications and those who have not.
Table 4 Independent Samples T-Test for Awareness Based on AI Encounter
Group Encountered AI Not Encountered AI
N 225 75
Mean (X̄) 3.29 2.65
SD 0.58 0.72
t-value 3.84
df 298
p-value (Sig. 2-tailed) 0.000
Decision Reject Ho1
The Independent Samples t-test analysis in Table 4 yielded a calculated t-value of 3.84 with a p-value of 0.000. Since the p-value (0.000) is
less than the alpha level of significance (0.05), the null hypothesis of no significant difference is rejected. This implies that there is a significant
difference in the awareness of ethical implications between students who have encountered AI applications (X̄ = 3.29) and those who have
not (X̄ = 2.65). Students who have encountered AI applications exhibit a significantly higher level of awareness.
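The comparison reported in Table 4 follows the standard two-sample t-test. As a sketch, the code below runs the same procedure on simulated scores whose group sizes match the study (225 vs 75); the scores themselves are invented, so the resulting statistic will not match the published t = 3.84:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

# Simulated awareness scores (5-point scale) for the two groups;
# group sizes match the study, but the scores are invented.
encountered     = np.clip(rng.normal(3.29, 0.58, 225), 1, 5)
not_encountered = np.clip(rng.normal(2.65, 0.72, 75), 1, 5)

# Equal-variance (pooled) t-test, df = 225 + 75 - 2 = 298
t, p = ttest_ind(encountered, not_encountered)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A p-value below the 0.05 threshold, as here, leads to rejecting the null hypothesis of equal group means.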
Ho2: There is no significant difference in the prioritization of AI ethical concerns among Nigerian postgraduate students.
The null hypothesis 2 (Ho2) was tested using both the Independent Samples t-test and the Friedman Test. The Friedman test was used
for rank-ordered data, while the t-test compared mean prioritization scores between groups. The t-test analysis of prioritization of ethical
concerns based on AI encounter, as shown in Table 5, produced a calculated t-value of 4.10 with an associated p-value of 0.000. Given that
the p-value (0.000) based on AI encounter is less than the 0.05 level of significance, the null hypothesis (Ho2) is rejected. This result indicates
that students who have encountered AI applications (X̄ = 3.33) demonstrate a significantly higher prioritization of AI ethical concerns overall
compared to students who have not (X̄ = 2.89).
Given the rank-ordered nature of the data, the Friedman Test was employed to determine if the differences in the students' prioritization
rankings were statistically significant. The results of the Friedman Test in Table 3 showed a highly significant difference in the prioritization
of the four ethical concerns (χ² = 12.34, p < 0.05). The calculated chi-square value was statistically significant, indicating that the postgraduate
students do not prioritize these four concerns equally. Therefore, based on the Friedman Test, the null hypothesis (Ho2) is rejected. The effect
size, calculated using Kendall's W, was 0.15 (W = 0.15), indicating a modest degree of agreement among students regarding the
ranking order shown in Table 3 (Data Privacy > Transparency > Bias > Accountability). The dominance of Data Privacy (ranked 1st) and
Transparency (ranked 2nd) is therefore statistically confirmed.
Table 5 Independent Samples T-Test for Prioritization Based on AI Encounter
Group Encountered AI Not Encountered AI
N 225 75
Mean (X̄) 3.33 2.89
SD 0.45 0.50
t-value 4.10
df 298
p-value (Sig. 2-tailed) 0.000
Decision Reject Ho2
The overall mean score for student awareness was X̄ = 3.12, as shown in Table 2 (on a 5-point scale), indicating a moderate to high level of
general awareness regarding AI ethical issues. The level of awareness, combined with the clear prioritization established in Table 3, confirms
that postgraduate students possess sufficient foundational knowledge to differentiate and rank perceived ethical risks.
V. Discussion
The study's results, indicating a high overall mean awareness score (X̄ = 3.12), strongly suggest that Nigerian postgraduate students are not
oblivious to the ethical dimensions of AI use. This high level of awareness is likely attributable to the open nature of education being adopted
in Nigeria, which necessitates greater self-directed learning and exposure to global digital trends [6]. This finding is consistent with global
studies that report increasing digital and ethical literacy among technologically engaged student populations [5], but contradicts some earlier
Nigerian studies suggesting a general low awareness regarding emerging technologies [20]. The finding is a positive indicator that the primary
battle is not against ignorance, but rather one of defining and addressing the most critical ethical areas.
The rejection of the null hypothesis one (Ho1) demonstrated a significant difference in awareness between students who have encountered AI
applications (X̄ = 3.29) and those who have not (X̄ = 2.65). This finding emphasizes the profound impact of direct experience on ethical
literacy. Students who actively use AI tools move beyond abstract concepts to confront real-world ethical dilemmas, such as the quality of
generative AI outputs or the necessity of clear citation, which naturally heightens their ethical sensitivity. This finding is crucial for policy
formulation, suggesting that institutions should intentionally integrate AI applications into coursework to enhance ethical awareness as a
primary learning outcome [13], [21].
The statistically significant difference in the prioritization of ethical concerns (Friedman Test, χ² = 12.34, p < 0.05; Independent Samples
t-test, t = 4.10, p < 0.05) and the overwhelming dominance of Data Privacy provide crucial insights into the user-centric ethical view within
Nigerian higher education. The rejection of the null hypothesis two (Ho2) also indicates a significant difference in the overall prioritization of
ethical concerns between the two groups. Students with AI encounter (X̄ = 3.33) showed a higher overall prioritization score than the
non-encountered group (X̄ = 2.89). This confirms that ethical concerns are not viewed equally by postgraduate students; instead, they are ranked
based on their immediate perceived risk, directly challenging the notion that broad-stroke ethical policies are sufficient. This goes hand-in-
hand with the awareness finding, which confirms that greater exposure not only makes students aware of an issue but also makes them consider
it a higher priority for institutional action [22], [23]. This result emphasizes that ethical training must be coupled with practical, hands-on
engagement with AI tools to shift attitudes from passive acknowledgment to active demand for ethical governance [24], [25]. It also confirms
that practical experience is a critical factor in shaping ethical literacy and demand for governance.
Data Privacy as the Dominant Ethical Imperative
Data privacy emerged as the most highly prioritized concern (WMS = 3.55), followed by Transparency (WMS = 3.38). Algorithmic
Bias/Fairness (3rd Rank with WMS = 3.12) and Accountability (4th Rank with WMS = 3.01) followed, respectively.
The result showing data privacy as the overwhelmingly dominant concern, evidenced by its Weighted Mean Score (WMS = 3.55) and first-
place ranking, suggests a deep-seated trust deficit among postgraduate students regarding institutional data stewardship. It supports the
theoretical expectation of the study that the concrete and individual risks associated with data privacy breaches are prioritized over other, more
abstract, ethical concerns. This is understandable, as privacy breaches present immediate, tangible, and personal risks to the student, such as
identity theft or misuse of personal learning data [26]. This result aligns with the principle of protection and accessibility, where students value
access but demand robust security [13].
This finding strongly resonates with the Pillar I (Data Protection and Privacy) of the Ethical Data Governance Framework (EDGF), which
mandates that institutions prioritize rights-respecting data management. In the context of Nigerian Higher Education, where students may be
required to submit extensive personal, academic, and financial information to utilize AI-driven systems, this dominant prioritization reflects
widespread anxiety over the security and potential misuse of their academic records [13]. This is not an abstract fear but a practical concern
rooted in the perceived vulnerability of sensitive information in a digital environment perceived to lack robust, enforceable safeguards [27].
The dominance of this concern strongly underscores a need for immediate institutional action to rebuild student trust, moving beyond mere
regulatory compliance to genuine ethical stewardship [14], [28].
The Demand for Transparency and Explainable AI (XAI)
The finding that Transparency emerged as the second-highest ranked concern (WMS = 3.38) is directly linked to the institutional
failure to implement Explainable AI (XAI) mechanisms. Transparency is a cornerstone of Pillar II (Algorithmic Accountability and
Transparency) of the EDGF. Students are increasingly aware that AI systems make consequential decisions—from grading and
plagiarism detection to administrative approvals—yet they often operate as "black boxes" [29], [30]. This opaqueness is a
fundamental violation of the transparency requirements under Pillar II [16], [31].
The high prioritization of transparency reflects a desire for visibility into how AI systems make decisions in assessment and personalized
learning, directly linking to the need for clear interpretability standards in data processing [13], [32]. This lack of explainability drives up
student anxiety, directly influencing their demand for greater transparency. Mitigation strategies must therefore move beyond simple policy
statements. Institutions must commit to implementing technologies and procedures that allow for algorithmic auditability, for example, by
utilizing XAI tools such as LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) in pilot AI
systems. While LIME provides local explanations for individual predictions and SHAP offers theoretically consistent feature attributions, either approach would enable postgraduate students to see why an AI system reached a specific conclusion about their performance or
application. Furthermore, a transparency clause should be integrated into all AI procurement contracts, mandating that educational
technology providers supply auditable logs and clear documentation on model logic, thereby embedding accountability into the system design
rather than adding it as an afterthought.
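To make the attribution idea behind tools like SHAP concrete, the sketch below computes exact Shapley values for a deliberately tiny, hypothetical admission-scoring model. The model, its weights, the feature names, and the baseline applicant are all illustrative assumptions, not components of any system discussed in this study; a real deployment would apply the shap or lime libraries to the institution's actual models rather than enumerate coalitions by hand.

```python
from itertools import combinations
from math import factorial

# Hypothetical admission-scoring model (illustrative weights, not from the study).
WEIGHTS = {"gpa": 0.5, "test_score": 0.3, "publications": 0.2}
BASELINE = {"gpa": 3.0, "test_score": 60.0, "publications": 1.0}  # reference applicant

def model(x):
    """Toy linear scorer: a weighted sum of applicant features."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline):
    """Exact Shapley attribution: average each feature's marginal contribution
    over all coalitions, holding absent features at their baseline values."""
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                def coalition_value(present):
                    # Features in the coalition take the applicant's values;
                    # the rest stay at the baseline.
                    point = {g: (x[g] if g in present else baseline[g]) for g in features}
                    return model(point)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (coalition_value(set(subset) | {f}) - coalition_value(set(subset)))
        phi[f] = total
    return phi

applicant = {"gpa": 4.0, "test_score": 80.0, "publications": 3.0}
phi = shapley_values(applicant, BASELINE)
# The attributions sum to the gap between the applicant's score and the baseline score,
# so a student could see which features drove a decision about them.
print(phi)
```

For a linear model each feature's Shapley value reduces to its weight times its deviation from the baseline, which makes the output easy to sanity-check; the same coalition-averaging logic is what SHAP approximates efficiently for complex models.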
Algorithmic Bias and Accountability: Mitigation and Procedural Gaps
Algorithmic Bias and Accountability ranked lower than Data Privacy and Transparency, yet their significance cannot be overstated, particularly regarding the need for robust procedural mitigation. The lower ranking may reflect the fact that bias, unlike a data breach, is a more subtle and less immediately perceptible harm to the individual user; its abstract nature makes it difficult for students to connect a systemic failure to a specific institutional or personal consequence.
The EDGF demands a proactive approach to bias mitigation. For instance, to address potential bias arising from unrepresentative training
data—a risk amplified in diverse settings [33], [34]—universities should implement Data Protection Impact Assessments (DPIAs) before deploying any new AI tool. A DPIA helps ensure that the training datasets reflect the demographic, academic, and socioeconomic diversity of the Nigerian
postgraduate student population, moving beyond general data practices to context-specific equity checks. Similarly, Accountability requires
a clear designation of human oversight. The EDGF suggests establishing an AI Ethics Review Board composed of faculty, administrators,
and students (as guided by Pillar III: Stakeholder Engagement) that is empowered to halt the use of an AI system if bias or a lack of
transparency is identified [22], thus establishing a tangible human layer of accountability for algorithmic decisions.
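A minimal sketch of the kind of representativeness check a DPIA might include is shown below. The group labels, counts, population shares, and the 20% relative-deviation threshold are illustrative assumptions only; an actual DPIA would draw these figures from institutional enrolment records and would cover demographic and socioeconomic attributes beyond this toy example.

```python
# Hypothetical DPIA-style representativeness check: compare each group's share
# in an AI tool's training data against its share in the student population.
# All names, counts, shares, and the threshold below are illustrative assumptions.

def representativeness_report(training_counts, population_shares, max_rel_dev=0.20):
    """Return the groups whose share of the training data deviates from their
    population share by more than max_rel_dev (a relative deviation)."""
    total = sum(training_counts.values())
    flagged = {}
    for group, pop_share in population_shares.items():
        data_share = training_counts.get(group, 0) / total
        rel_dev = abs(data_share - pop_share) / pop_share
        if rel_dev > max_rel_dev:
            flagged[group] = {
                "data_share": round(data_share, 3),
                "population_share": pop_share,
                "relative_deviation": round(rel_dev, 3),
            }
    return flagged

# Illustrative figures only: training data skewed toward federal-university records.
training_counts = {"federal": 700, "state": 250, "private": 50}
population_shares = {"federal": 0.55, "state": 0.35, "private": 0.10}
report = representativeness_report(training_counts, population_shares)
print(report)  # every group here deviates by more than 20% and gets flagged
```

A check like this turns the EDGF's equity requirement into a concrete, auditable gate: a flagged report gives the proposed AI Ethics Review Board a documented basis for halting or re-training a system before deployment.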
VI. Conclusion
This study investigated the prioritization of AI ethical concerns among postgraduate students in Nigeria, utilizing a broadened and balanced
sample across Federal and State universities (NOUN, UI, EKSU, LAUTECH). The study concludes that postgraduate students possess a high level of awareness regarding the ethical implications of AI, with this awareness significantly enhanced by direct encounters with AI
applications. The use of the Friedman Test confirmed a statistically significant difference in prioritization, leading to the rejection of the null
hypothesis. The findings establish a clear risk hierarchy, with Data Privacy being the dominant ethical concern, driven by a deep-seated trust
deficit in institutional data stewardship. This is followed by Transparency, indicating a strong student demand for Explainable AI (XAI). This
prioritization emphasizes the need for Nigerian Higher Education Institutions to shift from generic ethical discourse to implementing robust,
user-centric data governance policies anchored in the Ethical Data Governance Framework (EDGF).
The empirical data provide a clear mandate for institutional focus on data privacy, which dominates the prioritization hierarchy, followed by algorithmic transparency. Accordingly, strict transparency standards must be enforced for all AI systems used in evaluation
and administrative processes, ensuring students have access to explanations of algorithmic decisions. These findings confirm the necessity of
implementing robust data protection mechanisms and clear, auditable processes to secure student trust and facilitate the ethical adoption of AI
in Nigerian higher education settings. Universities should further invest in accountability infrastructure to oversee AI deployment and
integrate mandatory, hands-on, experiential ethical training into the curriculum to move concerns like bias and accountability higher up the
students' prioritization scale, thereby enhancing overall ethical preparedness.
References
1. A. P. Binitie, S. I. Onyemenem, and F. J. Okoh, “Enhancing Security and Privacy in EdTech Tools: Safeguarding Student Data in
the Digital Learning Era,” J. Sci. Innov. Technol. Res., 2025, doi: 10.70382/ajsitr.v9i9.041.
2. M. M. Ncube and P. Ngulube, “A Systematic Review of Postgraduate Programmes Concerning Ethical Imperatives of Data Privacy
in Sustainable Educational Data Analytics,” Sustainability, vol. 16, no. 15, p. 6377, July 2024, doi: 10.3390/su16156377.
3. G. N. Sreejaa, V. Jayanthi, and V. G. Ranimol, “Exploring Ethical Concerns Among Students: The Impact of AI Usage in
Education,” Int. Sci. J. Eng. Manag., vol. 04, no. 07, pp. 1–9, July 2025, doi: 10.55041/ISJEM04845.
4. C. Fang and A. W. C. Tse, “Case Study: Postgraduate Students’ Class Engagement in Various Online Learning Contexts When
Taking Privacy Issues to Incorporate with Artificial Intelligence Applications,” Int. J. Learn. Teach., vol. 9, no. 2, 2023, doi:
10.18178/ijlt.9.2.90-95.
5. A. Weber, “Ethics Concerns in Artificial Intelligence Use in Education,” presented at the 14th International Technology, Education and Development Conference, Valencia, Spain, Mar. 2020, pp. 4539–4544. doi: 10.21125/inted.2020.1262.
6. A. Luwoye, A. T. Ahmed, O. I. Yusuf, and J. A. Abdulraheem, “Challenges and Opportunities of Artificial Intelligence Integration into Nigerian University System,” Int. J. Artif. Intell. Digit. Mark., vol. 1, no. 5, pp. 48–54, Dec. 2024, doi: 10.61796/ijaifd.v1i5.267.
7. L. Huang, “Ethics of Artificial Intelligence in Education: Student Privacy and Data Protection,” Sci. Insights Educ. Front., vol. 16,
no. 2, pp. 2577–2587, June 2023, doi: 10.15354/sief.23.re202.
8. P. Mishara, “The Ethical Implications of AI in Education: Privacy, Bias, and Accountability,” J. Inform. Educ. Res., vol. 4, no. 2,
Dec. 2024, doi: 10.52783/jier.v4i2.1827.
9. B. C. Stahl, “The Ethics of Data and Its Governance: A Discourse Theoretical Approach,” Information, vol. 16, no. 6, p. 497, 2025,
doi: 10.3390/info16060497.
10. K. D. Schubert and D. Barrett, “Data Governance, Privacy, and Ethics,” in Human Privacy in Virtual and Physical Worlds, M. C. Lacity and L. Coon, Eds., Technology, Work and Globalization. Cham: Springer Nature Switzerland, 2024, pp. 87–110. doi: 10.1007/978-3-031-51063-2_5.
11. C. Knopp, “Privacy Perception in Developing Countries,” 2019, doi: 10.13140/RG.2.2.19821.95209.
12. M. Mutuku, “Legal and Ethical Implications of Data Privacy in Artificial Intelligence: A Review of Data Privacy among Learners
in Kenyan Secondary Schools,” Int. J. Innov. Sci. Res. Technol. IJISRT, pp. 537–540, Sept. 2024, doi:
10.38124/ijisrt/IJISRT24SEP208.
13. A. Afolabi, “Ethical Issues in Artificial Intelligence Adoption in African Higher Education Institutions in Nigeria,” Afr. J. Inf.
Knowl. Manag., vol. 3, no. 2, pp. 22–33, July 2024, doi: 10.47604/ajikm.2735.
14. A. D. Ochigbo, A. Tuboalabo, T. T. Labake, and O. Layode, “Regulatory compliance in the age of data privacy: A comparative
study of the Nigerian and U.S. legal landscapes,” Int. J. Appl. Res. Soc. Sci., vol. 6, no. 7, pp. 1355–1370, July 2024, doi:
10.51594/ijarss.v6i7.1297.
15. F. Bellas et al., “Explainable AI in education: Fostering human oversight and shared responsibility,” European Digital
Education Hub, 2025. Accessed: Oct. 23, 2025. [Online]. Available: https://www.daad-brussels.eu/files/2025/06/EDEH_XAI-
Report-1.pdf
16. H. Khosravi et al., “Explainable Artificial Intelligence in education,” Comput. Educ. Artif. Intell., vol. 3, p. 100074, Jan. 2022, doi:
10.1016/j.caeai.2022.100074.
17. M. Irfan, F. Aldulaylan, and Y. Alqahtani, “Ethics and Privacy in Irish Higher Education: A Comprehensive Study of Artificial
Intelligence (AI) Tools Implementation at University of Limerick,” Glob. Soc. Sci. Rev., vol. VIII, no. II, pp. 201–210, June 2023,
doi: 10.31703/gssr.2023(VIII-II).19.
18. J. Stone and L. Wang, “Institutional accountability gaps: AI decision-making and the challenge of redress in higher education,”
Educ. Technol. Res. Dev., vol. 70, no. 4, pp. 1133–1150, 2022.
19. S. Taylor and E. Smith, Ethical Data Governance: A Policy Handbook for Universities. Global Education Press, 2024.
20. U. Ogwo, R. C. Ojobor, and E. T. Babarinde, “Awareness and utilization of emerging technologies for effective services in selected
federal university libraries in Southeastern Nigeria,” J. Libr. Serv. Technol., vol. 7, no. 1, pp. 81–94, 2025, doi: 10.47524/jlst.v7i1.82.
21. A. Inusa Busa, A. Suleiman, and A. Mahmud, “Integrating Artificial Intelligence into Nigeria’s National Policy on Education: A Strategic Framework for the 21st Century,” Int. J. Res. Publ. Rev., vol. 6, pp. 5999–6006, July 2025.
22. S. Sangwa, D. Ngobi, E. Ekosse, and M. Placide, “AI governance in African higher education: Status, challenges, and a future-
proof policy framework,” Artif. Intell. Educ., p. 2054, Aug. 2025, doi: 10.62617/aie2054.
23. C. C. Umoke, S. O. Nwangbo, and O. A. Onwe, “The Governance of AI in Education: Developing Ethical Policy Frameworks for
Adaptive Learning Technologies,” IJASMT, Apr. 2025, doi: 10.56201/ijasmt.vol.11.no2.2025.pg71.88.
24. R. Priya and A. Rinki, “Hands on Activities: Role of Methods of Teaching as Means of Interdisciplinary Education,” Int. J. Adv. Res., vol. 9, no. 08, pp. 454–460, Aug. 2021, doi: 10.21474/IJAR01/13288.
25. H. Mfaume, “Integrating Ethical Literacy Regarding the Use of Emerging Pedagogical Technologies into Initial Teacher Education
Programme: Educational Stakeholders’ Perspectives in Tanzania,” Int. J. Educ. Dev. Using Inf. Commun. Technol., vol. 19, pp.
146–163, 2023.
26. Y. N. Harari, The AI threat to privacy and human agency. The Economist, 2023. Accessed: Oct. 17, 2025. [Online]. Available:
https://www.economist.com/films/2023/09/14/yuval-noah-harari-and-mustafa-suleyman-on-the-future-of-ai
27. A. E. Bisong, I. J. Eteng, and I. J. Mary, “A Systematic Review of Data Privacy and Security in Nigeria’s Online Education: Ethical Consideration and Challenges,” Inter-Discip. J. Sci. Educ., vol. 5, no. 1, pp. 98–112, 2023.
28. S. Adedokun and T. Adedokun, “Navigating Ethical Long-Term Personal Data Storage: Privacy, Security, Regulatory Challenges,
and Societal Impact Review,” J. Techno Nusa Mandiri, vol. 22, no. 2, pp. 201–210, Oct. 2025, doi: 10.33480/techno.v20i2.5549.
29. D. Krpan and S. Mladenović, “Unveiling the Black Box: Understanding AI in Education,” European Digital Education Hub, 2025.
Accessed: Oct. 23, 2025. [Online]. Available: https://www.daad-brussels.eu/files/2025/04/Understanding-AI-in-Education.pdf
30. M. K. Pasupuleti, “Auditing Black-Box AI Systems Using Counterfactual Explanations,” Int. J. Acad. Ind. Res. Innov., vol. 5, no.
5, pp. 598–608, May 2025, doi: 10.62311/nesx/rphcr20.
31. H. Blake, “Algorithmic Accountability: Establishing Frameworks for Transparency and Responsibility in AI-driven Decisions,”
Oct. 2024, [Online]. Available:
https://www.researchgate.net/publication/387381860_Algorithmic_Accountability_Establishing_Frameworks_for_Transparency_
and_Responsibility_in_AI-driven_Decisions
32. S. A. Adedokun, S. O. Aderibigbe, R. F. Famutimi, O. K. Ogunniyi, and D. A. Adedokun, “Development of A Framework Using
Findable, Accessible, Interoperable, and Reusable Model for Data Management Practices in Nigerian Universities,” Iconic Res.
Eng. J., vol. 9, no. 4, pp. 124–133, Oct. 2025, doi: 10.64388/IREV9I4-1711067-7610.
33. M. G. Hanna et al., “Ethical and Bias Considerations in Artificial Intelligence/Machine Learning,” Mod. Pathol., vol. 38, no. 3, p.
100686, Mar. 2025, doi: 10.1016/j.modpat.2024.100686.
34. K. S. Chadha, “Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies,” Int. J. Res. Publ. Semin., vol. 15,
no. 3, pp. 36–49, July 2024, doi: 10.36676/jrps.v15.i3.1425.