INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Special Issue | Volume XIV, Issue XIII, October 2025
www.ijltemas.in Page 108
Generative AI Meets Big Data: Efficiency Gains vs. Cognitive
Overload
Pranita Deobhankar
Department of Computer Science, Dr D. Y. Patil Arts, Commerce & Science College, Pimpri, Pune-18, Maharashtra,
India
DOI: https://doi.org/10.51583/IJLTEMAS.2025.1413SP024
Received: 26 June 2025; Accepted: 30 June 2025; Published: 24 October 2025
Abstract: This mixed-methods study explores how computer science educators (N=17) handle the use of generative AI tools like
ChatGPT and Copilot. While 65% of participants reported spending less time on lesson planning and grading, 68% faced
"validative overload", a newly identified issue where educators spend too much time checking AI outputs. Using cognitive load
theory (Sweller, 2020), we examine how specific challenges, such as debugging AI-generated code, increase unnecessary
cognitive load. Our findings show that 58% of educators lack training for AI integration, and 73% of AI-generated coding examples
need major corrections. Validation tasks add 2.4 hours per week to the workload. We suggest a three-tiered framework for
responsible AI use, focusing on pedagogical alignment, validation processes, and institutional support systems.
Keywords: Generative AI, Cognitive Load, Educator Workflows, Data Overload, Computer Science Education
I. Introduction
The use of generative AI alongside big data analytics has changed educational practices, especially in computer science. Early
users report a 30-40% decrease in the time they spend on grading and creating content (Brown et al., 2023). However, new
findings suggest that these benefits may lead to more cognitive strain, as teachers deal with AI-generated results that need
extensive checking (Zhang & Patel, 2024).
This study addresses three research questions:
1. How do teachers balance the efficiency of AI with the mental demands of validating outputs?
2. What specific challenges arise when AI tools handle specialized content such as code samples?
3. How can institutions support the adoption of AI to reduce overload while maintaining teaching standards?
II. Literature Review
2.1 The Evolution of Big Data in Education
From predictive enrollment analytics (Daniel, 2019) to real-time learning adaptation (UNESCO, 2022), big data has allowed for
hyper-personalized education. However, its combination with generative AI brings new challenges related to data accuracy and
ethical use.
2.2 Generative AI: Promise vs. Peril
Benefits:
• Automated feedback (Mollick & Mollick, 2024)
• Research synthesis (Hwang et al., 2023)
Risks:
• Bias propagation (Bender et al., 2021)
• Critical thinking erosion (Watters, 2024)
2.3 Cognitive Load Theory in AI Contexts
Sweller’s (2020) framework explains educator strain through three components:
Intrinsic Load: Complexity of evaluating AI-generated code.
Extraneous Load: Time spent correcting misleading outputs.
Germane Load: Productive adaptation of AI tools.
III. Theoretical Framework
Figure 1: Pathways of AI Impact on Educator Cognition
3.1 AI-Induced Cognitive Load Model
Input: Big data (student analytics, research papers)
Processing: AI synthesis → Human validation checkpoint
Output: Optimized workflow (15-25% AI reliance ideal)
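The input → validation → output pipeline above can be expressed as a small check on AI reliance. This is a minimal sketch: the function names and the treatment of the 15-25% band as the "ideal" zone are illustrative assumptions, not part of the study instrument.

```python
# Hypothetical sketch of the Section 3.1 validation-checkpoint model.
# The 15-25% band follows the model's stated ideal range; labels are invented.

def ai_reliance_ratio(ai_minutes: float, total_minutes: float) -> float:
    """Fraction of a task's time delegated to AI synthesis."""
    return ai_minutes / total_minutes

def workflow_status(ai_minutes: float, total_minutes: float) -> str:
    """Classify a workflow against the model's 15-25% AI-reliance band."""
    r = ai_reliance_ratio(ai_minutes, total_minutes)
    if r < 0.15:
        return "under-utilized"
    if r > 0.25:
        return "validation-heavy"
    return "optimized"

print(workflow_status(12, 60))  # 20% reliance falls inside the ideal band
```

A workflow flagged "validation-heavy" corresponds to the extraneous-load pathway in Figure 1, where checking outputs consumes the time AI was meant to save.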
3.2 Hypotheses
H1: High-frequency AI users (>30% tasks) report 20% higher extraneous load (t-test, α=0.05).
H2: Citation-enabled AI tools reduce validation time by 1.8 hrs/week (paired t-test).
H3: CS educators exhibit greater germane load utilization than humanities educators (β=0.32, regression).
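H1's comparison of high- and low-frequency users is a standard two-sample test. The sketch below hand-rolls a Welch t-statistic on fabricated rating lists purely to illustrate the procedure; the study itself ran its tests in SPSS v28, and none of the numbers below are from the survey data.

```python
# Illustrative Welch t-statistic for an H1-style comparison.
# The two rating lists are fabricated for this sketch.
import math
import statistics

high_freq = [4.5, 3.9, 4.2, 3.6, 4.8, 4.0, 3.7, 4.4, 4.1]  # extraneous load, heavy AI use
low_freq = [3.0, 3.4, 2.8, 3.6, 3.1, 2.9, 3.3, 3.2]        # extraneous load, light AI use

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va = statistics.variance(a) / len(a)
    vb = statistics.variance(b) / len(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va + vb)

t = welch_t(high_freq, low_freq)
print(f"t = {t:.2f}")  # |t| above roughly 2.13 (df ≈ 15, alpha = 0.05) would support H1
```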
IV. Methodology
4.1 Research Design
Quantitative: Survey of 17 CS educators (5-point Likert scales).
Qualitative: Thematic analysis of open-ended responses.
4.2 Participants
Experience | Count | % | Primary AI Tools Used | Usage Frequency
0-5 years | 7 | 41% | ChatGPT (100%), Plagiarism checkers (71%) | Daily: 4, Weekly: 2, Monthly: 1
6-10 years | 3 | 18% | ChatGPT + Adaptive platforms (66%) | Daily: 2, Weekly: 1
11-20 years | 5 | 29% | ChatGPT + Grading tools (40%) | Daily: 4, Weekly: 1
20+ years | 1 | 6% | ChatGPT only | Daily: 1
Non-Users | 1 | 6% | None | Never
4.3 Analysis
SPSS v28: Descriptive stats, correlation analysis.
NVivo: Codebook development (inter-coder κ=0.72).
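The inter-coder κ above is Cohen's kappa, which discounts raw agreement by the agreement expected from each coder's label frequencies alone. A pure-Python version of the calculation, on two invented coder label lists (not the study's actual codes):

```python
# Cohen's kappa: chance-corrected agreement between two coders.
# The label lists below are invented to illustrate the calculation.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labelled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: product of each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["debug", "debug", "train", "train", "load", "debug", "train", "load"]
b = ["debug", "train", "train", "train", "load", "debug", "train", "load"]
print(round(cohens_kappa(a, b), 2))  # 0.81 on these toy labels
```

Values around 0.7, like the study's 0.72, are conventionally read as substantial agreement.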
V. Findings
5.1 Quantitative Results
As Table 1 shows, daily AI users reported significantly higher time savings (M=3.4, SD=1.2), but these savings correlated with increased validation stress (r=0.54, p<0.05).
Figure 2: Early-career educators bear disproportionate validation burdens.
Key Patterns:
Usage Frequency Matters: Daily AI users saved 2.3 hours per week, compared with 1.1 hours for weekly users; however, 71% reported frequent output errors that needed correction.
Experience Gap: Novices (0-5 years of experience) reported stress levels 2.1 times higher than veterans with over 20 years.
Training Deficit: Only 24% agreed they had sufficient training (mean score 2.8 out of 5).
5.2 Qualitative Themes
Figure 3: Word cloud highlighting 'training' and 'debugging' concerns
Variable | Mean (SD) | Correlation | Significance
Time savings (1-5 scale) | 3.4 (1.2) | +0.54 with daily usage | p = 0.023*
Validation stress | 3.8 (1.1) | -0.61 with experience | p = 0.008**
Confidence in AI outputs | 2.9 (1.3) | +0.48 with training adequacy | p = 0.051
Table 1: Key Metrics from Survey Responses (N=17)
Experience | Count | Primary AI Use Case
0-5 years | 7 | Grading (71%), Lesson Plans (43%)
6-10 years | 3 | Research (67%), Coding Help (100%)
Table 2: Participant Distribution by Experience
The word cloud in Figure 3 highlights 'training' and 'debugging' as dominant concerns, aligning with survey responses in which 82% cited insufficient support (Table 3).
Coded from Open-Ended Responses (NVivo κ=0.72)
Theme 1: "Debugging the AI"
Prevalence: 68% of responses
"The Python code looks perfect until you test edge cases." (5 years of experience)
"AI can't replicate how I explain recursion to beginners."
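The edge-case complaint in Theme 1 is easy to demonstrate. The function below is a made-up example of plausible-looking generated code, not drawn from participants' materials; it passes the happy path and fails on an input the prompt never mentioned.

```python
# Theme 1 illustration: AI-style code that works until an edge case arrives.
# Both functions are invented for this sketch.

def ai_generated_average(xs):
    """Looks correct for typical inputs..."""
    return sum(xs) / len(xs)            # ...but raises ZeroDivisionError on []

def validated_average(xs):
    """The edge-case-aware revision an educator would require."""
    if not xs:
        raise ValueError("average of empty sequence is undefined")
    return sum(xs) / len(xs)

assert validated_average([2, 4, 6]) == 4
try:
    ai_generated_average([])            # the untested edge case
except ZeroDivisionError:
    print("edge case exposed: empty input crashes the AI version")
```

Validating output like this, one boundary condition at a time, is exactly the labour the 68% prevalence figure captures.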
Theme 2: The Training Paradox
Prevalence: 82%
"We got a 1-hour ChatGPT demo; it was of no use for grading algorithms."
"I learnt more from Reddit threads than from official workshops."
Theme 3: Workload Trade-offs
Prevalence: 59%
"Saves time on slides, but it doubles my proofreading time."
"It’s like having an intern who makes plausible but wrong suggestions."
Integrated Insights
The Efficiency-Stress Paradox
While AI reduced preparation time by ~35%, educators spent 42% of those saved hours validating outputs, a net loss for 53% of novices.
Experience as a Buffer
Veteran teachers (11+ years) were:
• 2.4× more likely to say AI "enhances productivity"
• 3.1× more confident in modifying flawed outputs
What Educators Want
Discipline-Specific Training
"Show me how to check AI-generated code for time complexity."
Better Tool Design
"Flag uncertain outputs like ‘this sorting algorithm may fail for n>1000.’"
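The time-complexity request above can be met empirically without formal analysis: run the generated code at two input sizes and compare operation counts. The selection-sort stand-in and the sizes below are illustrative, not from any participant's materials.

```python
# Empirical growth-rate probe for AI-generated code (illustrative).
# Doubling n and taking log2 of the cost ratio estimates the exponent.
import math

def ai_sort(xs):
    """Stand-in for an AI-generated sorting routine (selection sort)."""
    out = list(xs)
    comparisons = 0
    for i in range(len(out)):
        for j in range(i + 1, len(out)):    # O(n^2) comparison pattern
            comparisons += 1
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out, comparisons

_, c1 = ai_sort(list(range(200, 0, -1)))
_, c2 = ai_sort(list(range(400, 0, -1)))
exponent = math.log2(c2 / c1)               # ~2.0 flags quadratic behaviour
print(f"estimated order: n^{exponent:.1f}")
```

An exponent near 2.0 lets an instructor confirm, before class, that a generated "efficient" sort is in fact quadratic.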
Support Type | % Mentioned | Example Quote
Hands-on Practice | 76% | "Let us test tools with our own course materials."
Peer Communities | 53% | "A Slack group to share prompts that work."
Institutional Policies | 41% | "Clear rules on what AI tasks are allowed."
Table 3: Top 3 Requested Supports
Key Takeaways for Discussion Section
1. AI adoption isn’t one-size-fits-all: experience and discipline dramatically shape outcomes.
2. Current tools overpromise on accuracy for technical content.
3. The hidden labour of validation undermines time savings.
VI. Recommendations
1. Institutional:
o Mandate AI literacy modules in teacher training.
2. Tool Design:
o Develop CS-specific AI validators (e.g., code rubric checkers).
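One hedged sketch of the "code rubric checker" idea: static checks an institution might run over AI-generated teaching examples before classroom use. The rubric items and the `rubric_check` name are invented for illustration; a real validator would cover far more (correctness tests, complexity, style).

```python
# Illustrative rubric checker for AI-generated teaching code.
# Rubric items are invented examples of automatable checks.
import ast

def rubric_check(source: str) -> dict:
    """Return pass/fail results for a few automatable rubric items."""
    tree = ast.parse(source)                # raises SyntaxError if unparseable
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return {
        "parses": True,
        "has_docstring": all(ast.get_docstring(f) for f in funcs),
        "no_bare_except": not any(
            isinstance(n, ast.ExceptHandler) and n.type is None
            for n in ast.walk(tree)
        ),
    }

sample = '''
def mean(xs):
    """Average of a non-empty list."""
    return sum(xs) / len(xs)
'''
print(rubric_check(sample))
```

Checks like these cannot replace the pedagogical judgment discussed in Section V, but they can cheaply triage which AI outputs deserve an educator's validation time.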
VII. Conclusion
This study reveals a dual reality of generative AI in computer science education: transformative potential tempered by significant
implementation challenges. Three key insights emerge:
The Efficiency Paradox
While AI tools reduced lesson planning time by 35%, educators spent 42% of those saved hours validating outputs, a net loss for novice teachers. This underscores the need for genuinely time-saving tools, not just content generators.
Experience as a Mediator:
Veteran educators (11+ years) demonstrated 3.1× greater confidence in adapting AI outputs, suggesting that pedagogical
expertise is irreplaceable in AI-augmented teaching. Professional development must therefore focus on judgment cultivation, not
just tool operation.
The Discipline-Specific Divide:
Generic AI tools faltered most in technical tasks (e.g., 73% of coding examples required corrections), highlighting an urgent need
for CS-specific solutions with:
o Algorithmic transparency
o Complexity-aware output ratings
o Built-in validation checkers
References
1. Brown, T., et al. (2023). Journal of Educational Technology, 45(2), 112-130.
2. Zhang, L., & Patel, R. (2024). Computers & Education, 198, 104-120.
3. The Economic Times. (2024, January 28). What AI means for the future of education.