Study Notes on the Effects of Generative Artificial Intelligence on Learning Motivation, Processes, and Performance
ORIGINAL ARTICLE
Title: Beware of Metacognitive Laziness: Effects of Generative Artificial Intelligence on Learning Motivation, Processes, and Performance
Authors: Yizhou Fan, Luzhen Tang, Huixiao Le, Kejie Shen, Shufang Tan, Yueying Zhao, Yuan Shen, Xinyu Li, Dragan Gašević
Affiliations:
Graduate School of Education, Peking University, Beijing, 100871, China
Centre for Learning Analytics, Faculty of Information Technology, Monash University, Clayton, Victoria 3800, Australia
Zhejiang Lab, Hangzhou, Zhejiang, 311121, China
Correspondence:
Graduate School of Education, Peking University, Beijing, 100871, China
Email: fyz@pku.edu.cn
Funding information:
National Natural Science Foundation of China, Grant/Award Number: 62407001
Society for Learning Analytics Research (ECR Research Grant), Grant/Award Number: 2023
Background
The continuous development of technological and educational innovations allows learners to receive extensive support from various agents, such as instructors, peers, education technologies, and generative artificial intelligence (AI) tools like ChatGPT.
There is a significant rise in academic interest surrounding human-AI collaboration and hybrid intelligence within educational frameworks.
Objectives
The concept of hybrid intelligence is still in its early stages, and it remains unclear how learners benefit from collaborating with AI, human experts, and intelligent learning systems.
Current understanding of the mechanisms and consequences of hybrid human-AI learning lacks depth, necessitating rigorous empirical research.
Methods
Conducted a randomized experimental study comparing learners' motivation, self-regulated learning processes, and performance across four types of support:
AI (ChatGPT)
Human expert consultations
Writing analytics tools
Control group (no additional tools)
Participants: 117 university students; multi-channel learning data were collected and analyzed.
Results
Intrinsic Motivation:
No noticeable differences in post-task intrinsic motivation across groups.
Self-Regulated Learning (SRL) Processes:
Significant differences in frequency and sequences of SRL processes among groups.
Performance:
ChatGPT group showed superior improvement in essay scores; however, knowledge gain and transfer were not significantly different across groups.
Conclusions
The absence of motivational differences did not imply uniform learning processes: groups regulated their learning differently, which in turn produced varied performance.
Notably, AI tools like ChatGPT may contribute to learners' dependence on technology, potentially inducing a state of metacognitive "laziness".
Future educational strategies must consider the roles of different agents to enhance the overall understanding of hybrid intelligence dynamics.
Keywords
ChatGPT; Generative AI; Hybrid Intelligence; Learning Analytics; Experimental Study
Practitioner Notes
What is already known:
Hybrid intelligence aims to complement human capabilities with AI, facilitating better collaboration for lifelong learning.
Generative AI has shown potential in enhancing learning through immediate feedback and personalized experiences, though its effectiveness remains mixed, depending on the context.
What this paper adds:
A comprehensive analysis from a randomized experimental study contrasting learners’ motivational aspects and learning processes across different agent supports.
The identification of potential dependence on AI like ChatGPT and its implications for metacognitive engagement within learning environments.
Evidence that AI support can yield short-term performance improvements without corresponding gains in intrinsic motivation, knowledge, or transfer.
Implications for Practice/Policy:
Encourage learners to engage deeply with knowledge rather than merely relying on AI feedback for efficient task completion.
Educators should carefully select suitable tasks for AI implementation while fostering intrinsic motivation and scaffolding active learning practices.
Future investigations should explore ethical and effective practices for learners to manage collaboration with AI agents in educational scopes.
Introduction
Rapid technological advances in the 21st century have made AI a transformative force across diverse sectors, necessitating learning approaches in which humans cooperate with AI.
Lifelong learning becomes imperative as individuals benefit from continuous knowledge acquisition involving a symbiotic interface with machine intelligence.
Concept of Hybrid Intelligence
Hybrid intelligence combines human and machine capabilities, enhancing problem-solving capacities unreachable by either entity alone.
Literature reveals that while this concept shows promise, empirical understanding requires further examination of mechanisms at play in human-AI interactions.
Akata et al. (2020) define hybrid intelligence as “a combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them.”
Self-Regulated Learning (SRL) Framework
SRL involves cognitive, metacognitive, and motivational processes in learning, defined by Zimmerman (2000) as self-generated thoughts and behaviors toward achieving personal educational goals.
The three-phase SRL model consists of:
Forethought: Task analysis, goal-setting, and strategic planning underpinned by motivation.
Performance: Task execution, progress monitoring, and self-control application to sustain focus.
Self-Reflection: Performance assessment and future strategic adjustments based on outcomes.
Metacognition, addressed by Flavell (1979), is integral within SRL and defined as “thinking about thinking.”
Factors Impacting SRL Processes
Learners face various barriers in self-regulating, including inadequate metacognitive skills, low motivation, and task complexity.
External support from educational agents (teachers, peers, technology) is vital for bolstering self-regulation and ensuring effective learning.
AI's Role in Education
The advent of AI in education has been both beneficial and problematic, with conversational technologies like ChatGPT gaining traction for their efficacy in generating coherent responses.
However, concerns arise about learners' over-reliance on AI-generated content, which might lead to cognitive offloading—a reduction in cognitive engagement and metacognitive abilities crucial for deeper learning.
The concept of cognitive offloading, proposed by Risko and Gilbert (2016), describes delegating cognitive tasks to external aids to reduce effort; over time, this can diminish engagement and the capacity for self-regulation.
Research Questions
To address existing gaps within hybrid intelligence literature, this study poses three primary inquiries:
RQ1: How does interaction with AI and other support agents influence learners’ intrinsic motivation?
RQ2: To what extent do learners engage in different self-regulated learning processes when using varied support agents?
RQ3: Are there differences in task performance, knowledge gain, and transfer among learners supported by different agents?
Methods
Experimental Design
Participants: 117 university students were engaged in a structured English reading and writing task split across two stages with random assignment to four experimental conditions:
CN Group (Control, 30 participants): No additional support.
AI Group (35 participants): Support from ChatGPT 4.0.
HE Group (25 participants): Support from a human academic expert.
CL Group (27 participants): Support using writing analytics tools (Checklist).
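As an illustrative sketch only (not the authors' actual allocation procedure), randomly assigning the 117 participants to the four reported group sizes might look like this; the seed and participant IDs are assumptions for reproducibility:

```python
import random

# Hypothetical anonymized participant IDs 1..117
participants = list(range(1, 118))
random.seed(42)  # fixed seed so the assignment is reproducible
random.shuffle(participants)

# Group sizes as reported in the study: CN=30, AI=35, HE=25, CL=27
group_sizes = {"CN": 30, "AI": 35, "HE": 25, "CL": 27}
groups, start = {}, 0
for name, size in group_sizes.items():
    groups[name] = participants[start:start + size]
    start += size
```

Shuffling once and slicing guarantees each participant lands in exactly one condition while preserving the unequal group sizes.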
Research Procedure
Participants underwent a comprehensive task consisting of pre-task, training, reading, writing, revising, and post-task phases.
Multiple measures, including motivation assessments (via Intrinsic Motivation Inventory), learning trace data, essay scores, and knowledge tests, were captured for robust analysis.
Results and Discussion
RQ1: Intrinsic Motivation
Overall motivation scores revealed no significant differences across conditions (F=1.087, p=0.358), although the control group descriptively showed lower interest and higher pressure.
RQ2: SRL Process Variability
Significant differences were found in the frequency and types of SRL processes during the revision phase across groups, with a trend toward greater engagement in elaboration, organization, and evaluation processes among AI and HE participants.
RQ3: Learning Performance
The AI group showed greater essay improvement post-intervention than the other groups (F=4.549, p=0.005), yet knowledge gain and transfer scores were statistically similar across conditions, raising questions about immediate performance gains versus broader educational outcomes.
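The between-group comparisons reported here (e.g., F=4.549) are one-way ANOVAs. As a minimal sketch of how such an F statistic is computed, the following uses small hypothetical score lists, not the study's data:

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    all_scores = [x for g in groups for x in g]
    n, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n
    # Between-group sum of squares: group-size-weighted squared deviations
    # of group means from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group's mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical essay-improvement scores for the four conditions
cn = [2, 3, 2, 4, 3]
ai = [5, 6, 5, 7, 6]
he = [4, 4, 5, 4, 3]
cl = [3, 4, 3, 5, 4]
f_stat = one_way_anova_f([cn, ai, he, cl])
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that between-group variance outweighs within-group variance, which is what the reported p-values quantify.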
Limitations
Sample size constraints and task specificity may limit the generalizability of the findings. Future studies should recruit larger, more diverse samples and investigate varied task types across contexts to clarify AI's effects on motivation and cognitive engagement over longer time scales.
Conclusion
The findings reveal a dichotomy between AI's effects on immediate task performance and its effects on longer-term learner engagement and intrinsic motivation.
The pursuit of hybrid intelligence should leverage strengths of both human and AI intelligences while avoiding pitfalls of over-reliance and metacognitive laziness.
References
A comprehensive list of references, including foundational and recent studies related to self-regulated learning, metacognition, AI in education, and motivation, follows here, detailing various scholars and findings within the broader context of educational technologies and cognitive psychology.