Purpose: This study examines how AI-enabled training outcomes evolve as an intervention transitions from a supervised doctoral pilot to a scaled, longitudinal organizational deployment. It focuses on learning adoption and learning efficiency in a frontline hospitality context, while explicitly examining the role of AI micro-agents operating within a human-in-the-loop governance framework.
Methods: The study adopts a longitudinal cohort extension design, building on a doctoral pilot conducted with 100 frontline employees during 2024 and extending into a scaled operational deployment during 2025. Objective learning-platform trace data from an AI-enabled training system were analyzed across two deployment phases. Learning adoption was measured using exposure-adjusted completion rates, while learning efficiency was assessed using assessment performance and time-on-task metrics. The analysis controls for workforce churn characteristic of frontline service environments.
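The exposure-adjusted completion metric described above can be illustrated with a minimal sketch. The data structure, field names, and cohort figures below are hypothetical assumptions for illustration, not the study's actual instrumentation; the idea is simply that churned employees are scored only against modules they were exposed to while employed.

```python
from dataclasses import dataclass

@dataclass
class LearnerRecord:
    modules_assigned: int   # modules the learner was exposed to while employed
    modules_completed: int  # modules finished before any exit date
    active: bool            # still employed at measurement time

def exposure_adjusted_completion(records: list[LearnerRecord]) -> float:
    """Completion rate counting only modules a learner was actually exposed to,
    so churned employees are not penalized for modules assigned after exit."""
    assigned = sum(r.modules_assigned for r in records)
    completed = sum(r.modules_completed for r in records)
    return completed / assigned if assigned else 0.0

# Hypothetical cohort: one full completer, one mid-program leaver, one partial.
cohort = [
    LearnerRecord(10, 10, True),
    LearnerRecord(6, 5, False),   # churned; only 6 modules were ever assigned
    LearnerRecord(10, 7, True),
]
print(f"{exposure_adjusted_completion(cohort):.2%}")  # → 84.62%
```

Without the exposure adjustment, the leaver would be scored against the full curriculum, understating adoption in high-churn environments.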
Findings: Results show that completion rates normalized from 100% in the pilot phase to 86.82% under real-world scale, reflecting operational normalization rather than reduced effectiveness. Importantly, learning quality and efficiency improved over time: mean assessment scores increased, while average time-on-task declined significantly. These findings indicate faster mastery and deeper learning as AI-enabled training matured, rather than superficial compliance.
Implications: The findings demonstrate that AI-enabled training systems can sustain adoption and improve learning efficiency at scale when designed with constrained agency and supported by human oversight. For organizations in high-churn frontline environments, the results emphasize the importance of evaluating training effectiveness beyond pilot completion metrics and focusing on longitudinal learning quality, efficiency, and governance structures.
Originality: This study provides rare longitudinal, post-dissertation evidence on AI-enabled training effectiveness, directly linking a doctoral pilot to scaled organizational deployment. It advances technology management and digital learning research by introducing a churn-aware evaluation framework and empirically demonstrating how AI micro-agents, operating within human-in-the-loop governance, shape sustainable learning outcomes beyond pilot conditions.
Smrite Goudhaman. Strategic Human Oversight Frameworks for AI-Enabled Training Microagents: Evidence from a Longitudinal Adoption Study. 2025, 16, 87-102.