Tool and Tutor? Experimental evidence from AI deployment in cancer diagnosis
Vivianna Fang He, Sihan Li, Phanish Puranam, Feng Lin
If you're deploying AI assistants in high-stakes domains, measure skill transfer explicitly. Don't assume performance gains during AI use translate to capability after removal. Design for learning, not just augmentation.
AI diagnostic tools let novices perform like experts immediately, but reliance on AI for task completion blocks the learning needed to develop actual expertise.
Method: Researchers deployed AI-based cancer diagnostic tools to medical novices in a controlled experiment, measuring both immediate diagnostic performance and skill retention after the AI was removed. The design isolates whether AI acts purely as a performance crutch or builds transferable diagnostic capability: diagnostic accuracy was tracked with and without AI assistance across multiple sessions, enabling a learning-curve analysis that separates tool-dependent performance from internalized skill.
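To make the core comparison concrete, here is a minimal sketch of how one might compute the gap between AI-assisted and unassisted accuracy per participant. This is an illustration only, not the authors' analysis pipeline; the data schema (participant, session, ai_assisted, correct) and the "reliance gap" metric are assumptions for the sketch.

```python
# Hypothetical sketch: separating tool-dependent performance from retained skill.
# The Trial fields and the reliance-gap metric are assumptions, not the
# authors' actual data schema or analysis.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Trial:
    participant: str
    session: int        # session order, e.g., 1..N
    ai_assisted: bool   # True while the AI tool is available
    correct: bool       # diagnosis matched ground truth

def accuracy(trials):
    """Mean accuracy over a list of trials; None if the list is empty."""
    return sum(t.correct for t in trials) / len(trials) if trials else None

def skill_transfer_summary(trials):
    """Per-participant gap between AI-assisted accuracy and unassisted
    accuracy after the tool is removed.

    A small gap suggests internalized skill; a large gap suggests
    performance that depended on the tool."""
    by_participant = defaultdict(lambda: {"assisted": [], "unassisted": []})
    for t in trials:
        key = "assisted" if t.ai_assisted else "unassisted"
        by_participant[t.participant][key].append(t)
    summary = {}
    for pid, groups in by_participant.items():
        a = accuracy(groups["assisted"])
        u = accuracy(groups["unassisted"])
        summary[pid] = {
            "assisted_acc": a,
            "unassisted_acc": u,
            "reliance_gap": (a - u) if a is not None and u is not None else None,
        }
    return summary

# Toy usage: one participant diagnoses 4 cases with AI, then 4 without.
trials = [
    Trial("p1", 1, True, True), Trial("p1", 1, True, True),
    Trial("p1", 2, True, True), Trial("p1", 2, True, False),
    Trial("p1", 3, False, True), Trial("p1", 3, False, False),
    Trial("p1", 4, False, False), Trial("p1", 4, False, False),
]
print(skill_transfer_summary(trials))
# -> {'p1': {'assisted_acc': 0.75, 'unassisted_acc': 0.25, 'reliance_gap': 0.5}}
```

Under this framing, a near-zero reliance gap after AI removal would indicate the tool acted as a tutor; a large positive gap would indicate it acted only as a tool.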
Caveats: Study focuses on cancer diagnosis; skill transfer dynamics may differ in domains with faster feedback loops or lower error costs.
Reflections: What interaction patterns during AI-assisted diagnosis predict better skill retention? · Can interface design choices (e.g., explanation timing, confidence displays) improve learning outcomes without sacrificing immediate performance? · How long do retained skills persist after AI removal in different medical specialties?