Users Mispredict Their Own Preferences for AI Writing Assistance
Vivian Lai, Zana Buçinca, Nil-Jana Akpinar, Mo Houtti, Hyeonsu B. Kang, Kevin Chian, Namjoon Suh, Alex C. Williams
Stop asking users when they want AI help; they can't tell you. Instrument compositional effort instead: track typing velocity, deletion rates, and revision cycles, and train intervention triggers on behavioral signals rather than stated preferences.
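As a concrete illustration, here is a minimal Python sketch of what such instrumentation could look like, assuming a keystroke-event stream is available; the EffortTracker class, event schema, and trigger thresholds are hypothetical and not from the paper.

```python
# Hypothetical sketch: deriving compositional-effort signals from a
# keystroke-event stream. Event schema and thresholds are illustrative,
# not from the paper.
from dataclasses import dataclass, field

@dataclass
class EffortTracker:
    """Tracks rolling behavioral signals over a sliding time window."""
    window_s: float = 30.0
    events: list = field(default_factory=list)  # (timestamp, kind) tuples

    def record(self, timestamp: float, kind: str) -> None:
        """kind: 'insert', 'delete', or 'revision' (an edit above the caret)."""
        self.events.append((timestamp, kind))
        cutoff = timestamp - self.window_s
        self.events = [(t, k) for t, k in self.events if t >= cutoff]

    def signals(self) -> dict:
        """Compute typing velocity, deletion rate, and revision count."""
        n = len(self.events)
        if n == 0:
            return {"velocity": 0.0, "deletion_rate": 0.0, "revisions": 0}
        deletes = sum(1 for _, k in self.events if k == "delete")
        revisions = sum(1 for _, k in self.events if k == "revision")
        return {
            "velocity": n / self.window_s,    # events per second
            "deletion_rate": deletes / n,     # share of deletion events
            "revisions": revisions,           # revision cycles in window
        }

    def should_offer_help(self) -> bool:
        """Illustrative trigger: slow typing plus heavy deletion or revision."""
        s = self.signals()
        return s["velocity"] < 0.5 and (s["deletion_rate"] > 0.4 or s["revisions"] >= 3)
```

The windowed design matters: compositional effort is a local, moment-to-moment state, so triggers should react to recent behavior rather than session-level averages.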
Users say urgency drives their need for AI writing help, but their behavior tells a different story, creating a design paradox for proactive assistants.
Method: A factorial vignette study with 750 pairwise comparisons finds that compositional effort dominates users' decisions (ρ = 0.597), while urgency, the factor users self-report as most important, shows essentially no predictive power (ρ ≈ 0). The perception-behavior gap is stark: users rank urgency first in surveys even though it is the weakest behavioral driver, which makes self-reported preferences an unreliable training signal for proactive AI.
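To make the analysis concrete, here is a minimal sketch of how factor weights could be recovered from pairwise vignette choices with a Bradley-Terry-style logistic model; the factor names, simulated data, and weights are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: estimating factor importance from pairwise
# vignette choices. Data is simulated; not the authors' analysis code.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, factors = 750, ["effort", "urgency", "stakes"]

# Each row: difference in factor levels between vignette A and vignette B.
X = rng.integers(-2, 3, size=(n_pairs, len(factors))).astype(float)

# Simulated choices driven by effort, not urgency (mirrors the finding).
true_w = np.array([1.5, 0.0, 0.4])
y = (X @ true_w + rng.logistic(size=n_pairs) > 0).astype(int)  # 1 = chose A

# Bradley-Terry-style fit: logistic regression on level differences.
model = LogisticRegression().fit(X, y)
for name, w in zip(factors, model.coef_[0]):
    print(f"{name:8s} weight = {w:+.2f}")

# Rank correlation between each factor's level difference and the choice.
for j, name in enumerate(factors):
    rho, _ = spearmanr(X[:, j], y)
    print(f"{name:8s} rho = {rho:+.3f}")
```

On this simulated data the fitted urgency weight and its rank correlation both hover near zero while effort dominates, the same qualitative pattern the study reports.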
Caveats: The study used vignettes, not live writing sessions, and real-time detection of compositional effort remains an unsolved instrumentation problem.
Reflections: Can real-time compositional effort be reliably detected from keystroke dynamics alone? · Do perception-behavior gaps persist after users experience multiple AI interventions? · What other domains exhibit similar misalignment between stated and revealed preferences for AI assistance?