hci index
Curated weekly summaries of the latest HCI papers from arXiv: concise synopses, actionable insights, and critical takeaways to keep researchers, designers, and practitioners informed.
2025
OCTOBER (478 papers)

W43: AI Systems Ship Faster Than Institutions Can Absorb Them
From healthcare to education, deployment reality collides with governance capacity

W42: Systems Work Until Users Need to Verify Them
From AI agents to data visualizations, this week's research exposes the verification gap

W41: Users Want Control, Not Automation
AI systems personalize faster than people can calibrate trust, and intermediate autonomy wins

W40: AI Systems Become Infrastructure Before Users Learn When to Distrust Them
From code completion to healthcare triage, automation outpaces calibration across domains

SEPTEMBER (578 papers)

W39: Platform Signals Betray Users Faster Than Users Betray Themselves
Ranking, ads, and moderation tools shape behavior through channels users don't recognize as revealing

W38: Automation Erodes Agency While Claiming to Empower It
From memory failures to override rights, rigorous studies expose the control gap in AI-assisted systems

W37: Proactive AI Systems Trigger Self-Threat, Not Trust Issues
When unsolicited help backfires, plus XR privacy gaps and the cost of ambient intelligence

W36: AI Assistance Erodes the Autonomy It Promises to Augment
Rigorous studies show delegation reshapes human judgment, flattens choices, and reduces cognitive engagement

AUGUST (542 papers)

W35: AI Systems Demand New Verification Labor
From clinical safety to creative workflows, human judgment becomes the bottleneck

W34: AI Assistance Breaks Down When Users Can't Verify Outputs
From coding tools to clinical decisions, productivity gains collapse at the verification bottleneck

W33: AI Transparency Tools Fail When Users Can't Evaluate What They See
From clinical diagnosis to companion apps, humans systematically misjudge machine reliability

W32: AI Moves From Tool to Infrastructure, Verification Lags Behind
Systems treat AI as a collaborative layer while users struggle to catch hallucinations in real-world deployment

JULY (655 papers)

W31: Transparency Doesn't Fix AI, Control Does
Explainability visualizations backfire while practitioners demand the power to contest, not just comprehend

W30: Systems Generate Faster Than Humans Can Verify
From medical chatbots to image descriptions, the bottleneck is trust calibration, not accuracy

W29: AI Systems Gain Influence While Losing Accountability
From mental health chatbots to mobile agents, persuasiveness and verifiability have decoupled

W28: AI Systems Project Confidence Users Cannot Verify
From developer tools to medical models, the gap between AI's certainty and actual reliability is widening

W27: AI Systems Work in Labs but Collapse at Deployment
From neural prosthetics to classroom tools, implementation gaps matter more than technical gains

JUNE (497 papers)

W26: The Evidence Turns Against Deployed Interventions
Phishing training fails at scale, LLM ideas don't execute, and users can't parse what systems need

W25: Automation Creates Verification Labor It Cannot Solve
From gig platforms to surgical tools, deployed systems reveal who pays the cost of algorithmic opacity

W24: The Infrastructure Beneath AI Claims Is Collapsing
Platform APIs fail audits, evaluation methods can't support their conclusions, and expertise resists capture

W23: AI Systems Crush Benchmarks But Crumble in Deployment
From healthcare to accessibility to workforce automation, technical performance and human utility have diverged

MAY (412 papers)

W22: Systems Outpace Verification in High-Stakes Domains
From nuclear control rooms to accessibility tools, users can't evaluate what AI executes

W21: The Verification Bottleneck Arrives
AI systems outpace human evaluation capacity, and transparency doesn't fix it

W20: Systems Outpace Human Verification Across Healthcare, Work, and Privacy
When AI makes checking harder than doing, deployment reveals the infrastructure gaps research ignored

MARCH (565 papers)

W13: Verification Becomes the Bottleneck for AI Deployment
From healthcare to mobile agents, capable systems fail because users can't tell when to trust them

W12: AI Systems Outpace Human Steering Capacity
Generative models produce impressive outputs users can't reliably control, verify, or trust

W11: Lab Success Meets Deployment Failure
From healthcare AI to XR interfaces, systems collapse when real-world friction exceeds controlled testing

W10: Capability Isn't the Bottleneck Anymore
AI verification and AR deployment constraints dominate a week of post-launch reality checks

FEBRUARY (313 papers)

W09: Capability Isn't the Bottleneck Anymore
Deployed AI fails at dynamic calibration to human context, not at static reasoning

W08: Capability Isn't the Bottleneck Anymore
From Sierra Leone to VR headsets, the real interface problem is helping users know when to trust systems

W07: AI Systems Create Dependency Patterns Users Don't Recognize
From tutoring to ride-hailing, interfaces that scaffold well in testing reshape cognition in deployment

W06: AI Works Best When Humans Edit, Not Accept
Six studies show capability isn't the bottleneck: interaction design determines whether AI helps or harms

JANUARY (485 papers)

W05: Systems Work But Users Can't Verify Them
From AI explanations to AR privacy, deployment fails when capability outpaces human evaluation

W04: AI Teammates Make Performance Worse When Tasks Get Hard
Confidence calibration failures and workflow disruption dominate the strongest research week in months

W03: Deployment Kills More Innovations Than Technical Limits
Accessibility systems work in labs but collapse in institutions lacking training, cost models, and implementation scaffolding

W02: AI Integration Creates Calibration Problems, Not Just Solutions
From RLHF preference manipulation to automation inequality, deployment reveals messy adaptation work

W01: Deployment Constraints Now Drive the Research Agenda
From cybersickness prediction to retinal implants, the bottleneck is making systems work in practice