ABOUT THIS ISSUE

How was this newsletter synthesized?

Methodology

This newsletter is generated by an AI pipeline (leveraging Anthropic Sonnet 4.5 & Haiku 4.5) that processes the metadata and abstracts of every new arXiv HCI paper from the past week—148 this issue. Each paper is scored from 1 to 5 on three dimensions: Practice (applicability for practitioners), Research (scientific contribution), and Strategy (industry implications). Papers that pass a score threshold are grouped into topic clusters, and each cluster is summarized to capture what that body of research is exploring.
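The scoring-and-thresholding step can be sketched as follows. This is an illustrative Python sketch only: the `Paper` class, the specific threshold rule (any dimension >= 4), and the sample papers are all assumptions, not the pipeline's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    practice: int  # 1-5: applicability for practitioners
    research: int  # 1-5: scientific contribution
    strategy: int  # 1-5: industry implications

# Hypothetical rule: a paper passes if any dimension scores 4 or higher.
THRESHOLD = 4

def passes(paper: Paper) -> bool:
    return max(paper.practice, paper.research, paper.strategy) >= THRESHOLD

# Illustrative inputs, not real scores from this issue.
papers = [
    Paper("Trust recalibration study", practice=4, research=5, strategy=3),
    Paper("Minor replication note", practice=2, research=3, strategy=2),
]
selected = [p for p in papers if passes(p)]
```

Papers that survive this filter would then be clustered by topic before summarization.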

Selection Criteria

The pipeline builds a curated selection that balances high scores with topic diversity—and deliberately includes at least one 'contrarian' paper that challenges prevailing assumptions. This selection is then analyzed to identify key findings (patterns across multiple papers) and surprises (results that contradict conventional wisdom). A narrative synthesis ties the week's research together under a unifying frame.
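The diversity-balancing selection described above could work like the greedy sketch below. The function name, data shape, and "one top paper per topic, plus a contrarian fallback" rule are assumptions for illustration; the source does not specify the actual algorithm.

```python
def curate(scored, per_topic=1):
    """Greedy sketch: take the top-scoring paper from each topic cluster,
    then ensure at least one paper flagged 'contrarian' is included.
    `scored` is a list of dicts with 'topic', 'score', and 'contrarian' keys
    (a hypothetical schema)."""
    by_topic = {}
    # Group papers by topic, highest score first within each group.
    for paper in sorted(scored, key=lambda p: p["score"], reverse=True):
        by_topic.setdefault(paper["topic"], []).append(paper)
    # Pick the best paper from each topic for diversity.
    picks = [group[0] for group in by_topic.values()]
    # Guarantee a contrarian pick if none made the cut on score alone.
    if not any(p.get("contrarian") for p in picks):
        contrarians = [p for p in scored if p.get("contrarian")]
        if contrarians:
            picks.append(max(contrarians, key=lambda p: p["score"]))
    return picks
```

The contrarian fallback is the interesting design choice: it trades a small amount of average score for a guaranteed challenge to prevailing assumptions, which pure top-k selection would tend to filter out.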

Key Themes Discovered

Field Report: ai-interaction

Trust Calibration in Human-AI Collaboration

This cluster examines how humans learn to work effectively with AI systems through repeated interaction. Core questions center on trust recalibration: Can users mentally adjust to miscalibrated AI confidence? How do interaction patterns shift as AI becomes routine? Research reveals that humans adapt through experience, updating baseline trust and learning rates asymmetrically. However, systematic blind spots emerge—competence shadows in safety-critical domains, observability gaps between code logic and visible outputs, and conformity pressures from multi-AI advice. The work spans educational, professional, and collaborative contexts, emphasizing that productive human-AI partnership requires designed friction, intermediate artifacts, and workflow qualification rather than tool optimization alone.
