ABOUT THIS ISSUE

How was this newsletter synthesized?

Methodology

This newsletter is generated by an AI pipeline (leveraging Anthropic's Sonnet 4.5 and Haiku 4.5 models) that processes the metadata and abstracts of every new arXiv HCI paper from the past week (82 this issue). Each paper is scored from 1 to 5 on three dimensions: Practice (applicability for practitioners), Research (scientific contribution), and Strategy (industry implications). Papers passing a score threshold are grouped into topic clusters, and each cluster is summarized to capture what that body of research is exploring.
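
The scoring-and-clustering stage can be sketched roughly as follows. This is an illustrative sketch only: the function names, the mean-score aggregation, the 3.5 cutoff, and the topic-labelling callback are all assumptions, not the pipeline's actual implementation.

```python
from dataclasses import dataclass

DIMENSIONS = ("practice", "research", "strategy")  # each scored 1-5
THRESHOLD = 3.5  # assumed mean-score cutoff, for illustration

@dataclass
class Paper:
    title: str
    scores: dict  # dimension name -> score in 1..5

    @property
    def mean_score(self) -> float:
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def passes_threshold(paper: Paper) -> bool:
    return paper.mean_score >= THRESHOLD

def cluster_by_topic(papers, topic_of):
    """Group papers that pass the threshold into topic clusters.
    topic_of is assumed to come from an upstream LLM labelling step."""
    clusters = {}
    for p in papers:
        if passes_threshold(p):
            clusters.setdefault(topic_of(p), []).append(p)
    return clusters
```

In this sketch a paper needs a mean of at least 3.5 across the three dimensions to survive; the real pipeline may weight dimensions differently or threshold each one separately.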

Selection Criteria

The pipeline builds a curated selection that balances high scores with topic diversity, and deliberately includes at least one 'contrarian' paper that challenges prevailing assumptions. This selection is then analyzed to identify key findings (patterns across multiple papers) and surprises (results that contradict conventional wisdom). A narrative synthesis ties the week's research together under a unifying frame.
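
The diversity-balanced selection described above can be sketched as a per-cluster quota plus a guaranteed contrarian slot. The data shapes, the quota of two papers per topic, and the function name are assumptions made for illustration, not the pipeline's real parameters.

```python
def curate(clusters, contrarian, per_cluster=2):
    """Select top-scoring papers per topic, then guarantee the
    'contrarian' pick appears in the final list.

    clusters: dict mapping topic -> list of (title, score) pairs.
    contrarian: a (title, score) pair to force-include.
    """
    selection = []
    for topic, papers in clusters.items():
        # Taking the best few from every topic preserves diversity
        # even when one cluster dominates the score distribution.
        ranked = sorted(papers, key=lambda p: p[1], reverse=True)
        selection.extend(ranked[:per_cluster])
    if contrarian not in selection:
        selection.append(contrarian)
    return selection
```

Capping each cluster rather than ranking all papers globally is what keeps a single hot topic from crowding out the rest of the issue.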

Key Themes Discovered

Field Report: ai-interaction

Trust, Agency, and Calibration

This cluster examines how users navigate trust, control, and decision-making when interacting with AI systems across diverse contexts, from coding to writing to therapy. Core tensions emerge: AI assistance improves efficiency but erodes psychological ownership; proactive suggestions work best at workflow boundaries; and users detect sycophancy through inconsistency testing, developing context-dependent mitigation strategies. The research suggests that effective human-AI interaction requires calibrating user expectations to actual AI capabilities, timing interventions to cognitive states, and preserving user agency through design choices like on-demand initiation and style personalization. Epistemically, the field reframes human-AI complementarity as a question of reliability assessment rather than of performance metrics alone.
