ABOUT THIS ISSUE

How this newsletter was synthesized

Methodology

This newsletter is generated by an AI pipeline (leveraging Anthropic Sonnet 4.5 & Haiku 4.5) that processes the metadata and abstracts of every new arXiv HCI paper from the past week (126 this issue). Each paper is scored from 1 to 5 on three dimensions: Practice (applicability for practitioners), Research (scientific contribution), and Strategy (industry implications). Papers that pass a threshold are grouped into topic clusters, and each cluster is summarized to capture what that body of research is exploring.
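The scoring-and-filtering step described above can be sketched roughly as follows. This is an illustrative Python sketch, not the newsletter's actual code: the `ScoredPaper` fields, the summed-score rule, and the threshold value of 9 are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class ScoredPaper:
    title: str
    topic: str     # assumed cluster label, e.g. "ai-interaction"
    practice: int  # 1-5: applicability for practitioners
    research: int  # 1-5: scientific contribution
    strategy: int  # 1-5: industry implications

    @property
    def total(self) -> int:
        # Hypothetical aggregate: the real pipeline may weight
        # dimensions differently.
        return self.practice + self.research + self.strategy

def passes_threshold(paper: ScoredPaper, threshold: int = 9) -> bool:
    # Assumed rule: keep papers whose combined score clears a bar.
    return paper.total >= threshold

papers = [
    ScoredPaper("Trust calibration in code assistants", "ai-interaction", 4, 3, 4),
    ScoredPaper("A niche measurement note", "methods", 2, 2, 1),
]
kept = [p for p in papers if passes_threshold(p)]
# Only the first paper (total 11) clears the assumed threshold of 9.
```

Papers that survive this filter would then be grouped by their topic label before cluster summarization.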

Selection Criteria

The pipeline builds a curated selection that balances high scores with topic diversity—and deliberately includes at least one 'contrarian' paper that challenges prevailing assumptions. This selection is then analyzed to identify key findings (patterns across multiple papers) and surprises (results that contradict conventional wisdom). A narrative synthesis ties the week's research together under a unifying frame.
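One way to balance high scores against topic diversity is a greedy round-robin across clusters: sort papers by score, then pick across topics so no single cluster dominates. The sketch below is purely illustrative; the `select_diverse` helper and its strategy are assumptions, not the pipeline's actual selection method.

```python
from collections import defaultdict

def select_diverse(papers, k):
    """Pick up to k papers, round-robin across topics by descending score.

    papers: list of (title, topic, score) tuples, assumed pre-scored.
    """
    by_topic = defaultdict(list)
    # Bucket papers per topic, best-scored first within each bucket.
    for p in sorted(papers, key=lambda p: p[2], reverse=True):
        by_topic[p[1]].append(p)
    picked = []
    # Round-robin across topics so one cluster cannot fill the issue.
    while len(picked) < k and any(by_topic.values()):
        for topic in list(by_topic):
            if by_topic[topic]:
                picked.append(by_topic[topic].pop(0))
                if len(picked) == k:
                    break
    return picked

sample = [("A", "trust", 5), ("B", "trust", 4), ("C", "agency", 3)]
chosen = select_diverse(sample, 2)
# Picks the top paper from each topic rather than both "trust" papers.
```

A contrarian slot, as described above, could be filled separately by reserving one pick for a paper flagged as challenging prevailing assumptions.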

Key Themes Discovered

Field Report: ai-interaction

Trust, Agency, and Alignment in AI Interaction

This cluster examines how humans calibrate trust, maintain agency, and evaluate AI systems across diverse interaction contexts. Core questions: When do users trust AI outputs? How do design choices preserve user control and sense of agency? What makes AI explanations credible? Research spans trust calibration in code assistants, persona consistency in dialogue, bias detection in emotion recognition, and user preferences for robot autonomy. Methodologically diverse—combining user studies, controlled experiments, and qualitative analysis—the work emphasizes that interaction quality depends on transparency, contextual appropriateness, and alignment between system behavior and user expectations, not merely capability.
