ABOUT THIS ISSUE

How was this newsletter synthesized?

Methodology

This newsletter is generated by an AI pipeline (leveraging Anthropic Sonnet 4.5 & Haiku 4.5) that processes the metadata and abstracts of every new arXiv HCI paper from the past week—30 this issue. Each paper is scored from 1 to 5 on three dimensions: Practice (applicability for practitioners), Research (scientific contribution), and Strategy (industry implications). Papers passing a threshold are grouped into topic clusters, and each cluster is summarized to capture what that body of research is exploring.
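The score-threshold-cluster step can be sketched in a few lines. Everything here is an illustrative assumption—the field names, the threshold value, and the use of a mean across dimensions are not the pipeline's actual implementation:

```python
# Minimal sketch of the scoring-and-clustering step.
# Field names ("scores", "topic") and the 3.5 threshold are
# hypothetical; the real pipeline's internals are not published.

def select_and_cluster(papers, threshold=3.5):
    """Keep papers whose mean dimension score meets the threshold,
    then group the survivors by topic cluster."""
    clusters = {}
    for paper in papers:
        scores = paper["scores"]  # Practice / Research / Strategy, each 1-5
        if sum(scores.values()) / len(scores) >= threshold:
            clusters.setdefault(paper["topic"], []).append(paper["title"])
    return clusters

papers = [
    {"title": "Agent robustness", "topic": "ai-interaction",
     "scores": {"practice": 4, "research": 5, "strategy": 3}},
    {"title": "Low-signal study", "topic": "misc",
     "scores": {"practice": 2, "research": 2, "strategy": 1}},
]
print(select_and_cluster(papers))
# → {'ai-interaction': ['Agent robustness']}
```

A mean is the simplest way to collapse the three dimensions into one gate; the actual pipeline could just as plausibly use per-dimension thresholds or a weighted score.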

Selection Criteria

The pipeline builds a curated selection that balances high scores with topic diversity—and deliberately includes at least one 'contrarian' paper that challenges prevailing assumptions. This selection is then analyzed to identify key findings (patterns across multiple papers) and surprises (results that contradict conventional wisdom). A narrative synthesis ties the week's research together under a unifying frame.
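The curation logic described above—balancing high scores with topic diversity while guaranteeing a contrarian pick—resembles a greedy selection with a post-hoc swap. This is a hypothetical sketch of that idea, not the pipeline's actual code; the "score", "topic", and "contrarian" fields are assumed:

```python
# Hypothetical sketch of score/diversity/contrarian curation.
# Greedy by score, at most one paper per topic, then swap in a
# contrarian paper if none made the cut.

def curate(scored_papers, max_picks=5):
    picks, seen_topics = [], set()
    # Greedy pass: highest-scored paper from each topic, up to the cap.
    for p in sorted(scored_papers, key=lambda p: p["score"], reverse=True):
        if p["topic"] not in seen_topics and len(picks) < max_picks:
            picks.append(p)
            seen_topics.add(p["topic"])
    # Guarantee a contrarian paper: replace the weakest pick if needed.
    if not any(p.get("contrarian") for p in picks):
        contrarians = [p for p in scored_papers if p.get("contrarian")]
        if contrarians:
            picks[-1] = max(contrarians, key=lambda p: p["score"])
    return picks

papers = [
    {"title": "A", "topic": "agents",  "score": 5.0},
    {"title": "B", "topic": "trust",   "score": 4.0},
    {"title": "C", "topic": "agents",  "score": 4.5},
    {"title": "D", "topic": "methods", "score": 2.0, "contrarian": True},
]
print([p["title"] for p in curate(papers, max_picks=3)])
# → ['A', 'B', 'D']
```

Note how C loses to A on topic diversity despite outscoring B, and D survives on the contrarian guarantee despite its low score—mirroring the trade-offs the selection criteria describe.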

Key Themes Discovered

Field Report: ai-interaction

Agents Under Scrutiny

This cluster examines how autonomous AI agents behave in real-world deployment contexts and where they fail. Core questions: How do agents respond to adversarial manipulation? What constitutes trustworthy collaborative behavior? How do humans and agents negotiate agency in physical and digital tasks? The research spans adversarial robustness (prompt injection attacks), behavioral evaluation frameworks beyond correctness, human-centered interface design, and domain-specific performance gaps. It is methodologically diverse—combining benchmarking, user studies, interpretability analysis, and prototype evaluation—but unified by a concern for agent reliability and human oversight in consequential domains (customer service, software engineering, mental health, energy management).

1/5