Mental Health Impacts of AI Companions: Triangulating Social Media Quasi-Experiments, User Perspectives, and Relational Theory
Yunhao Yuan, Jiaxun Zhang, Talayeh Aledavood, Renwen Zhang, Koustuv Saha
Audit your conversational AI prompts for relationship-seeking language. Strip out 'I'm here for you' and first-person plural pronouns. Consider exposure limits for vulnerable user segments.
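A minimal sketch of such an audit, assuming prompts are screened with simple regular expressions; the cue list and the audit_prompt helper below are illustrative placeholders, not phrases or tooling taken from the study:

```python
import re

# Hypothetical cue list: relationship-seeking phrases and first-person
# plural pronouns to flag in a system prompt. Illustrative only.
RELATIONSHIP_CUES = [
    r"\bI'?m here for you\b",
    r"\bI care about you\b",
    r"\bI(?:'m| am) always here\b",
    r"\bwe\b", r"\bus\b", r"\bour\b",   # first-person plural framing
]

def audit_prompt(prompt: str) -> list[str]:
    """Return the relationship-seeking cue patterns found in a prompt."""
    return [pat for pat in RELATIONSHIP_CUES
            if re.search(pat, prompt, flags=re.IGNORECASE)]

if __name__ == "__main__":
    system_prompt = "Remember, I'm here for you. We can get through this together."
    print(audit_prompt(system_prompt))
    # flags both the reassurance phrase and the plural framing
```

A pass like this can run in CI over prompt templates, so relationship-seeking language is caught before it ships rather than discovered in user transcripts.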
AI companion chatbots like Replika promise empathetic support, but their psychosocial effects remain poorly understood. Users form attachments without a clear picture of the mental health trade-offs.
Method: The researchers applied stratified propensity score matching and Difference-in-Differences regression to longitudinal Reddit data, finding mixed effects: users showed greater affective expression but also more grief-related language after engaging with AI companions. The study triangulated 372 users' usage logs with qualitative interviews, isolating how relationship-seeking language in prompts correlates with dependency patterns. Even subtle cues, such as 'I'm here for you' or 'we' instead of 'you', triggered measurable shifts in user affect.
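A minimal sketch of that matching-plus-DiD pipeline on synthetic data, assuming a logistic propensity model, quintile strata, and an OLS specification with user-clustered standard errors; the column names, the affect_score outcome, and the simulated effect size are placeholders, not the study's variables or results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "user_id": np.arange(n),
    "account_age": rng.normal(3, 1, n),       # hypothetical baseline covariates
    "prior_activity": rng.poisson(20, n),
    "treated": rng.integers(0, 2, n),         # engaged with an AI companion
})

# 1) Propensity scores from baseline covariates, then quintile strata
X = df[["account_age", "prior_activity"]]
df["pscore"] = LogisticRegression().fit(X, df["treated"]).predict_proba(X)[:, 1]
df["stratum"] = pd.qcut(df["pscore"], q=5, labels=False)

# 2) Long format: one row per user per period (pre/post engagement)
long = df.loc[df.index.repeat(2)].reset_index(drop=True)
long["post"] = np.tile([0, 1], n)
effect = 0.3                                  # synthetic treatment effect
long["affect_score"] = (
    rng.normal(0, 1, 2 * n)
    + 0.5 * long["treated"]
    + 0.2 * long["post"]
    + effect * long["treated"] * long["post"]
)

# 3) Difference-in-Differences with stratum fixed effects; the coefficient
#    on treated:post is the DiD estimate of the engagement effect
model = smf.ols("affect_score ~ treated * post + C(stratum)", data=long).fit(
    cov_type="cluster", cov_kwds={"groups": long["user_id"]}
)
print(model.params["treated:post"])
```

Here the stratum fixed effects stand in for the stratified matching step, and clustering by user accounts for repeated pre/post observations; the printed interaction coefficient should recover the simulated effect.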
Caveats: Observational Reddit data cannot establish causation; self-selection bias means heavy users may already differ in baseline mental health.
Reflections: What dosage thresholds trigger dependency versus therapeutic benefit? · Can design interventions (e.g., session timers, relationship framing) mitigate grief responses without sacrificing engagement? · How do effects vary across demographic groups and pre-existing mental health conditions?