Multi-Tool Analysis of User Interface & Accessibility in Deployed Web-Based Chatbots
Mukesh Rajmohan, Smit Desai, Sanchari Das
Run multi-tool accessibility audits before deploying conversational interfaces, and prioritize embedded widgets, which fail more often than standalone apps. Don't rely on automated tools alone: manual keyboard navigation testing catches the roughly 40% of issues that automated audits miss.
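One way to put the first recommendation into practice is to script an accessibility-only Lighthouse pass before release. The sketch below is illustrative rather than the paper's procedure: it assumes Node with the `lighthouse` and `chrome-launcher` packages installed, and the audited URL is hypothetical.

```typescript
// Sketch: accessibility-only Lighthouse audit of a page that embeds a chatbot widget.
// Assumes Node with the `lighthouse` and `chrome-launcher` packages installed.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditAccessibility(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['accessibility'], // skip performance/SEO; audit accessibility only
      output: 'json',
    });
    if (!result) throw new Error('Lighthouse produced no result');

    const { lhr } = result;
    const score = (lhr.categories.accessibility.score ?? 0) * 100;
    console.log(`Accessibility score for ${url}: ${score}`);

    // Print every failing audit, e.g. missing ARIA labels or insufficient color contrast.
    for (const audit of Object.values(lhr.audits)) {
      if (audit.score === 0) {
        console.log(`FAIL: ${audit.id}: ${audit.title}`);
      }
    }
  } finally {
    await chrome.kill();
  }
}

// Hypothetical URL of a page with an embedded chat widget.
auditAccessibility('https://example.com/support').catch(console.error);
```

The same score and failing-audit list can be turned into a CI gate, which is one way to make the "audit before deploying" advice routine rather than a one-off check.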
Over 80% of deployed chatbots ship with at least one critical accessibility issue, and 45% lack semantic structure entirely, making them unusable with screen readers.
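For context on what "semantic structure" means for a chat widget, the sketch below shows one commonly recommended pattern: a live-region transcript that screen readers announce and an input with a programmatically associated label. The element ids, labels, and markup are assumptions for illustration, not taken from the audited widgets.

```typescript
// Sketch of the semantic structure many audited widgets lack: a live-region transcript
// plus a labeled input. Ids and label text are illustrative assumptions.
function buildAccessibleChatWidget(): HTMLElement {
  const container = document.createElement('section');
  container.setAttribute('aria-label', 'Support chat');

  // role="log" with aria-live="polite" announces new bot messages without stealing focus.
  const transcript = document.createElement('div');
  transcript.setAttribute('role', 'log');
  transcript.setAttribute('aria-live', 'polite');
  container.appendChild(transcript);

  // A real <label> bound to the input, instead of relying on placeholder text alone.
  const label = document.createElement('label');
  label.htmlFor = 'chat-input';
  label.textContent = 'Type your message';

  const input = document.createElement('input');
  input.id = 'chat-input';
  input.type = 'text';

  // Visible button text doubles as the accessible name for assistive technology.
  const send = document.createElement('button');
  send.type = 'button';
  send.textContent = 'Send';

  container.append(label, input, send);
  return container;
}
```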
Method: The team audited 106 production chatbots across healthcare, education, and customer service using four tools: Google Lighthouse, PageSpeed Insights, SiteImprove, and Microsoft Accessibility Insights. Missing ARIA labels, broken keyboard navigation, and color contrast failures clustered in embedded widgets more than in standalone apps. The multi-tool approach caught issues that single-tool audits missed; Lighthouse alone flagged only 60% of the problems manual audits revealed.
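The manual checks in the study were performed by human auditors, but part of that workflow can be scripted. The sketch below, which is not from the paper, uses Playwright to tab through a page and report whether focus ever lands inside the embedded chat widget; the URL and the '#chat-widget' selector are hypothetical.

```typescript
// Sketch: scripted keyboard-navigation check with Playwright. Assumes the `playwright`
// package; the URL and widget selector are placeholders.
import { chromium } from 'playwright';

async function chatWidgetIsKeyboardReachable(url: string, widgetSelector: string): Promise<boolean> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  let reached = false;
  for (let i = 0; i < 50 && !reached; i++) {
    await page.keyboard.press('Tab'); // simulate a keyboard-only user moving focus
    reached = await page.evaluate(
      (sel) => !!document.activeElement?.closest(sel),
      widgetSelector,
    );
  }

  await browser.close();
  return reached;
}

chatWidgetIsKeyboardReachable('https://example.com/support', '#chat-widget')
  .then((ok) => console.log(ok ? 'Chat widget receives keyboard focus' : 'Chat widget is never focusable via Tab'));
```

A scripted check like this covers only focus reachability; operating the widget (sending a message, dismissing it, escaping focus traps) still benefits from a human keyboard walkthrough.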
Caveats: Study focused on web-based chatbots only. Native mobile apps and voice interfaces weren't tested.
Reflections: Do chatbot frameworks introduce systematic accessibility debt, or is this a training gap among developers? · Which accessibility issues correlate with user abandonment in production? · Can automated testing tools be trained to catch the 40% of issues that currently require manual audits?