Judgmental Bot: Conversational AI in Online Mental Health Screening
Ryan Raimi, Antino Kim, Sezgin Ayabakan, Alan R. Dennis
Abstract:
Only a fraction of people struggling with mental health issues seek medical care, partly due to the fear of being judged, so deploying text-based conversational agents (i.e., chatbots) as a mental health screening tool is often viewed as a way to lower a barrier to mental healthcare. We conducted four experiments and a qualitative study and consistently found that participants perceived a text-based chatbot to be more judgmental than a human mental healthcare professional, even though the interactions were identical. This greater perceived judgmentalness reduced participants' willingness to use the service, to disclose information, and to follow the agent's recommendations. Judgmentalness was seen as a rush to judgment without a proper grasp of the issues. The chatbot was perceived as more judgmental because it was seen as less capable of deeply understanding the issues (e.g., emotionally and socially) and of making users feel heard and validated. We have long assumed that chatbots can address the real or imagined fear of being judged by others for stigmatized conditions such as mental illness. Our study shows that perceptions of judgmentalness are the opposite of what we have assumed and that these perceptions significantly influence patients' acceptance of chatbots for mental health screening.
MIS Quarterly, Forthcoming, Feb 2025