Stanford scholars discuss the benefits and risks of using talking software to address mental health

Conversational software programs might give patients a less intimidating setting for discussing mental health, but they carry risks to privacy and accuracy. Stanford scholars discuss the pros and cons of this trend.

Interacting with a machine may seem like a strange and impersonal way to seek mental health care, but advances in technology and artificial intelligence are making that type of engagement more and more a reality. Online sites such as 7 Cups of Tea and Crisis Text Line are providing counseling services via web and text, but this style of treatment has not been widely utilized by hospitals and mental health facilities.


Conversational software programs are making it possible for people to seek mental health care online and via text, but the risks and benefits need further study, Stanford experts say. (Image credit: roshinio / Getty Images)

Stanford scholars Adam Miner, Arnold Milstein and Jeff Hancock examined the benefits and risks associated with this trend in a Sept. 21 article in the Journal of the American Medical Association. They discuss how technological advances now offer the capability for patients to have personal health discussions with devices like smartphones and digital assistants.

Stanford News Service interviewed Miner, Milstein and Hancock about this trend.

 

Why would conversational agents – software programs that converse with users through voice or text – be effective for mental health care? Which aspects of mental health care could they be applied to?

Miner: Talking to another person about mental health can be scary and often treatment is hard to access. Conversational agents may allow people to share experiences they don’t want to talk about with another person. If successful, this technology could recognize and respond to mental health needs. People may be more honest about their symptoms.

Hancock: They can also be available whenever they're needed. Delivering care at the moment it's needed most is what could make these conversational agents really effective for people.

 

How could interacting with this technology be more beneficial to a patient than seeing a human mental health professional?

Hancock: I’m not sure that it could ever be more beneficial than interacting with a human mental health professional, but they could play a role in simply being available. That is, there are only so many mental health professionals, and they can’t be of assistance to all who need them all the time. So, these programs can at least play a role in helping to triage.

Miner: Most people don’t like feeling judged. Talking to a machine may feel like a safer way to share experiences without feeling ashamed. Also, their value may not be in being “better” than a well-trained clinician, but in their accessibility and scalability.

 

Are there risks associated with this technology?

Miner: If a user has a negative experience disclosing mental health problems to a conversational agent, he or she may be less willing to seek help in the future. Also, human-to-human connection is an important part of healing. A balance must be struck between high-tech and high-touch treatment.

Hancock: Yes, and importantly, we don’t even know what all the risks are because the psychological aspects are so understudied. One concern is what happens over longer interactions – do the benefits of interacting with a conversational agent fade or even become negative? Could interacting with a machine over time lead to a sense of loneliness or disconnection, or even become a crutch, in the form of preferring to interact with a machine rather than with other people?

 

What are some of the dangers with regard to privacy?

Miner: Privacy is incredibly important, and we have to get it right to build trust. User expectations of privacy are unclear. A conversation with a machine may feel more private, but it might carry a higher risk of being stored indefinitely or shared in unexpected ways through social media or services that track online behavior.

 

Your article mentions that hundreds of thousands of people have already engaged in similar technology-based interactions – for example, 7 Cups of Tea, Talkspace. What must occur for widespread adoption at hospitals, mental health facilities, etc.?

Milstein: Mainstream health care organizations are unlikely to adopt this innovation until there is plausible evidence of therapeutic benefit and the applicability of HIPAA privacy rules is clarified.

Miner: There is a growing demand for safe, scalable and cost-effective mental health treatment. Clinical trials can address safety and efficacy, but clarity around user expectations and the rules governing medical devices is also needed.

Hancock: The success of 7 Cups of Tea and others, like Crisis Text Line, indicates that mental health support delivered through text, by phone or computer, is viable. What’s needed next is improved technology, along with the research required to understand what kind of conversational agent will be most beneficial and avoid harm. Some of our research suggests that people can get the same kind of psychological benefits from disclosing to a machine as to another human – at least in a one-off interaction. We still don’t know about long-term interactions, however.

Adam Miner is an AI psychologist and instructor in Stanford’s Department of Psychiatry and Behavioral Sciences. Arnold Milstein, a professor of medicine, is the director of Stanford’s Clinical Excellence Research Center. Jeff Hancock is a professor of communication and director of the Stanford Center for Computational Social Science.