4 principles to guide AI in supporting mental health

Mental health disorders are a pressing issue: one in every two people globally will develop a mental health condition at some point. Unfortunately, far too few people have access to high-quality mental healthcare. As a psychologist, I’ve seen this chasm between demand and supply result in people falling through the cracks, with devastating outcomes. At Google, we’re committed to doing our part to support mental health, and we recognize that AI has the potential to help address this need by expanding access to education, assessment and intervention. For example, AI models trained on mental health data can generate resources that help with tasks such as training providers, making diagnoses and implementing interventions. But AI can only be useful if we approach this opportunity — and challenge — in the right way.

To explore both sides of this equation, I worked with fellow researchers to publish "The Opportunities and Risks of Large Language Models in Mental Health," a new paper in JMIR Mental Health. In our discussions, we identified four key ways to think about creating AI that can support people’s mental health around the world. Given the barriers to accessing high-quality mental healthcare, we see real potential for AI despite the limitations of even the best-trained technology today.

Focus on responsible development

Researchers and developers need to design and test AI models ethically and responsibly. As just one example, these models should only perform clinical tasks when they can handle them at least as well as human providers. To reach that threshold, AI models need to be fine-tuned for mental health. It's also essential to test models to make sure they're reliable (perform consistently) and valid (perform in line with evidence-based practice). For instance, if AI is going to answer people's mental health questions or support therapists in providing treatments, the model should be safe, reliable and accurate.
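
To make the reliability and validity checks concrete, here is a minimal sketch in Python. It assumes a hypothetical generate_response(prompt) wrapper around the model being evaluated (not an API from the paper or any specific library): reliability is approximated as consistency across repeated runs of the same prompt, and validity as agreement with clinician-vetted reference answers.

```python
from collections import Counter

def generate_response(prompt: str) -> str:
    """Hypothetical wrapper around the model under evaluation."""
    raise NotImplementedError  # placeholder, not a real API

def reliability_score(prompt: str, runs: int = 10) -> float:
    """Reliability proxy: fraction of runs that return the most common answer."""
    answers = [generate_response(prompt) for _ in range(runs)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / runs

def validity_score(cases: list[tuple[str, str]]) -> float:
    """Validity proxy: fraction of prompts whose answer matches a
    clinician-approved reference answer (exact match, for simplicity)."""
    correct = sum(
        1 for prompt, reference in cases
        if generate_response(prompt).strip().lower() == reference.strip().lower()
    )
    return correct / len(cases)
```

In practice, exact-match scoring would likely be replaced by clinician ratings or rubric-based evaluation; the point is simply that both consistency and correctness need to be measured before a model takes on clinical tasks.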

Advance mental health equity

Unfortunately, there are inequities in who receives which mental health diagnoses, along with disparities in who has access to different kinds of mental healthcare. Stigma can also stand in the way of seeking support.

It’s imperative to train models to reflect the diversity of the people who will interact with them; otherwise, you risk producing models that perform differently for different groups of people. It’s also important to use frameworks that can assess AI models’ performance for equity-related problems. And when researchers and developers do identify problems, they should communicate those issues clearly and rework the models as needed until they can ensure equitable performance.
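
As one illustration of what such a framework could look like, here is a minimal sketch of a disaggregated evaluation. It assumes test cases have already been scored for correctness and labeled with a (hypothetical) demographic group field; the goal is simply to surface groups whose accuracy trails the overall rate.

```python
from collections import defaultdict

def group_accuracy(results: list[dict]) -> dict[str, float]:
    """Accuracy per group, given results like {"group": "...", "correct": bool}."""
    totals: dict[str, int] = defaultdict(int)
    hits: dict[str, int] = defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["correct"])
    return {group: hits[group] / totals[group] for group in totals}

def flag_equity_gaps(results: list[dict], tolerance: float = 0.05) -> list[str]:
    """Groups whose accuracy falls more than `tolerance` below overall accuracy."""
    overall = sum(int(r["correct"]) for r in results) / len(results)
    return [
        group for group, acc in group_accuracy(results).items()
        if overall - acc > tolerance
    ]
```

Flagged groups would then feed the rework loop described above: communicate the gap clearly, retrain or adjust the model, and re-evaluate until performance is equitable.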

Protect privacy and safety

Privacy and safety are paramount in mental health-related AI. Anyone interacting with AI for mental health reasons should first provide informed consent, including an understanding of what privacy they can reasonably expect and any limits to those expectations. Given the sensitivity of personal mental health information, the developers of mental health AI models should design those models to comply with relevant data protection laws in their region (e.g., in the United States, the Health Insurance Portability and Accountability Act [HIPAA]).

When it comes to mental health, safety also includes directing people to human providers and higher levels of care when symptoms worsen or when risks of serious mental health concerns like self-harm arise. Ultimately, appropriate trust is earned only when AI models keep mental health information private and when people are kept safe.

Keep people in the loop

People should provide oversight and feedback at every stage of developing and deploying AI to support mental health.

Rigorous, ongoing human involvement can help make AI models for mental health more accurate and uncover potentially problematic responses. For instance, a model can suggest wording for a mental health practitioner to use in their clinical notes, but the practitioner should still decide whether to include that language.
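
A minimal sketch of that kind of human-in-the-loop gate, assuming a hypothetical suggest_note_wording(context) wrapper around the model: the suggestion stays a draft, and nothing enters the clinical note unless the practitioner explicitly approves it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    text: str
    approved: bool = False

def suggest_note_wording(context: str) -> Suggestion:
    """Hypothetical wrapper that returns the model's draft wording."""
    raise NotImplementedError  # placeholder, not a real API

def finalize_note(draft: Suggestion, practitioner_approves: bool) -> Optional[str]:
    """Include AI-suggested wording only if the practitioner signs off."""
    if practitioner_approves:
        draft.approved = True
        return draft.text
    return None  # the practitioner writes their own wording instead
```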

When it comes to responsible use and equity, researchers and developers should actively seek feedback from individuals who reflect the diverse populations they’re aiming to help, including clinicians and people with lived experience of mental health concerns. Through this kind of collaboration, people are able to co-define the role AI plays in mental healthcare; help to identify and correct biases; and ensure AI-generated content is inclusive, culturally appropriate and accurate.

We know technology can only do so much. However, I believe that with these safeguards in mind, AI can help close the ever-widening gap between the need for mental health services and the availability of quality mental health information and providers.
