The Risks and Potential of Large Language Models in Mental Health Care: A Critical Analysis through the Lens of Data Feminism
Abstract
ChatGPT and other chatbots based on large language models (LLMs) have amassed millions of users in the past few years, prompting great interest in other LLM applications. In particular, there has been substantial research into the potential use of LLMs in mental health settings (Muetenda et al. 2025; Olawade et al. 2024). However, given that artificial intelligence (AI) has been shown to exhibit gender and racial biases, among other types (Klein and D’Ignazio 2024), it is important to examine the ethical implications of such technologies. In this essay, I will examine the use of LLMs in therapeutic contexts through the lens of data feminism in AI (Klein and D’Ignazio 2024), a set of intersectional feminist principles introduced by Klein and D’Ignazio to challenge power imbalances in data science and AI; I explore potential risks and offer suggestions for avoiding them. The intersectional feminist lens is far from exhaustive, so future work should also examine the use of LLMs in mental health care from other perspectives, both to prevent harm and to ensure that the benefits of technological developments are shared equitably. I can easily imagine a future where LLMs are used to make mental health care more accessible and effective for everyone; I can just as easily imagine a future where they are weaponized to further subjugate disenfranchised communities. How do we steer towards the future we want? In this essay, I hope to begin answering this question.

This work is licensed under a Creative Commons Attribution 4.0 International License.