People Seek Mental Health Support From ChatGPT
With AI continuously advancing, the technology’s capabilities are expanding – even to the realm of mental health care.
AI-powered chatbots, most notably ChatGPT, have officially become a trend. These chatbots help users complete a wide range of tasks, from writing essays to composing music.
To gauge public opinion on AI in healthcare, Tebra surveyed 1,000 Americans. The study found that 1 in 4 Americans would rather talk to an AI chatbot than to a licensed therapist. Additionally, 80% of participants who had used ChatGPT for mental health advice believed it was an effective alternative to therapy.
Some people have taken to social media to share their experiences using AI chatbots for therapy. By presenting the technology with mental health questions or crises, they receive advice conveniently from their devices without paying for therapy sessions.
TikTok user Kyla began experimenting with AI and was impressed by how much it resembled a conversation with a human. Kyla said she lacked the time and money for a therapist at a point when she needed emotional support, so she turned to ChatGPT instead. According to Kyla, her conversations with the AI chatbot reminded her of attending actual therapy sessions.
“I enjoyed that I could trauma dump on ChatGPT anytime and anywhere, for free, and I would receive an unbiased response in return along with advice on how to progress with my situation,” Kyla told BuzzFeed News.
Despite the chatbots’ seemingly effective responses to mental health concerns, psychologists emphasize that substituting AI for traditional therapy is not yet safe. ChatGPT itself warns users that the technology “may occasionally generate incorrect information” or “may occasionally produce harmful instructions or biased content.”