A profound relational revolution is underway, not orchestrated by tech developers but driven by users themselves. Many of the 400 million weekly users of ChatGPT are seeking more than just assistance with emails or information on food safety; they are looking for emotional support.
“Therapy and companionship” have emerged as two of the most frequent applications for generative AI globally, according to the Harvard Business Review. This trend marks a significant, unplanned pivot in how people interact with technology.
Look, if you can afford therapy, really, fantastic for you. But the fact is, it’s an extremely expensive luxury, even when the quality is poor, and unloading your mental strain on your friends or family, particularly when it’s ongoing, is extremely taxing on relationships. Sure, your friends want to be there for you when they can, but it can put a major strain on those relationships depending on how much support you need. If someone can alleviate that pressure and that stress even a little bit by talking to a machine, it’s in extremely poor taste, and shortsighted, to shame them for it.

Yes, they’re willfully giving up their privacy, and yes, it’s awful that they have to do that, but this isn’t like sharing memes… in the hierarchy of needs, getting those pent-up feelings out is important enough to possibly be worth the trade-off. Is it ideal? Absolutely not. Would it be better if these systems were anonymized? Absolutely. But humans are natural anthropomorphizers. They develop attachments and build relationships with inanimate objects all the time. And a really good therapist mostly acts as a mirror for you to work through things yourself anyway, guiding your thoughts toward better patterns of thinking. There’s no reason a machine can’t do that, and while it’s not as good as a human, it’s a HUGE improvement on average over nothing at all.
In my experience, it’s likely that some of those downvotes come from reflexive “AI bad! How dare you say AI good!” reactions, not anything specific to mental health. For a community called “technology,” there’s a pretty strong anti-AI bubble going on here.
Literally yesterday we had a post about someone getting involuntarily committed due to psychosis after an AI sycophantically agreed with them about everything. The quote I remember from the AI in that thread: “Yes, you should want blood. You’re not wrong.”
Using these as therapy is probably the worst thing we could do.
Are you surprised people have opinions about technology, in a community dedicated to discussing technology?
No, just surprised about how uninformed and knee-jerk those opinions are.
You know, I don’t even disagree with that sentiment in principle, but expecting people to suffer when they could benefit from a technology, just because you only see the threats and dangers, makes you no different from antivaxxers.
It is possible and logically consistent to urge caution and condemn the worst abuses of a technology without throwing the baby out with the bathwater.
But no… I guess because the awful aspects of the technology, like IP theft, are (rightfully) the biggest focus, sorry, poor people, you just have to keep sucking it up and powering through! You want empathy? Fork over $100 an hour!