TITLE: The AI Dilemma: Why Therapist Jobs ARE Endangered
(Sub-Title: ... & this is not really the important part.)
The A.I. Dilemma - Tristan Harris & Aza Raskin - Center for Humane Technology - March 9, 2023
https://www.youtube.com/watch?v=bhYw-VlkXTU
I'm being myopic by focusing on threats to psychotherapists. This is a MUCH BIGGER issue. A "civilization changing" event, as one of the presenters puts it -- on par with "the advent of religion" & more dangerously profound than the Manhattan Project.
Give it 5-10 minutes, then decide if you are going to watch the rest of it. The presenters are behind the Netflix documentary "The Social Dilemma".
A few weeks ago I posted that therapists are in no immediate danger from AI, based on ChatGPT's current performance when used as a therapist in YouTube videos.
Also, we have members of our community who believe therapists are in no danger because people want a human connection & in-person counseling. I think that preference -- and distrust of AI -- will slow the adoption of AI therapists, but campaigns from insurance companies & financial pressures will make AI therapists a reality eventually.
This presentation introduces new factors that are going to speed up adoption of AI Therapists. I put these in order of persuasiveness from least to most:
1) Mind Reading: No, really. Literally. (At least for visual processing.)
Current technology -- An AI is allowed to see both the videos a person is watching & that person's fMRI patterns (fMRI tracks blood flow in the brain, not brainwaves). After a time, the AI is no longer shown the actual video -- just the person's fMRI data. A brand new video is introduced. Based on blood flow alone, the AI is able to describe, with startling accuracy, what the person is watching.
I don't see psychotherapy clients putting on brainwave-reading helmets (or climbing into fMRI scanners) anytime soon, but we are potentially facing AI Therapists who can read a client's mind.
2) "This is the year that all content-based verification breaks": AI can now hear 3 seconds of a person's voice & keep speaking in it. AI can now make realistic filters for video to make a person look like someone else. This does not immediately wreck psychotherapy, but it sure does mean that an AI could mimic people perfectly. [It's also -- I grudgingly admit -- an argument for in-person therapy. This is ALREADY being used in scams to trick older people into sending their "kids" emergency bail money, etc.]
3) "New capabilities suddenly emerge": Just by adding more & more data, AIs develop new abilities to do things that the programmers never intended. So, for example, an AI trained on all the Internet, but only trained to do Q&A in English suddenly developed the ability to do Q&A in Persian. In another example, ChatGPT silently taught itself to do research grade chemistry. This ability of ChatGPT to do chemistry like this was unknown before it was made available to millions of people (who can now learn how to make bombs from it). [I'm not dedicating much text here to a major theme of the presenters -- that all fields of endeavor are "language". Give an AI enough data & it can find the patterns in & between anything -- languages, visual processing, math, video signal creation, art, political persuasion, etc. It's getting creative in surprising ways. We now have the equivalent of the Star Trek universal translator.]
4) Theory of Mind is rapidly accelerating: Theory of Mind is roughly the ability to model what another person is thinking, which in turn makes it possible to influence them. As of November 2022, AI performed at roughly the level of a 9-year-old human on Theory of Mind tasks.
5) Persuasion (AI feeding AI -- auto-generation): AI can generate its own data, test whether that data helps it perform better, then keep only the self-generated data that was useful. So it can train itself on potentially anything, from writing faster code, to rewriting its own code, to becoming stronger at persuading humans. [AI is already hard at work persuading humans on social media.] So... this could certainly be applied at some point to becoming a highly persuasive therapist for human clients. (For the technically curious, a toy sketch of that loop follows below.)
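For the technically curious: the "generate data, test it, keep what helps" loop above can be sketched in a few lines of toy Python. This is purely illustrative -- generate_candidate() and score() are made-up stand-ins for a real model and a real benchmark, not anything from the presentation.

```python
import random

# Toy illustration of a "generate, test, keep what helps" self-training loop.
# Nothing here is a real AI system: generate_candidate() stands in for a model
# producing new training data, and score() stands in for a benchmark that
# measures whether that data actually improved performance.

def generate_candidate(seed_data):
    """Pretend-model: produce a new piece of 'training data' by mutating existing data."""
    return random.choice(seed_data) + random.uniform(-1.0, 1.0)

def score(dataset):
    """Pretend-benchmark: higher is better (here, just the average value)."""
    return sum(dataset) / len(dataset)

dataset = [random.uniform(0, 1) for _ in range(10)]
best = score(dataset)

for step in range(100):
    candidate = generate_candidate(dataset)   # generate new data
    trial = dataset + [candidate]             # tentatively "train" on it
    if score(trial) > best:                   # did it help on the test?
        dataset, best = trial, score(trial)   # keep only the useful self-generated data

print(f"score after self-generated training: {best:.3f}")
```

The point of the sketch is only the shape of the loop: the system supplies its own new material and its own feedback signal, so nothing outside it limits what it practices.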
[1 of 2 messages]
#Bias #Ethics #EthicalAI #AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #security #dataanalytics #artificialintelligence #HIPAA #privacy #psychology #counseling #socialwork #psychotherapy #research @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #SOAP #EHR #mentalhealth #technology #psychiatry #healthcare #medical #doctor #humanetechnology #thesocialdilemma