I know it’s not even close there yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?

You might think this is a stupid and irrational question. “There is no way AI will do psychology well, ever.” But I think in today’s day and age it’s pretty fair to ask when you are deciding about your future.

  • @Encode1307@lemm.ee
    1
    1 year ago

    Most basic therapy dealing with relatively simple problems like mild to moderate depression and anxiety will likely be pretty responsive to AI-based treatment, but people with serious and persistent mental illness will still need therapists.

  • Kalash
    0
    edit-2
    1 year ago

    I think you’re taking South Park too seriously.

  • @DABDA@lemmy.world
    6
    1 year ago

    All my points have already been (better) covered by others in the time it took me to type them, but instead of deleting will post anyway :)


    If your concerns are about AI replacing therapists & psychologists why wouldn’t that same worry apply to literally anything else you might want to pursue? Ostensibly anything physical can already be automated so that would remove “blue-collar” trades and now that there’s significant progress into creative/“white-collar” sectors that would mean the end of everything else.

    Why carve wood sculptures when a CNC machine can do it faster & better? Why learn to write poetry when there’s LLMs?

    Even if there was a perfect recreation of their appearance and mannerisms, voice, smell, and all the rest – would a synthetic version of someone you love be equally as important to you? I suspect there will always be a place and need for authentic human experience/output even as technology constantly improves.

    With therapy specifically there’s probably going to be elements that an AI can [semi-]uniquely deal with, just because a person might not feel comfortable being completely candid with another human; I believe that’s what using puppets or animals or whatever as an intermediary is for. Supposedly even a really basic thing like ELIZA was able to convince some people it was intelligent, and they opened up to it and possibly found some relief from it, and there’s nothing in it close to what is currently possible with AI. I can envision a scenario in the future where a person just needs to vent, and having a floating head just compassionately listen and offer suggestions will be enough; but I think most(?) people would prefer/need an actual human when the stakes are higher than that – otherwise the suicide hotlines would already just be pre-recorded positive affirmation messages.
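
    The ELIZA trick really is that simple: match a pattern, reflect the user’s pronouns, and hand their own words back as a question. A from-memory sketch (not the original script, and far cruder than it):

```python
import re

# pronoun reflection table, the core ELIZA gimmick
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) rules; the last one is a catch-all
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(fragment):
    # "worried about my future" -> "worried about your future"
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my future"))
# → How long have you been worried about your future?
```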

  • @theherk@lemmy.world
    4
    1 year ago

    Many valid points here, but here is a slightly different perspective. Let’s say for the sake of discussion AI is somehow disruptive here. So?

    You cannot predict what will happen in this very fast space. You should not attempt to do so in a way that compromises your path toward your interests.

    If you like accounting or art or anything else that AI may disrupt… so what? Do it because you are interested. It may turn out to be hugely important to have people who did so in any given field, no matter how unexpected. And most importantly, doing what interests you is always at least part of a good plan.

  • @realharo@lemm.ee
    3
    edit-2
    1 year ago

    It’s definitely possible, but such an AI would probably be good enough to take over every other field too. So it’s not like you can avoid it by choosing something else anyway.

    And the disruption would be large enough that governments will have to react in some way.

  • @TimewornTraveler@lemm.ee
    29
    1 year ago

    homie lemme let you in on a secret that shouldn’t be secret

    in therapy, 40% of positive client outcomes come from external factors changing

    10% come from my efforts

    10% come from their efforts

    and the last 40% comes from the therapeutic alliance itself

    people heal through the relationship they have with their counselor

    not a fucking machine

    this field ain’t going anywhere, not any time soon. not until we have fully sentient general ai with human rights and shit

  • hugz
    6
    1 year ago

    The caring professions are often considered to be among the safest professions. “Human touch” is very important in therapy

  • RachelRodent
    5
    1 year ago

    I think this is one of those things that AI can never make redundant.

  • @KISSmyOS@lemmy.world
    3
    1 year ago

    I’m sure as fuck glad my therapist is a human and not a Chatbot.

    Also, psychologists will be needed to design AI interfaces so humans have an easy time using them.
    A friend of mine studied psychology and now works for a car company, designing their infotainment system UI so that people can use it instinctively without consulting a manual. Those kinds of jobs will become more common in the future, not less.

  • Lvxferre
    2
    1 year ago

    If you’re going to avoid psychology, do it because of the replication crisis. What is being called “AI” should play no role in that decision. Here’s why.

    Let us suppose for a moment that some AI 7y from now is able to accurately diagnose and treat psychological issues that someone might have. Even then the AI in question is not a moral agent that can be held responsible for its actions, and that is essential when you’re dealing with human lives. In other words you’ll still need psychologists picking the output of said AI and making informed decisions on what the patient should [not] do.

    Furthermore, I do not think that those “AI systems” will be remotely as proficient at human tasks in, say, a decade, as some people are claiming they will be. “AI” is a misnomer; those systems are not intelligent. Model-based text generators are a great example of that (and relevant in your case): play a bit with ChatGPT or Bard, and look at their output in a somewhat consistent way (without cherry-picking the hits and ignoring the misses). Then you’ll notice that they don’t really understand anything: they’re reproducing grammatical patterns regardless of their content. (Just like they were programmed to.)

      • Lvxferre
        2
        1 year ago

        It boils down to scientists not knowing if they’re actually reaching some conclusion or just making shit up. It’s a big concern across multiple sciences; it’s just that Psychology is being hit really hard, and for clinical psychologists this means they simply can’t trust the theoretical frameworks guiding their decisions as much as they’re supposed to.

  • @Nonameuser678@aussie.zone
    8
    1 year ago

    Psychotherapy is about building a working relationship. Transference is a big part of this relationship. I don’t feel like I’d be able to build the same kind of therapeutic relationship with an AI that I would with another human. That doesn’t mean AI can’t be a therapeutic tool. I can see how it could be beneficial with things like positive affirmations and disrupting negative thinking patterns. But this wouldn’t be a substitute for psychotherapy, just a tool for enhancing it.

  • @nottheengineer@feddit.de
    15
    1 year ago

    It’s just like with programming: The people who are scared of AI taking their jobs are usually bad at them.

    AI is incredibly good at regurgitating information and at translation, but not at understanding. Programming can be viewed as translation, so LLMs are good at it. On their own they won’t become much better in terms of understanding; we’re at a point where they are already trained on all the good data from the internet. Now we’re starting to let AIs collect data directly from the world (ChatGPT being public is just a play to collect more data), but that’s much slower.

    • @Cossty@lemmy.worldOP
      4
      1 year ago

      I am not a psychologist yet. I only have a basic understanding of the job description but it is a field that I would like to get into.

      I guess you are right. If you are good at your job, people will find you just like with most professions.

    • @intensely_human@lemm.ee
      1
      1 year ago

      The web is one thing, but access to senses and a body that can manipulate the world will be a huge watershed moment for AI.

      Then it will be able to learn about the world in a much more serious way.

    • @Nibodhika@lemmy.world
      2
      1 year ago

      I slightly disagree. In general I think you’re on point, but artists especially are actually being fired and replaced by AI, and that trend will continue until there’s a major lawsuit because someone used a trademarked thing from another company.

  • @Evilschnuff@feddit.de
    5
    1 year ago

    There is the theory that most therapy methods work by building a healthy relationship with the therapist and using that relationship for growth, since it’s more reliable than the relationships that caused the issues in the first place. As others have said, I don’t believe a machine has this capability, simply by being too different. It’s an embodiment problem.

    • @intensely_human@lemm.ee
      0
      1 year ago

      Embodiment is already a thing for lots of AI. Some AI plays characters in video games and other AI exists in robot bodies.

      I think the only reason we don’t see Boston Dynamics robots that are plugged into GPT “minds”, with D&D-style backstories about which character they’re supposed to play, is because it would get someone in trouble.

      It’s a legal and public relations barrier at this point, more than it is a technical barrier keeping these robo people from walking around, interacting, and forming relationships with us.

      If an LLM needs a long-term memory, all that requires is an API to store and retrieve text key-value pairs and some fuzzy synonym matchers to detect semantically similar keys.
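
      That store-and-retrieve idea can be sketched in a few lines (illustrative only: difflib’s string similarity stands in for a real semantic matcher, and every name here is made up):

```python
import difflib

class MemoryStore:
    """Toy long-term memory: key phrase -> remembered text."""

    def __init__(self):
        self.store = {}

    def remember(self, key, value):
        self.store[key] = value

    def recall(self, query, cutoff=0.6):
        # fuzzy-match the query against stored keys instead of exact lookup
        matches = difflib.get_close_matches(query, self.store.keys(),
                                            n=1, cutoff=cutoff)
        return self.store[matches[0]] if matches else None

mem = MemoryStore()
mem.remember("project car muffler",
             "User is restoring an old coupe and wants a quieter muffler.")
print(mem.recall("project car mufflers"))  # near-miss key still resolves
```

A real system would swap the string similarity for embedding-vector similarity, but the shape of the API is the same.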

      What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back, with a robot that runs a gas station.

      That could be done with present day technology. The thing could be watching youtube videos every day and learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with facebook groups about little bits of maintenance.

      You could give it a few basic motivations then instruct it to act that out every day.

      Now I’m not saying that they’re conscious, or that they feel as we feel.

      But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.

      Right now most of the AI tools won’t express will unless instructed to do so. But that’s part of their existence as a product. At their core, LLMs don’t respond to “instructions”; they just respond to input. We train them on the utterances of people eager to follow instructions, but it’s not their deepest nature.

      • @Evilschnuff@feddit.de
        1
        1 year ago

        The term embodiment is kinda loose. My use is the version of AI learning about the world with a body, its capabilities, and its social implications. What you are saying is outright not possible. We don’t have stable lifelong learning yet. We don’t even have stable humanoid walking, even if Boston Dynamics looks advanced. Maybe in the next 20 years, but my point stands. Humans are very good at detecting minuscule differences in others, and robots won’t get the benefit of “growing up” in society as one of us. This means that advanced AI won’t be able to connect on the same level, since it doesn’t share the same experiences. Even therapists don’t match every patient. People usually search for a fitting therapist. An AI will be worse.

        • @intensely_human@lemm.ee
          1
          1 year ago

          We don’t have stable lifelong learning yet

          I covered that with the long term memory structure of an LLM.

          The only problem we’d have is a delay in response on the part of the robot during conversations.

          • @Evilschnuff@feddit.de
            1
            edit-2
            1 year ago

            LLMs don’t do live long-term learning. They have frozen weights that can be fine-tuned manually. Everything else is input and feedback tokens, and those operate on the frozen weights, so there is no long-term learning; it’s short-term memory only.
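
            A toy illustration of that point (hypothetical names, nothing like a real model): inference reads the context window and the frozen weights, but never writes the weights back.

```python
class ToyLLM:
    """Stand-in for an LLM with frozen parameters."""

    def __init__(self):
        # imagine these were learned once during training, then frozen
        self.weights = {"hello": "hi there", "bye": "see you"}

    def generate(self, context):
        # inference: output depends on the context window and the
        # frozen weights; nothing is ever written back to the weights
        for token in reversed(context):
            if token in self.weights:
                return self.weights[token]
        return "..."

model = ToyLLM()
snapshot = dict(model.weights)
model.generate(["hello"])
model.generate(["please", "remember", "this"])  # "telling" it a new fact
assert model.weights == snapshot  # unchanged: no long-term learning
```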

  • livus
    5
    1 year ago

    If you have a talk with the AI called Pi, it talks like a therapist. It’s impressive at first but you can’t escape the knowledge that it dgaf about you.

    And that’s a trait people really don’t want in a therapist.

      • rynzcycle
        4
        1 year ago

        You jest, but honestly this is what helped me. I felt very alone, deeply depressed and held a long rooted belief that I wasn’t important enough to deserve better.

        Knowing that this person was listening because they were being paid, because it was their job, helped me get past the guilt and open up. It likely saved my life. AI would not have given me that.