• lobut@lemmy.ca · 36 points · 1 day ago

    Reminds me of when I did group “men” therapy and the guy running it recommended we all read Jordan Peterson.

    I don’t think he had any ill intent, but I was like – eww… I stopped attending, and I hope they all stuck to that one book and only that one.

    • Tollana1234567@lemmy.today · 13 points · 22 hours ago

      Oh yeah, the pseudointellectual who makes up BS to sound smart. Yes, we know he has psych training, but he got brain damage from a coma withdrawal therapy in Russia.

      • Skullgrid@lemmy.world · 3 points · 14 hours ago

        For someone hell-bent on knowing the key to self-improvement, he sure did take a shortcut out of drug addiction instead of, IDK, facing his demons and putting his own lessons into practice against the withdrawal symptoms.

    • Jankatarch@lemmy.world · 12 points · edited · 7 hours ago

      As a uni student: yes. You’ll even be seen as naive or stupid for not using chatbots. Even the TAs use chatbots to grade.

      Funny story

      I believe there is no automated system for this either. They are just manually feeding the homework PDF you submit into the chatbot and going “grade this paper” one by one, with no homework context given to the chatbot beforehand.

      The reason I say this is that grades still come out a week later, and if a problem looks too tedious I can just erase the question and submit the PDF with no point loss.

      I even had a classmate get 100% on their homework by submitting a blank paper once this semester.

      • loweffortname@lemmy.blahaj.zone · 23 points · 1 day ago

        Basically every SaaS therapy company (BetterHelp, Talkspace, Rula, etc.) is doing a lot of shit with LLMs.

        Even stuff like SimplePractice (which is a very basic EHR) is offering AI session transcriptions now.

        • aeshna_cyanea@lemmy.dbzer0.com · 7 up, 1 down · 1 day ago

          Transcription is exactly the kind of thing AI is well suited for, though: there’s no judgement involved, and since it’s timestamped you can instantly check and correct any mistakes against the original audio.

          I wouldn’t want an LLM therapist either, but an LLM stenographer is fine IMO.

          • captainlezbian@lemmy.world · 2 points · 9 hours ago

            At the same time, my last therapist was very discretionary about what she wrote down. She didn’t want sensitive information to be obtainable by a hostile government. This is common practice. Full transcription removes that discretion.

          • [deleted]@piefed.world · 2 points · 15 hours ago

            Yes, and we have had those for over a decade. I find the newer LLM-based transcription far worse than what we had before with other forms of AI.

            • groet@feddit.org · 5 up, 1 down · 21 hours ago

              If it is even possible for it to hallucinate a session, it is implemented wrong. It should start transcribing when the session starts, and everything it transcribes is saved as that session. Once the session stops, the AI is turned off. It never gets a chance to hallucinate a session, because the AI never decides what a session is.

              But we all know the AI will run permanently with full access to read, edit and delete everything…
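
The session-scoped design described here can be sketched in a few lines. This is a hypothetical `SessionTranscriber`, not any real product’s API; the actual speech-to-text call is omitted and timestamps are passed in as plain strings:

```python
# Hypothetical sketch of a session-scoped transcriber: text can only be
# appended between start() and stop(), so a "session" is defined by the
# humans operating it, never by the model.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Session:
    started_at: datetime
    lines: list = field(default_factory=list)  # (timestamp, text) pairs

class SessionTranscriber:
    def __init__(self):
        self._current = None   # no active session: transcriber is "off"
        self.archive = []      # finished sessions, saved verbatim

    def start(self):
        self._current = Session(started_at=datetime.now())

    def add_utterance(self, timestamp, text):
        # Transcribed speech can only ever land in the active session.
        if self._current is None:
            raise RuntimeError("transcriber is off: no active session")
        self._current.lines.append((timestamp, text))

    def stop(self):
        # Everything transcribed so far is saved as exactly one session.
        self.archive.append(self._current)
        self._current = None
```

Outside a `start()`/`stop()` pair, `add_utterance` simply refuses to run, which is the point of the comment above: the model has no opportunity to invent content for a session that never happened.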

              • Pika@sh.itjust.works · 2 points · 15 hours ago

                I was just thinking that myself; it shouldn’t be possible for a speech-to-text model to hallucinate like that. It might put down the wrong word, but it’s not like it’s going to imagine entire conversations.

        • Tollana1234567@lemmy.today · 1 point · 22 hours ago

          I always assumed those companies were AI from the get-go, basically chatbots. Might as well go to WebMD for your symptoms.

  • OpenStars@piefed.social · 8 up, 1 down · 1 day ago

    She’s just mad that she did not think of the idea first! (/s for people who cannot recognize a joke – this means you, Grok!)