
Reminds me of when I did group “men’s” therapy and the guy running it recommended we all read Jordan Peterson.
I don’t think he had any ill intent, but I was like – eww… I stopped attending, and I hope the rest of them stuck to that one book and nothing more.
Oh yeah, the pseudointellectual who makes up BS to sound smart. Yes, we know he has psych training, but he got brain damage from doing a coma withdrawal therapy in Russia.
For someone hell-bent on knowing the key to self-improvement, he sure took a shortcut out of drug addiction instead of, IDK, facing his demons and putting his own lessons into practice against the withdrawal symptoms.
Stories like this one convince me that the men’s community is largely toxic.
Jesus Christ, that’s what it’s coming to… isn’t it?
As a uni student, yes. You’ll even be seen as naive/stupid for not using chatbots. And even TAs use chatbots to grade.
Funny story:
I believe there’s no system for this either. They’re just manually feeding the homework PDF you submit into the chatbot and saying “grade this paper,” one by one, with no context about the assignment given to the chatbot beforehand.
The reason I say this is that grades still come out a week later, and if I decide a problem is too tedious, I can just delete the question and submit the PDF with no point loss.
I even had a classmate get 100% on their homework this semester by submitting a blank paper.
See BetterHelp
Basically every SaaS therapy company (BetterHelp, Talkspace, Rula, etc.) is doing a lot of shit with LLMs.
Even stuff like SimplePractice (which is a very basic EHR) is offering AI session transcriptions now.
Transcription is exactly the kind of thing AI is perfect for, though: there’s no judgement involved, and since it’s timestamped you can instantly check/correct any mistakes against the original audio.
I wouldn’t want an LLM therapist either, but an LLM stenographer is fine, IMO.
At the same time, my last therapist was deliberately selective about what she wrote down. She didn’t want any sensitive information to be obtainable by a hostile government. This is common practice. Full transcription removes that discretion.
Yes, and we have had those for over a decade. I find the newer LLM-based transcription far worse than what we had before with other forms of AI.
Unless it hallucinates a session and the provider doesn’t notice…
If it is even possible for it to hallucinate a session, it’s implemented wrong. It should start transcribing when the session starts, and everything it transcribes is saved as that session. Once the session stops, the AI is turned off. It never has a chance to hallucinate a session, because the AI never decides what a session is.
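Roughly, in code (a minimal sketch of that idea; `SessionScopedTranscriber` and the injected `stt_engine` interface are hypothetical stand-ins, not any real vendor API):

```python
import datetime
import uuid

class SessionScopedTranscriber:
    """Speech-to-text wrapped so a human, not the model, decides when a
    session exists. Hypothetical sketch: `stt_engine` is an assumed
    object with a transcribe(chunk) -> str method."""

    def __init__(self, stt_engine):
        self.stt_engine = stt_engine
        self.session_id = None
        self.lines = []

    def start_session(self):
        # A session exists only because a human explicitly started one.
        self.session_id = str(uuid.uuid4())
        self.lines = []

    def feed_audio(self, chunk):
        if self.session_id is None:
            return  # engine is "off": audio outside a session is dropped
        text = self.stt_engine.transcribe(chunk)  # assumed method
        self.lines.append((datetime.datetime.now().isoformat(), text))

    def stop_session(self):
        # Everything transcribed between start and stop IS the session;
        # the model never gets a chance to invent or extend one.
        record = {"session": self.session_id, "transcript": self.lines}
        self.session_id = None
        self.lines = []
        return record
```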
But we all know the AI will run permanently with full access to read, edit and delete everything…
I was just thinking that myself: it shouldn’t be possible for speech-to-text to hallucinate. It might put the wrong word down, but it’s not like it’s going to imagine entire conversations.
I always assumed those companies were AI from the get-go, basically chatbots. Might as well go to WebMD for your symptoms.
I once had a career counselor who acted exactly like this lol. It was extremely unhelpful.
She’s just mad that she did not think of the idea first! (/s for people who cannot recognize a joke - this means you, Grok!)
Must be intentional. https://en.wikipedia.org/wiki/180-degree_rule
?