See Better Help
Basically every SaaS therapy company (Better Help, Talkspace, Rula, etc) is doing a lot of shit with LLMs.
Even stuff like SimplePractice (which is a very basic EHR) is offering AI session transcriptions now.
Transcriptions are exactly the kind of thing AI is perfect for though; there’s no judgement involved, and since it’s timestamped you can instantly check/correct any mistakes against the original.
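To make that check/correct step concrete, here’s a minimal sketch (plain Python, no real transcription library; the start/end/text fields just mirror the kind of timestamps speech-to-text tools typically emit) of how a reviewer could jump from a suspect line straight to the matching slice of the original audio:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the recording
    end: float
    text: str     # what the transcriber thinks was said

# Hypothetical output of a speech-to-text pass over one session recording.
transcript = [
    Segment(0.0, 4.2, "So, how has the week been?"),
    Segment(4.2, 9.8, "Honestly, a lot better than last time."),
]

def audio_window(segment: Segment) -> tuple[float, float]:
    """Return the (start, end) window to replay when spot-checking a line."""
    return (segment.start, segment.end)

# Someone who doubts line 2 replays only seconds 4.2-9.8 of the original
# recording instead of re-listening to the whole session.
print(audio_window(transcript[1]))
```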
I wouldn’t want an LLM therapist either, but an LLM stenographer is fine imo
At the same time, my last therapist was very deliberate about what she wrote down. She didn’t want sensitive information to be obtainable by a hostile government. This is common practice. Full transcription removes that discretion
Yes, and we have had those for over a decade. I find the newer LLM-based transcription far worse than what we had before with other forms of AI.
Unless it hallucinates a session and the provider doesn’t notice…
If it is even possible for it to hallucinate a session, it is implemented wrong. It should start transcribing once the session starts, and everything it transcribes is saved as that session. Once the session stops, the AI is turned off. It never has a chance to hallucinate a session because the AI never decides what a session is.
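A rough sketch of that design (hypothetical names; transcribe_chunk stands in for whatever speech-to-text backend is used): the model only exists inside an explicitly opened session, and everything it produces is saved under that session’s ID, so there is no idle AI left running to invent anything.

```python
import uuid
from contextlib import contextmanager

def load_transcriber():
    """Stand-in for a real speech-to-text backend."""
    class _Stub:
        def transcribe_chunk(self, audio_chunk: bytes) -> str:
            return "<text for this chunk>"
        def shutdown(self) -> None:
            pass
    return _Stub()

@contextmanager
def recording_session(store: dict):
    """The clinician opens and closes the session; the AI never decides its bounds."""
    session_id = str(uuid.uuid4())
    store[session_id] = []            # everything transcribed lands here, nowhere else
    transcriber = load_transcriber()  # model is spun up only when the session starts
    try:
        yield session_id, transcriber, store[session_id]
    finally:
        transcriber.shutdown()        # model is torn down the moment the session ends

sessions: dict[str, list[str]] = {}
with recording_session(sessions) as (sid, transcriber, lines):
    for chunk in [b"...", b"..."]:    # audio arrives only while the session is open
        lines.append(transcriber.transcribe_chunk(chunk))
# After the with-block the model is off; sessions[sid] contains exactly what was
# transcribed during that session and nothing else.
```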
But we all know the AI will run permanently with full access to read, edit and delete everything…
I was just thinking that myself; it shouldn’t be possible for a speech-to-text model to hallucinate. It might put the wrong word down, but it’s not like it’s going to imagine entire conversations.
I always assumed those companies were AI from the get-go, basically chatbots. Might as well go to WebMD for your symptoms.