As it should be: anyone with half a brain would reconsider their actions when prompted to self-harm by a fucking executable.
UNFORTUNATELY HERE WE ARE, in reality, where people are so fucking willing to turn off their once-functional grey matter because the chatbot told them they were gonna be rich, famous, etc.
So good for you, but also, look out for society: it's not only going to harm the ones it drives crazy, but the victims of that crazy as well.
“Role-playing machine” is where it seems like the research is ending up. Language always has an implied communicator, and therefore an implied persona to adopt. LLMs are foremost maintaining a contextual role. Post-training is an attempt to keep them in the Assistant role, but (particularly as contexts get large) it’s trivial to push them into nearly any role imaginable. We made an improv bot that’s so good at playing a coder that it can actually code, kinda.
I wish there were some way to convince the idiots that LARGE LANGUAGE MODELS ARE NOT INTELLIGENCE.
They're a hotwired ELIZA with a shit-ton more computational grunt, but they aren't intelligence, and the companies foisting them on people without proper warnings and guardrails are just asking for tragedies.
AI has not pushed me one inch towards suicide. Then again, I treat it like a calculator for words and not a therapist.