

There are plenty of people and organisations doing stuff like this, and there are plenty of examples on HuggingFace, though typically it’s to get an LLM to communicate in a specific manner (e.g. this one trained on Lovecraft’s works). People drastically overestimate the compute time and resources it takes to train and run an LLM; do you think Microsoft could force their AI onto every single Windows computer if it were as challenging as you imply? Also, you don’t need to start from scratch: take a model that’s already robust and well developed and fine-tune it with additional training data, or, for a hack job, just merge a LoRA into the base model.
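For the LoRA-merge route, here’s a minimal sketch using the Hugging Face transformers and peft libraries (the model and adapter names are placeholders, not specific recommendations):

```python
# Minimal sketch: fold a LoRA adapter into a base model with peft.
# "base-org/base-model" and "you/your-lora-adapter" are placeholder names.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-org/base-model")
tokenizer = AutoTokenizer.from_pretrained("base-org/base-model")

# Load the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "you/your-lora-adapter")

# Merge the adapter weights into the base model so it runs as one plain model.
merged = model.merge_and_unload()

merged.save_pretrained("./merged-model")
tokenizer.save_pretrained("./merged-model")
```

That gets you a single model you can run locally like any other, without touching the original training pipeline.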
The intent, by the way, isn’t for the LLM to respond for you; it’s just to interpret a message and offer suggestions on what it means, or to rewrite it to be clearer (while still displaying the original).
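As a rough illustration of that flow (the model path and prompt wording are placeholders, not a finished design):

```python
# Sketch of the "interpret, don't reply" flow: the model explains and rewrites
# a message, and the original is always shown alongside its output.
from transformers import pipeline

generator = pipeline("text-generation", model="./merged-model")

def interpret(message: str) -> str:
    prompt = (
        "Explain in plain terms what the following message means, then "
        "suggest a clearer rewording. Do not reply on the sender's behalf.\n\n"
        f"Message: {message}\n\nInterpretation:"
    )
    return generator(prompt, max_new_tokens=200)[0]["generated_text"]

original = "np, l8r we can sync on the thing"
print(original)             # the original message is always displayed
print(interpret(original))  # the model's interpretation shown next to it
```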