Exactly. To your point, AI output is probabilistically the average opinion of everyone on the internet, so it shares the common biases of the general public, even with a bit of RLHF to “balance out” the models. It also probably doesn’t help to anthropomorphise them. They don’t have opinions; they just autocomplete based on prior input.
It seems pretty clear, after a few years of people developing AI psychosis, that LLMs are an addictive psychological hazard.