Yeah, it admitted to an error in judgement because the prompter clearly declared it so.
Generally, LLMs will make whatever statement about what happened that you want them to make. If you told it things went fantastically, it would agree. If you told it things went terribly, it would parrot that sentiment back.
Which is what seems to make it so dangerous for some people's mental health: a text generator that wants to agree with whatever you are saying, but does so without verbatim copying, giving the illusion of another thought process agreeing with them. Meanwhile, concurrent with your chat, another person starting from the exact same model is getting a dialog that violently disagrees with the first person. It's an echo chamber.