To be fair, though, this experiment was stupid as all fuck. It was run on /r/changemyview to see if users would recognize that the comments were created by bots. The study’s authors conclude that the users didn’t recognize this. [EDIT: To clarify, the study was testing whether the bots could persuade the OP, but they did this in a subreddit where you aren’t allowed to call out AI. If an LLM bot gets called out as such, its persuasiveness inherently falls off a cliff.]
Except, you know, Rule 3 of commenting in that subreddit is: “Refrain from accusing OP or anyone else of being unwilling to change their view, of using ChatGPT or other AI to generate text, [emphasis not even mine] or of arguing in bad faith.”
It’s like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. “Obviously these are all brainwashed sheep who love the regime”, happily concludes the dumbest pollster in history.
Wow. That’s really fucking stupid.
I don’t think so. Yeah, the researchers broke the rules of the subreddit, but it’s not like every other company that uses AI for advertising, promotional purposes, propaganda, and misinformation will adhere to those rules.
The mods and community should not assume that just because the rules say no AI, people won’t use it for nefarious purposes. While this study doesn’t really add anything new we didn’t already know or assume, it does highlight how we should be vigilant and cautious about what we see on the Internet.
That story is crazy and very believable. I 100% believe that AI bots are out there astroturfing opinions on reddit and elsewhere.
I’m unsure if that’s better or worse than real people doing it, as has been the case for a while.
Belief doesn’t even have to factor; it’s a plain-as-day truth. The sooner we collectively accept this fact, the sooner we change this shit for the better. Get on board, citizen. It’s better over here.
I worry that it’s only better here right now because we’re small and not a target. The worst we seem to get are the occasional spam bots. How are we realistically going to identify LLMs that have been trained on reddit data?
Honestly? I’m no expert and have no actionable ideas in that direction, but I certainly hope we’re able to work together as a species to overcome the unchecked greed of a few parasites at the top. #LuigiDidNothingWrong
Err, yeah, I get the meme and it’s quite true in its own way…
BUT… This research team REALLY needs an ethics committee. A heavy-handed one.
As much as I want to hate the researchers for this, how are you going to ethically test whether you can manipulate people without… manipulating people. And isn’t there an argument to be made for harm reduction? I mean, this stuff is already going on. Do we just ignore it or only test it in sanitized environments that won’t really apply to the real world?
I dunno, mostly just shooting the shit, but I think there is an argument to be made that this kind of research and its results are more valuable than the potential harm. Though the way this particular research team went about it, including changing the study fundamentally without further approval, does pose problems.
Deleted by moderator because you upvoted a Luigi meme a decade ago
…don’t mind me, just trying to make the reddit experience complete for you…
that’s funny.
I had several of my Luigi posts and comments removed – on Lemmy. Let’s see if it still holds true.
That’s because your username is wrong. Your username is GreenKnight23@lemmy.world, but it should be GreenKnight23@lemmy.nz. That would fix your problem.
Removed by mod