The beginning of the article is pretty weak, especially Masnick kinda defending addictive design:
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
But I gotta say, it does seem like this could set a dangerous precedent. If it becomes easy to file cases for design decisions on the platform…
One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.
The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”
I don’t see any of the people celebrating this decision discussing this. Perhaps it’s a misrepresentation by the author, since I can’t find the actual decision text.
This is going to harm small non-corporate websites, not just social media, far more than Facebook or TikTok. “Harmful content” is also going to include stuff like LGBTQ content, especially anything trans-related, and ‘antisemitism’ (but probably not actual antisemitism).
The quote is from New Mexico AG Torrez.
https://nmdoj.gov/press-release/new-mexico-department-of-justice-wins-landmark-verdict-against-meta/
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
This feels like an awful argument to make. It’s not the presence of those things that make Meta and co so shit, it’s the fact that they provably understood the risks and the effects that their design was having, knew that it was harming people, and continued to do it anyway. I don’t care if we’re talking about a little forum run by a Grandma and Grandpa talking about their jam recipes; if they know that they’re causing harm and don’t change their behavior, they should be liable.
The harm doesn’t come from infinite scroll, autoplay, or algorithmic recommendations in a vacuum.
But it has been shown statistically that when you gamify the system and the content is harmful to consume in excess, the combination of those two factors is what makes it dangerous.
Tricking the brain into doing something harmful to itself by gamification is the problem. The algorithm, auto play and infinite scroll are just mechanisms to facilitate that. Novelty only plays a small part in that. The algorithm by itself doesn’t provide a dopamine hit. The infinite scroll by itself doesn’t provide a dopamine hit. The auto play feature by itself doesn’t cause a dopamine hit.
Even when you combine all three, the dopamine hit won’t come if the content being pushed isn’t sufficient to cause a rush of dopamine. And that dopamine rush often comes from things like upvotes and downvotes, and badges, and achievements. Follower counts and other metrics that the individual users use to get dopamine are being weaponized against them to make money. And it was intentional on the part of Meta execs.
“We designed, marketed, and sold the gun, but we didn’t think anyone would use it.”
Now now now, ladies and gentlemen, I’m just a simple country lawyer, and I sure love me some mashed potatoes. I love mashed potatoes; I eat them every day. I love mashed potatoes so much that, hell, I’ll have them with anything. I also love my gun, but I wouldn’t eat my gun! [Hold for laughter] Now what if I had mashed potatoes with my gun? Not like this: [picks up revolver from displayed evidence and pantomimes using it as a fork, putting the barrel all up in his mouth. Jury roars with laughter.] No. Imagine that I’m stuffing my mashed potatoes into this gun! There’s mashed potatoes in the barrel, mashed potatoes in the chambers, mashed potatoes gunking up the cylinder and hammer… Do you think this gun will fire? Of course not! I could point my mashed potato gun at anyone in this court [muzzle sweeps the jury], and no one would even flinch. How could something that can be defeated by MASHED POTATOES be dangerous? Hell, how could a person holding such an impotent device have any sense of danger? Have you ever killed anybody with mashed potatoes? Have YOU?? We all know opposing counsel’s argument, that my client “intentionally shot, at point blank” his own best friend, is ridiculous. A best friend is someone you eat mashed potatoes with! Not murder and then “steal” their suspiciously unopened Star Wars memorabilia… This is why you need to return a verdict of “guilty” and award my client $50 million from the so-called “victim’s” family for psychological and emotional damages, as well as the cost of selflessly grinding up and eating his best friend’s body to save the family funeral costs. The prosecution rests.
It’s like if someone had a forum where insurrectionists were discussing how to build bombs and where they were going to use them, and the owners had an internal meeting where they said, “Hey, we’re hosting some pretty awful people, should we maybe report them or shut this down?” and the answer was, “Nah, they’re paying users, and we want their money.”
Pretty sure Section 230 wouldn’t protect them, either.
Yeah this feels very much like, “censor content, but don’t change Meta’s practices”
Which raises the question: does the author know what they’re cheering for?
You can bet they do.
It’s like he’s describing a slot machine with unpainted wheels, leaving out the context that it’s in a casino with a big “paint me and enjoy a share of the profit” sign above it.
The social media machine was designed to be a self-serve addiction generator. It intentionally used every trick it could legally get away with.
Also they can now generate content without users, which they already do a lot on Facebook.
I don’t know. Seems like self-control issues. People can get addicted to anything: shopping, sex, internet use, work, gaming, exercise. I also disagree with prohibitions on gambling, drug use, prostitution: it’s their money, their body, etc.
Penalizing systems of communication & information delivery seems like overreach. The harm seems phony & averted by basic self-control.
Addictive Personality is a proposed set of traits that makes sufferers more vulnerable to developing addictive behaviors, including things like gambling or social media. Does it help to frame it in a different light for you if you think of it as those companies exploiting vulnerable peoples’ disorders to extract money from them?
Telling those people to just have self control is like telling someone with depression to just stop being sad.
Does it help to frame it in a different light for you if you think of it as those companies exploiting vulnerable peoples’ disorders to extract money from them?
Not at all: we don’t go winning lawsuits against any of those companies for promoting themselves to appeal to the consumer just because the dysfunctional among us may overconsume. Liberty comes with accepting responsibility for the reasonably foreseeable consequences/risks of our choices, or no one will be able to realize liberty when someone makes their responsibility everyone else’s duty. Society can’t reasonably be expected to cater to everyone’s irrational/dysfunctional manifestations & whims. The legal standard is the reasonable person, not the dysfunctional one. Moreover, the existence of children doesn’t imply we need to childproof all of society: people are still entitled to liberty in their adult activities & vices.
When risks are open & obvious, such as the overconsumption of certain foods & legal substances, that’s generally viewed as a matter of personal choice rather than unreasonably dangerous product defect. Even when kids grow obese from overeating junk food, blame primarily lies with whoever provides them that food rather than with the product itself, no matter how appealing the design of the food, the design on the container, or its advertisements. Especially with the latest wave of moral panic over social media, the risks & dysfunctions of obsessively overconsuming social media or any information service to the extent it impairs us are open & obvious. Parents giving their children these devices, observing excessive attachment, and not cutting them off bear considerable responsibility.
Information & devices to view it are generally benign & noncoercive. People use these services, because some find them useful & engaging to their interests. Those features that effectively meet user demand for engaging information offer legitimate utility to a reasonable person without impairing them. Such features aren’t defects, and “fool-proofing” them would hamper utility to functional adults who can deal with the “dangers” of attention-grabbing information.
However, even supposing such features defectively make the system unreasonably dangerous in a reasonably foreseeable manner, that only demands that service providers provide fair warning. Once duty to warn has been met, users are reasonably aware of risks and responsibility shifts to risk-takers or parents who give children access despite reasonably knowing the risk.
Telling those people to just have self control is like telling someone with depression to just stop being sad.
We can’t rearrange all of society just because some people have depression. Liberty means not imposing on others issues we should be dealing with ourselves or through appropriate services specifically for that.
Parents giving their children these devices, observing excessive attachment, and not cutting them off bear considerable responsibility.
While I do agree that parents should bear the brunt of the responsibility here, you must realize that kids are resourceful and no amount of parental oversight will stop a determined kid from accessing this content. Parents aren’t in their presence 24/7, and just like a kid whose parents deny them candy can find plenty of ways to obtain it without their parents knowing, the same is true for social media use. It’s the old adage that the more you tighten your grip, the more slips through your fingers.
liberty
You keep using that word, but this isn’t really about personal freedoms at all. It’s about companies that saw that their product was causing harm, and actively made the decision to continue promoting that harmful product in the name of profits. Their products were specifically engineered to cause these outcomes, and you’re defending their right to do that. Do you just propose we allow companies to do whatever they want in the name of profits, no matter the cost to society? If not, where do you draw the line? How much harm do they have to knowingly cause before you think it’s too much?
When risks are open & obvious, such as the overconsumption of certain foods & legal substances, that’s generally viewed as a matter of personal choice rather than unreasonably dangerous product defect.
We restrict alcohol and cigarette use by underage people, too, actually, because their effects are known to be harmful, so I’m not sure what point you’re trying to make here. Nobody’s talking about making social media use illegal for adults.
Basically, I think you’re arguing against social media restrictions for kids, which is fine, but that’s a completely different discussion. It’s related, but it’s not what this article is about: this article is about holding corporations responsible for bad behavior. If that isn’t what you want to discuss, why are you here?
However, even supposing such features defectively make the system unreasonably dangerous in a reasonably foreseeable manner, that only demands that service providers provide fair warning. Once duty to warn has been met, users are reasonably aware of risks and responsibility shifts to risk-takers or parents who give children access despite reasonably knowing the risk.
Okay, I think you’re just not understanding the situation here. Meta did research on the effects of social media. They found that it was harmful. Even after determining that, they continued to promote it as not harmful. Zuckerberg even testified that evidence that social media was harmful didn’t exist, after they had found evidence that it was. This all came to light because of whistleblower testimony. So even if we accept your premise here, that duty to inform was not met, and that’s part of what’s at issue here.
Or telling someone stupid to be more clever, as the case may be.
Local wannabe crack dealer Mike Masnick says crack isn’t harmful, life without it would be boring. More at 11
This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Bull fucking shit. This is not about platforms being held responsible for user content. This is about adding points and badges and achievements and all kinds of things designed to reward engagement with dopamine.
The author’s example, where all content is videos of paint drying, would absolutely be addictive if the platform added an achievement for watching 10 different colours. Or: Congratulations, you’ve watched paint dry for 100 hours! As a reward, you get a new fancy emote! THAT is what these platforms do, and that is what is addictive. And that is what they’ve been convicted for.
And it is not a loophole to get around Section 230, as the author claims.
I’m not disagreeing with you when I say this; I’m just not on social media other than Lemmy and YouTube at this point, so I am out of the loop. What are these sites doing that gamifies watching content? I get all the other crap for posting content, like likes and views. It incentivises content producers. How are viewers getting the “likes and views” equivalent on Facebook?
What are these sites doing that gamifies watching content?
adding points and badges and achievements and all kinds of things designed to reward engagement with dopamine
Let’s not forget the years of literal psychological experiments that Meta conducted on its users to find out exactly what factors led to higher engagement.
This isn’t a simple message board. This is a highly-engineered, personalized content delivery system with the goal of serving as many ads as possible.
Surprise surprise. If you go through Techdirt’s archives, you can see Mike Masnick has spent thousands of words losing his shit any and every time Facebook has faced ANY criticism. I don’t know if he has a financial interest in them (like he does with Bluesky), but the moment someone suggests reining them in, here comes Masnick to defend one of the richest, most lawyered-up companies around.
Mike Masnick is on the Bluesky board of directors. Could this position be affecting his judgment on this specifically? Because usually I expect Techdirt and Mike himself to be much more reasonable.
Yes, of course. Bluesky is also social media and so the precedent set by these cases will apply to it. Besides, knowledge of a subject does tend to affect your judgment.
Bluesky is also social media
So is Lemmy…
Yes, but everyone on Lemmy knows that the law only applies to the bad guys.
I was wondering the other day if Lemmy or Bluesky have any algorithms that are actively trying to keep users engaged?
Technically it does use ranking algorithms, but they’re for sorting and surfacing content rather than for a modern “engagement optimization” system like a recommendation feed designed to maximize time spent.
Cool thing about Lemmy is you can just read the code and find out
IIRC somewhere they also explained in plain English what the sorting methods do. My layman brain thinks that’s a kind of algorithm.
Kindly correct this layman if I’m misunderstanding :)
I’m also a layman, but I have read some discussions about this exact comparison. Essentially, the big mainstream sites often have personalized algorithms that learn and adapt specifically to each user, to feed them whatever junk-food content keeps them engaged. Algorithms on things like base Lemmy, or maybe Reddit in the past, just have a sort function, like Excel, that props up posts with more likes or more comments. You can see what other people are interested in, but it’s not targeting YOU.

The predatory targeting algorithm can put a person into a self-fulfilling echo chamber that in some ways resembles psychosis, and this could naturally evolve into actual psychosis for individuals. The old verbiage of “touch grass” was the prescription for fighting the effects, but I think it’s a lot harder to “touch grass” when people are increasingly online and have fewer and fewer avenues to get out of their own echo chamber while staying online almost exclusively.

I’m not an expert, and the people I got this info from have no credentials I can source, but the logic seems sound to me. Anyone else with better credentials should weigh in if I’m wrong.
The Internet went from globalizing us to partitioning us pretty suddenly, and I think we are seeing the effects now.
Normally, I am all for Techdirt’s takes. But I think this one is off the mark a bit, because I legitimately think that infinite scroll and auto play are insidious, and actually harmful enough to be treated as a dangerous design decision.
The whole point of Section 230 is that communications companies can’t be held responsible for harmful things that people transmit on their networks, because it’s the people transmitting those harmful things that are actually at fault. And that was reasonable in the initial stages of the Internet, when people posted on bulletin boards (or even early social media) and harmful content had a much smaller reach. People had to “opt in”, essentially, to be exposed to this content, and if they stumbled on something they found objectionable they could easily shift their focus.
But the purpose of the infinite scroll and auto play is to get people hooked on content. The algorithms exist to maximize engagement, regardless of the value of that engagement. I think the comparison to cigarettes is particularly apt. They are looking to hook people into actively harmful behaviors, for profit. And the algorithms don’t really differentiate between good engagement and harmful engagement. Anything that attracts the user’s attention is fair game.
The author’s points regarding how these rulings can be abused are correct, but that doesn’t negate how fundamentally harmful these addictive practices are. It will be up to lawmakers to make sure that the laws are drafted in such a way that they can be applied equitably… (So maybe we’re screwed after all…)
“For the children” tech laws should all be abolished. Why should I be burdened because you can’t be bothered to raise your own damned kids properly?
You’re right, because kids have been shown to listen to the parents all the time and have never had problems handling adult situations when their parents aren’t around 100% of the time. Even amazing parents raise kids who do stupid shit. And once these amazing parents aren’t around their kid 100% of the time, those kids are still kids and will make bad decisions. This is especially true when it is something that literally every person around them is doing (adults, kids, friends, celebrities).
Sure, you are correct that parents can’t be there hovering at every moment to correct their kid every time they make a mistake. At this point, it is easier to put controls that actually work on any internet-connected device that you give them than to police any shenanigans they could get up to outside of supervision. Give them a tablet with parental controls. It will be a better control than for the real-life equivalent, like them going to the corner to buy drugs or whatever. It’s never been easier for a parent to control their child’s online consumption than now, and it will only get better. The offline risks aren’t really changing the same way.
We all did dumb shit as kids, but tech wasn’t anywhere near what it is now.
These platforms need to be punished and held to account for the pervasive technology they have designed for profit; these things (FB, Insta, TikTok, etc.) shouldn’t be able to exist in the first place in their current state. There were no guard rails put in place. Just like with the flood of AI, technology moves so much quicker than legislation can keep up, and companies do really shitty things with that.
I believe it starts at a parenting level, but it’s much more difficult to manage these days compared to 20 years ago. Age verification bullshit is not the answer, but parents need to be given some form of help against these fuckers and their incredibly easy-to-access addiction machines.
I have a question: what if it’s not just at a parenting level? What if it’s also at a school level? Because I think there is at least partially a disconnect between media and internet literacy and people of all ages, including children and parents.
I think we’re going to need such skills going forward and that there exist places in the world where students are being taught such things and are benefiting from them significantly.
Yet the immediate knee jerk reactions seem to be blame the parents and blame the companies that facilitate the access to the content.
It doesn’t have to be a parents-by-themselves-against-the-world system. But it also can’t just be a “companies protecting the children” system, because that’s not what companies do or are for. The need to maintain a profit margin flies directly in the face of the aim to hold companies responsible, and the laws seem intent on capping the monetary consequences of a breach of the law.
I do feel that the least a parent should be required to do, before complaining to a governing body that someone else is “harming” their child, is show that they have done their due diligence to protect said child. We punish parents for willful negligence and child endangerment all the time. I don’t understand why this is different, but I also wonder if there are other options for educating both children and adults that could help the situation significantly.
I think you make some good points here, but just for context, I do think that there is a level of responsibility on the parents here in combination with the companies. There are plenty of “online literacy” classes that I think would be appropriate for adolescent education. I’m the unfortunate beneficiary of needing to master cursive as a class one year and then typing the next year. Schools would be more beneficial if they included teaching kids internet literacy. They can probably drop some of the old stuff. They also don’t teach several other things, like financial literacy, in many situations (despite heavy capitalist leanings in real life). The education system sucks, but that is not an excuse to let iPad kids control my freedoms, and the root cause for age verification has never been about protecting children in the first place.
I absolutely agree that parents do play a role and have some responsibilities for both their and their children’s internet literacy, as well as for what their children access on the internet. I also agree that companies bear some responsibility (for making their platforms addictive on purpose in order to make money off of people they already know are underage).
I just really want to put forth other ideas for fixing this problem that don’t involve companies being forced by law to enact ID verification when they can’t be trusted to safeguard such information and it feeds into the information database they already have, which will more than likely be used to violate the privacy of their users.
If the government absolutely must get involved, making it illegal to produce and give access to a platform found to be addictive would be a start. But so would media and internet literacy education, both of which are solutions that don’t violate the privacy of minors or adults.
Digital media literacy is part of the education system in Denmark and some other European countries and it’s been beneficial to their populace. I think it could be a good solution.
deleted by creator
I guess in response to your last paragraph, the issue is the predatory nature of the attention addiction machines these companies make.
You could compare it to a child that got into a van that had “free candy” written on the side. The door was open; if you assume someone was standing next to the van asking the kid to get in, that would be advertising. Now the kid gets abducted. Their “attention” is held hostage, in the case of social media etc.
Now, would the parents have had to tell the kids not to get into a van with “free candy” written on it for them to be able to report it to the police? Bad luck otherwise? Now what if every month a new van rocks up with more bells and whistles: it’s a different colour, it’s got flames down the side, whatever. The point is it’s different and cool and more appealing each time. More kids go missing. The “predators” have figured out what makes these kids tick and what makes them more likely to get in the van every time.
It’s a bit of an out-there and confronting comparison, but really, these companies are preying on your mental instead of your physical, which apparently is fair game. They are still predators.
They know the harm their platforms cause, they suppress studies that report that harm, they cover it up, they fight tooth and nail and spend millions lobbying government to let them continue to do it.
Back on track, sorry. Schools are also responsible, but you run into the same issues once companies start targeting school kids like Google did with Chromebooks: the shittest PCs, sold at a loss, just so they could attempt to hook the younger generation into their ecosystem of surveillance and advertising early.
Companies will NEVER protect the children. They will only ever protect shareholders, profits and their pedo CEOs.
Real change will only ever come from real (not sponsored) education, government legislation that isn’t bullshit (I don’t know what this would look like, but ID checking isn’t it), and holding the tech bros increasingly accountable for their fucked-up apps.
So, for the “it’s the parents fault” bit I’ll say this. Parents are the arbiters of Internet access in their homes. If that van with “Free Candy” written on it pulled into their driveway and they didn’t call the police or warn their children not to get in the van, yes I would consider them liable.
The fact is, lots of parents do know their children are using social media like Facebook, Instagram, TikTok, etc. A lot of parents are my age and younger (the age where we grew up with the internet and social media in its toddler years, if not its infancy). A lot of us do know the dangers (and are probably addicted ourselves).
What some of us may lack is the knowledge to use parental controls effectively (and at least some of that is because we do dumb shit like using the same password for everything, or not changing default passwords).
But I also think that some of us (looking at you collective shout and other organizations like it) just want to offload our responsibilities onto these companies so we have someone to blame.
And even though I agree that what these companies are doing is wrong (directly targeting minors, deliberately making their platforms addictive, collecting data on minors etc), and I want them held accountable, I also don’t think ID collection is warranted, and I view this as a way to violate privacy and collect data for surveillance purposes which I believe is wrong to do to people who haven’t done anything illegal.
Even if that weren’t the case, these companies also just cannot be trusted to safeguard the PII data they’re wanting to collect. So as far as I’m concerned the ID verification thing is just not going to work.
I agree with most of that; however, the mentality for liability is the same as “well, what was she wearing” victim blaming. The parents aren’t the perpetrators.
I agree that parental knowledge to properly moderate kids’ usage of the internet is an issue, a skill issue. But that doesn’t mean it’s their fault the kids get addicted to these things and exploited. The ones who openly do not care are a different story; that’s child neglect as far as I am concerned.
I agree age verification and ID checks are absolutely not the answer and trying to censor the internet is not the answer. I think the answer more likely lies in holding companies accountable. There are reasons standards exist in many industries - to protect the consumer. As far as I am aware no standards exist when dealing with social media platforms.
Apologies I had a technical difficulty and posted the same comment several times.
Kids should be banned from the platforms.
But that requires the tools to do so. And then we are back at checking on ages and identities.
In truth this is part and parcel of age controls as an excuse to id everyone.
This is probably an extreme take, but kids shouldn’t be anywhere near a tablet while they’re still really young especially.
It’s kind of a tough balance. Yes, unrestricted tech use is an issue for young children, but on the other hand, using tech while young is the best way to make it a natural part of your experience of the world, and tech isn’t going away. If you go to the other extreme and say no tech of any sort, period, before a certain age, are you setting the kid back against more tech-literate peers? There’s also the consideration that’s been discussed around alcohol forever: by making it an “adult thing” and effectively a rite of passage to drink alcohol, do you cause more problems and abuse in young adults than if it was always a part of their experience and the focus was on responsible use instead of total abstinence?
I completely agree. It would be amazing if we could nationally or even globally enforce age restrictions to give an internet kiddy pool to let children learn and grow in a safe online environment. We live in a time where the people who are pushing this in the government should not be trusted to use that information for the real reason. “For the kids” is all made up and not helping kids. Giving up privacy in order to not help kids really highlights how corrupt the people pushing for this are.
The author reads like he doesn’t understand context or the legal idea of a rational actor. What users are going to purposefully upload boring content?
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
This guy has an addiction lol ironic