First things first: Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust & safety, all while prioritiz…
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
Now now now, ladies and gentlemen, I’m just a simple country lawyer, and I sure love me some mashed potatoes. I love mashed potatoes; I eat them every day. I love mashed potatoes so much that, hell, I’ll have them with anything. I also love my gun, but I wouldn’t eat my gun! [Hold for laughter.] Now, what if I had mashed potatoes with my gun? Not like— [picks up revolver from the displayed evidence and pantomimes using it as a fork, putting the barrel all up in his mouth; jury roars with laughter]. No. Imagine that I’m stuffing my mashed potatoes into this gun! There’s mashed potatoes in the barrel, mashed potatoes in the chambers, mashed potatoes gunking up the cylinder and hammer… Do you think this gun will fire? Of course not! I could point my mashed-potato gun at anyone in this court [muzzle sweeps the jury] and no one would even flinch. How could something that can be defeated by MASHED POTATOES be dangerous? Hell, how could a person holding such an impotent device pose any sort of danger? Have you ever killed anybody with mashed potatoes? Have YOU?? We all know opposing counsel’s argument: that my client “intentionally shot, at point blank,” my client’s own best friend. A best friend is someone you eat mashed potatoes with! Not someone you murder and then “steal” their suspiciously unopened Star Wars memorabilia from… This is why you must return a verdict of “guilty” and award my client $50 million from the so-called “victim’s” family for psychological and emotional damages, as well as the cost of selflessly grinding up and eating his best friend’s body to save the family funeral costs. The prosecution rests.
This feels like an awful argument to make. It’s not the presence of those things that makes Meta and co. so shit; it’s the fact that they provably understood the risks and the effects their design was having, knew it was harming people, and continued to do it anyway. I don’t care if we’re talking about a little forum run by a grandma and grandpa trading jam recipes; if they know they’re causing harm and don’t change their behavior, they should be liable.
The harm doesn’t come from infinite scroll, autoplay, or algorithmic recommendations in a vacuum.
But we have statistical evidence that when you gamify the system and the content is harmful to overconsume, those two factors together are what make it dangerous.
Tricking the brain into doing something harmful to itself through gamification is the problem. The algorithm, autoplay, and infinite scroll are just mechanisms to facilitate that. Novelty plays only a small part. The algorithm by itself doesn’t provide a dopamine hit. Infinite scroll by itself doesn’t provide a dopamine hit. Autoplay by itself doesn’t provide a dopamine hit.
Even when you combine all three, the hit won’t come if the content being pushed isn’t enough to trigger a rush of dopamine. That rush often comes from things like upvotes and downvotes, badges, and achievements. Follower counts and the other metrics individual users rely on for dopamine are being weaponized against them to make money. And that was intentional on the part of Meta execs.
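The mechanism described above — features as delivery channels, social metrics as the intermittent reward — is essentially a variable-ratio reinforcement schedule from behavioral psychology. Here is a minimal toy sketch of that idea; every name and number is made up for illustration and is not any platform’s actual logic:

```python
import random

def session_length(reward_prob, patience=5, max_scrolls=500, seed=0):
    """Toy model: a user keeps scrolling until `patience` consecutive
    scrolls pass with no social reward (like, reply, follower ping)."""
    rng = random.Random(seed)
    dry_streak = 0
    for scrolls in range(1, max_scrolls + 1):
        if rng.random() < reward_prob:
            dry_streak = 0   # an intermittent reward resets the urge to quit
        else:
            dry_streak += 1
        if dry_streak >= patience:
            return scrolls   # user gives up after a long dry spell
    return max_scrolls

# reward_prob=0.0 is the paint-drying feed: no social rewards at all.
# reward_prob=0.3 models occasional likes/notifications on the same feed.
```

With `reward_prob=0.0` the session in this toy ends after five dry scrolls; with intermittent rewards it typically runs far longer. The delivery mechanism (the scroll loop) is identical in both cases — only the reward signal differs.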
“We designed, marketed, and sold the gun, but we didn’t think anyone would use it.”
https://youtu.be/ekg45ub8bsk?t=52
Entire clip: https://youtu.be/ekg45ub8bsk
It’s like if someone had a forum where insurrectionists were discussing how to build bombs and where they were going to use them, and the owners had an internal meeting where they said, “Hey, we’re hosting some pretty awful people, should we maybe report them or shut this down?” and the answer was, “Nah, they’re paying users, and we want their money.”
Pretty sure Section 230 wouldn’t protect them, either.
Yeah, this feels very much like “censor content, but don’t change Meta’s practices.”
Which raises the question: does the author know what they’re cheering for?
You can bet they do.
It’s like he’s describing a slot machine with unpainted wheels, leaving out the context that it’s in a casino with a big “paint me and enjoy a share of the profit” sign above it.
The social media machine was designed to be a self-serve addiction generator. It intentionally used every trick it could legally get away with.
Also, they can now generate content without users, which they already do a lot of on Facebook.
I don’t know. Seems like self-control issues. People can get addicted to anything: shopping, sex, internet use, work, gaming, exercise. I also disagree with prohibitions on gambling, drug use, prostitution: it’s their money, their body, etc.
Penalizing systems of communication & information delivery seems overreach. The harm seems phony & averted by basic self-control.
Addictive Personality is a proposed set of traits that makes sufferers more vulnerable to developing addictive behaviors, including things like gambling or social media. Does it help to frame it in a different light for you if you think of it as those companies exploiting vulnerable peoples’ disorders to extract money from them?
Telling those people to just have self control is like telling someone with depression to just stop being sad.
Not at all: we don’t go winning lawsuits against any of those companies for promoting themselves to appeal to consumers just because the dysfunctional among us may overconsume their products. Liberty comes with accepting responsibility for the reasonably foreseeable consequences & risks of our choices; otherwise, no one will be able to realize liberty, because someone will make their own responsibility everyone else’s duty. Society can’t reasonably be expected to cater to everyone’s irrational or dysfunctional manifestations & whims. The legal standard is the reasonable person, not the dysfunctional one. Moreover, the existence of children doesn’t imply we need to childproof all of society: people are still entitled to liberty in their adult activities & vices.
When risks are open & obvious, such as the overconsumption of certain foods & legal substances, that’s generally viewed as a matter of personal choice rather than an unreasonably dangerous product defect. Even when kids grow obese from overeating junk food, blame primarily lies with whoever provides them that food rather than with the product itself, no matter how appealing the design of the food, the design on the container, or its advertisements. Especially with the latest wave of moral panic over social media, the risks & dysfunctions of obsessively overconsuming social media, or any information service, to the extent it impairs us are open & obvious. Parents who give their children these devices, observe excessive attachment, and don’t cut them off bear considerable responsibility.
Information & the devices to view it are generally benign & noncoercive. People use these services because they find them useful & engaging to their interests. Features that effectively meet user demand for engaging information offer legitimate utility to a reasonable person without impairing them. Such features aren’t defects, and “fool-proofing” them would hamper their utility to functional adults who can deal with the “dangers” of attention-grabbing information.
However, even supposing such features defectively make the system unreasonably dangerous in a reasonably foreseeable manner, that only demands that service providers provide fair warning. Once duty to warn has been met, users are reasonably aware of risks and responsibility shifts to risk-takers or parents who give children access despite reasonably knowing the risk.
We can’t rearrange all of society just because some people have depression. Liberty means not imposing on others issues we should be dealing with ourselves or through appropriate services specifically for that.
While I do agree that parents should bear the brunt of the responsibility here, you must realize that kids are resourceful and no amount of parental oversight will stop a determined kid from accessing this content. Parents aren’t in their presence 24/7, and just like a kid whose parents deny them candy can find plenty of ways to obtain it without their parents knowing, the same is true for social media use. It’s the old adage that the more you tighten your grip, the more slips through your fingers.
You keep using that word, but this isn’t really about personal freedoms at all. It’s about companies that saw that their product was causing harm, and actively made the decision to continue promoting that harmful product in the name of profits. Their products were specifically engineered to cause these outcomes, and you’re defending their right to do that. Do you just propose we allow companies to do whatever they want in the name of profits, no matter the cost to society? If not, where do you draw the line? How much harm do they have to knowingly cause before you think it’s too much?
We restrict alcohol and cigarette use by underage people, too, actually, because their effects are known to be harmful, so I’m not sure what point you’re trying to make here. Nobody’s talking about making social media use illegal for adults.
Basically, I think you’re arguing against social media restrictions for kids, which is fine, but that’s a completely different discussion. It’s related, but it’s not what this article is about — this article is about holding corporations responsible for bad behavior. If that isn’t what you want to discuss, why are you here?
Okay, I think you’re just not understanding the situation here. Meta did research on the effects of social media. They found that it was harmful. Even after determining that, they continued to promote it as not harmful. Zuckerberg even testified that evidence of social media’s harm didn’t exist, after Meta had found that very evidence. This all came to light through whistleblower testimony. So even if we accept your premise, the duty to inform was not met, and that’s part of what’s at issue here.
Or telling someone stupid to be more clever, as the case may be.