Yeah, don’t get me wrong, I 100% support the stated intent of the ban. It’s just a terrible way to go about it. Facebook is known to have commissioned internal studies on the psychological effects of changes to its algorithm, and when a study shows that a change causes harm but also produces a little more profit, they go with the profit every time. Why don’t we make that illegal?
If we have to do age-gating, why not require it to be done in a privacy-preserving way, such as parental controls, zero-knowledge proofs, or blind signatures? Parental controls would, in fact, be by far the easiest for everyone involved: the only information that would actually need to flow is a setting from the parent to their kids’ devices, with the devices then reporting “yes, this is a child” or “no, this is not a child”.
The answer is: because the government didn’t care. It didn’t want to actually fix the problem. It didn’t want to listen to experts’ opinions or consider the broader public’s concerns. It wanted to win some quick, easy PR. That’s why submissions on the legislation were open for just one day, and why Parliament didn’t even take the time to consider the small number of submissions that could be made in that limited window. A government that is acting seriously in response to a chronic threat (I can make some exception for quick responses to sudden, unexpected, acute crises) does not behave in this way. Ever. Good legislation takes time, and this sort of hurried response only indicates that the government knew it was doing the wrong thing and wanted to minimise the amount of time it was exposed to criticism.
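For what it’s worth, the blind-signature idea mentioned above is simple enough to sketch. Here’s a toy illustration of textbook RSA blind signatures in Python (tiny demo key, definitely not production crypto): an authority signs an age token without ever seeing it, so it can verify someone once and still be unable to link the resulting credential back to them later. The specific numbers and the “token” are made up for illustration.

```python
# Toy RSA blind-signature sketch (textbook RSA, tiny key, NOT secure --
# it only illustrates the privacy property, nothing more).
# The "authority" signs a token without ever seeing its contents, so it
# cannot later link the signed token back to the user it verified.

from math import gcd
import secrets

# Authority's RSA key (tiny demo primes; real keys are 2048+ bits)
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

# 1. User picks a token and a random blinding factor r coprime to n
token = 424242  # e.g. a hash of a fresh "this device belongs to an adult" credential
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break

# 2. User blinds the token; the authority only ever sees this value
blinded = (token * pow(r, e, n)) % n

# 3. Authority signs the blinded value with its private key
blind_sig = pow(blinded, d, n)

# 4. User unblinds, recovering a valid signature on the original token:
#    (token * r^e)^d * r^-1  =  token^d * r * r^-1  =  token^d  (mod n)
sig = (blind_sig * pow(r, -1, n)) % n

# 5. Anyone can verify with the public key, but the authority cannot
#    connect `sig` to the blinded value it actually signed
assert pow(sig, e, n) == token
print("signature valid and unlinkable:", pow(sig, e, n) == token)
```

A real deployment would use a standardised scheme (e.g. the RSA blind signatures described in RFC 9474) rather than raw textbook RSA, but the flow is the same: verify once, then present an unlinkable credential everywhere else.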
👆 All of this.
Parental controls can still be abused to isolate kids whose parents are extremely religious or abusive, but this is a much more reasonable and effective way to go about it.
Will some kids get around it? Yes, some kids will get around whatever you put in place. They will also get around this impending legislation.
This is what baffles me. I may be out of touch, but I haven’t seen a single response in support of the thing. Seems more like a big PR fail to me.
Not seen any of the polling, the comments under politicians’ Facebook posts about it, or heard interviews on the ABC? It most definitely is a very popular policy. Unfortunately, that’s mostly because of a lack of understanding of the nuance. They just see “social media = bad for kids, therefore this bill that says it’s going to stop that must be good”.
I guess I am out of touch then.
Because the aim of this law is not to protect kids, but to erase the last drops of privacy on the internet. Just another brick in the wall. Kids are just collateral damage.
Honestly I don’t think so. It would have been so easy for the politicians to not include a rule that specifically bars sites from using government ID as their only age verification method. And to not include a stipulation that any information gathered for age verification must not be used for any other purposes. But they did include those.
Hanlon’s Razor seems the best thing to apply here. There’s a lot of evidence of incompetence. Not a lot of good evidence of nefarious purpose.
totally agree. we do need a better way to verify age-appropriateness, but it’s always going to be more difficult when none of the organisations making the money have any incentive to do it, let alone do it thoughtfully.