

I genuinely feel like half my brain just atrophied watching this guy.


Turns out there’s not actually much functionality in these at all. An RFID reader and an RGB LED, whoop-de-shit.
Where did you get that idea? They have an RFID reader and LED, yes, but they also have a speaker, microphone, accelerometer, light and color sensor, near-field magnetic position detection, and then have to fit the battery alongside all of that, all in a 2x4 brick.
Here’s an example of what cutting-edge brick tech could look like.
That brick's display is fixed unless it's entirely reflashed, it requires a 4x8 powered baseplate to operate, and, compared to the smart brick, it has no RFID, no LEDs, no sound, no color or light sensing, no accelerometer, no ability to detect other bricks near it, and no internal battery.
The smart brick can play different (fully interchangeable without firmware reflashing) sounds based on nearby minifigures and interactable buttons and levers, can display lights and sounds based on rotation and movement, can change how it interacts based on nearby smart bricks, and can also be charged wirelessly and operate standalone. And of course, it’ll be able to respond to sounds later on too.
The brick from hackaday has a display. That’s it. It’s cool, yes, but it’s nowhere close to the smart brick.


Before people get all up in arms about the non-replaceable battery… Do you know how small a LEGO brick is? For them to pack all this functionality in there, they have to be EXTREMELY careful with how they use every millimeter of space, and they have to make sure a kid won’t just… pop open the bottom of the brick and eat the battery or something.
The article itself even states:
As you can see in JerryRigEverything’s destructive teardown, it’s difficult to even get at the battery without going through thin, hair-like antennas.
Break even one of them and the entire brick is nonfunctional.


Make sure to sign up via a creator’s link! (the ones they’ll put in the sponsored section of a video where they are “sponsored” by Nebula as one of Nebula’s creators)
Gets you a pretty good discount and drops it to about 30 bucks a year.


True, but that also depends on the circumstance.
Again, a lot of people just use LLMs now as their primary search engine. Google is an afterthought, ChatGPT is their source of choice. If they ask a simple question with legal or medical implications, with tons of sources, that the LLM answers with identical accuracy to those other publications, should they be sued?
I think it would be a lot better to allow people to sue if it provides false advice that ends up causing some material harm, because at the end of the day, a lot of stuff can be considered “medical.”
Maybe a trans person asks what gender affirming care is. Is that medical? I’d say it is. Should that not get discussed through an LLM if a person wants to ask it?
I’m not saying I wholeheartedly oppose this idea of banning them from giving this type of advice, but I do have real concerns about how many people it would actually benefit versus just cutting people off from information they might not bother to look up elsewhere, or worse, pushing them toward less reputable, more fringe sites with fewer safeguards and less accountability instead.


I’m not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.
The reason so many people turn to LLMs for legal/medical advice is because those are both incredibly unaffordable, complex, hard to parse fields.
If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it’s probably the flu and that I should mask up for a bit, that’s probably gonna be better than being told “I’m sorry, I can’t answer that.”
At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.
I feel like I’d much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.
Like for example, if an LLM cites multiple medical journals, government health websites, etc, and provides the same information they had up, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else’s accidental misinformation?
But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?
I’m not really sure myself to be honest. A lot of people rely on LLMs for their information now, so just blanket banning them from displaying certain information, for a lot of people, is just gonna be “you can’t know”, and they’re not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.


The insurance company is going to have a doctor who said you don’t need it.
To add on to this, my psychologist told me that he’s had antipsychotic meds denied by a urologist before, because the insurance companies often don’t actually care what field the doctor is in. All they care about is getting to say “a doctor” reviewed it.


AND write a strongly worded letter! They’ll back down after THAT!



I think Cory Doctorow does a good job of explaining this when he talks about Naomi Klein’s book. They both call it a “mirror world” of political beliefs.
Qanon’s obsession with “child trafficking” is a mirror-world version of the real crises of child poverty, child labor, border family separations and kids in cages. Anti-vax is the mirror-world version of the true story of the Sacklers and their fellow opioid barons making billions on Oxy and fent, with the collusion of corrupt FDA officials and a pliant bankruptcy court system. Xenophobic panic about “immigrants stealing jobs” is the mirror world version of the well-documented fact that big business shipped jobs to low-waged territories abroad, weakening US labor and smashing US unions. Cryptocurrency talk about “decentralization” is the mirror-world version of the decay of every industry (including tech) into a monopoly or a cartel.
It’s easy to be convinced by that type of logic. I used to be heavily into cryptocurrency because I saw the failure of capitalism to protect us from corporate consolidation and monopolies, so I assumed “this system that decries centralized authorities must be better.”
They were only half right, though. Corporate consolidation is a real problem, and the centralization it brings causes real issues, but that consolidation happens because of capital, which crypto doesn’t oppose at all. It actively rewards it, through tokens that give you ownership of a share of all the income a protocol generates, even when that protocol could run just fine indefinitely on-chain without paying anyone a fee.
People slowly accumulate more wealth, more voting power, and eventually control how these “decentralized” protocols operate.
In the same way, MAGA thinking has the same problem, where they’ll correctly identify an issue or motive, but entirely misjudge what the cause of it is, will fight the wrong enemies (or worse, support their true enemies), and only later will they realize things have just kept getting worse.


I’m honestly surprised they never got hit for this. It’s one thing for our antitrust system to be shit, but a policy that explicitly states “you have to give us the best possible price or we will kick you off our platform and take away the majority of your possible customers” isn’t even burying the lede.


Same here.
Go to Android Developer Settings > Display Cutout, set it to one of the other options and it should shift the app down a bit so you can access the buttons. (change it back after ofc)
I used “waterfall cutout” but others might work depending on your phone model. Afaik no other fix is possible without the app’s code itself being modified.


So if my fist spontaneously uses force against a fascist’s face, it’s legal? Oh wait, I forgot, laws only apply to the people who don’t like fascists now.


Yep, also owned by archive.today.
As is archive.is, archive.fo, archive.li, archive.md, and archive.vn.


The main point of a grace window is to give ICE agents a choice where quitting is the best option, as opposed to incentivizing doubling down.
If it works immediately, they’ll just go “welp, nothing I can do about it now, might as well stay with ICE and keep collecting a paycheck since even if I quit it won’t matter!” vs. “damn, maybe I shouldn’t take the risk and I should get a different job now instead before that 30 day window closes in on me”
Personally, I’d like to see a mix to have some immediate punishment and some optional (if they quit) punishment too. (e.g. an extra tax on all your income for the next x number of years if you were an ICE agent at any time during the stated period, whether you quit later or not, plus being banned only if you continue your employment there)


Good news, someone’s already trying to get a bill like that passed in California!
It’s the “GTFO ICE” bill. I’m not joking, that is the actual name.
A Southern California lawmaker is behind new legislation that would disqualify U.S. Immigration and Customs Enforcement agents or other law enforcement personnel who engage in immigration enforcement activities from being hired as a local, county or state public agency employee in California.


Unfortunately Anubis wouldn’t stop the bots, it would just slow them down.
Anubis just adds proof of work, AKA computation, to your requests. It’s why your browser takes a second before it can access the site. It’s nothing for things on your scale, but it’s a fuck ton of time and money for large scraping operations accessing millions of links every day.
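Anubis’s actual challenge format is its own, but the asymmetry it relies on is the classic hashcash one: the client burns many hashes finding a nonce, while the server verifies with a single hash. A minimal sketch (function names are mine, purely illustrative):

```python
import hashlib
import secrets

def solve_challenge(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so sha256(challenge + nonce) starts with
    `difficulty` zero hex digits. Cheap once, expensive at scraper scale."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash, so it costs the server almost nothing."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# One visitor: a fraction of a second. A crawler hitting millions of
# URLs pays this cost millions of times over.
challenge = secrets.token_hex(8)
nonce = solve_challenge(challenge, difficulty=4)
assert verify(challenge, nonce, difficulty=4)
```

Each extra zero of difficulty multiplies the expected work by 16, which is why it’s negligible for one page load but adds up fast for mass scraping.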
For a bot submitting PRs though, it’s not gonna be a meaningful hindrance unless the person is specifically running a bot designed to make thousands of PRs every day, which a lot of these aren’t.
Really unfortunate.


UPDATE: The article now links to the newly published study. It reports a maximum bisphenol concentration of 351 mg/kg, above the 10 mg/kg limit proposed by ECHA, but gives no concrete numbers on how likely any of those bisphenols are to actually leach from the product into your body. The average sum of all bisphenols per sample was just 15. They note the parts not touching the skin often had more bisphenols than the parts that do touch it: skin-contacting areas landed in their “green” category (meaning fairly in compliance with most protective standards) about 50% more often than non-contacting ones.
Of the parts touching the skin, 68% were green, 21% yellow, and 11% red.
As for flame retardants, 100% of products with HFRs were green, and 84% with OPFRs were green.
For phthalates, 87% were green, and less than 1% were red.
Essentially, the TLDR is that most of the things they tested either met most standards, were very close to meeting them, or technically didn’t meet standards but mostly just in areas that didn’t even come in contact with the skin at all. AKA, it’s mostly overblown.
Original Post:
No source linked by the article, no visible press releases (beyond ones that just cite the articles while dressing themselves up as real press releases), no official blog posts, and the only official-sounding mention of this from a more direct source is a coalition on LinkedIn saying a person at a sub-group of the broader project was gonna talk with them about it.
No stats, no numbers, just “they found it” in the headphones.
You could find a chemical well under the safe limit in drinking water, and say “we found x in your water” and make a big scare of it when it’s not a big deal.
While I have no doubt BPA and its counterparts could be used in manufacturing of headphones, without any actual data, this is literally no better than when your uncle at Thanksgiving starts yapping about how the government found some data one time and that means you should never drink tap water again.


I hate Meta too, but:
Unreleased Meta product
the company says it was never launched, as a result of that testing.
This isn’t as bad as people are making it out to be.
Sure, it’s a problem that Meta is developing technology we know can be damaging and go off the rails, and yes, their chatbots have literally flirted with children before, but this specific instance isn’t that bad given they just… didn’t launch it after finding out it wasn’t working as it should.


Gotta keep those engagement numbers up.
It’s also harder to game for SEO (fediverse content is inherently fragmented across many small domains, rather than concentrated on one large, high-reputation domain for ranking algorithms to favor), and it has no built-in monetization system (unlike platforms like Twitter with their ad payouts), so that’s a couple more things going for us.