AmbitiousProcess (they/them)

  • 0 Posts
  • 109 Comments
Joined 9 months ago
Cake day: June 6th, 2025



  • Turns out there’s not actually much functionality in these at all. An RFID reader and an RGB LED, whoop-de-shit.

    Where did you get that idea? They have an RFID reader and LED, yes, but they also have a speaker, microphone, accelerometer, light and color sensor, near-field magnetic position detection, and then have to fit the battery alongside all of that, all in a 2x4 brick.

    Here’s an example of what cutting-edge brick tech could look like.

    That brick displays one fixed thing and has to be entirely reflashed to change it, requires a 4x8 powered baseplate to operate, and compared to the smart brick it has no RFID, no LEDs, no sound, no color or light sensing, no accelerometer, no ability to detect other bricks near it, and no internal battery.

    The smart brick can play different sounds (fully interchangeable without reflashing the firmware) based on nearby minifigures and on interactable buttons and levers, can produce lights and sounds based on rotation and movement, can change how it interacts based on nearby smart bricks, and can also be charged wirelessly and operate standalone. And of course, it’ll be able to respond to sounds later on too.

    The brick from hackaday has a display. That’s it. It’s cool, yes, but it’s nowhere close to the smart brick.
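    To make that concrete, here’s a rough, purely hypothetical sketch of the kind of event-driven firmware logic being described. Every class, tag ID, and sound name below is invented for illustration; none of it is based on actual LEGO firmware.

    ```python
    class SmartBrick:
        """Toy model of a brick that reacts to RFID tags and motion."""

        # Hypothetical mapping of minifigure RFID tags to sounds; the point is
        # that a table like this could be swapped out without reflashing firmware.
        TAG_SOUNDS = {
            "minifig_knight": "sword_clash.wav",
            "minifig_wizard": "spell_cast.wav",
        }

        def on_rfid_detected(self, tag_id: str) -> None:
            # A nearby minifigure (or a tagged button/lever) triggers a sound.
            self.play_sound(self.TAG_SOUNDS.get(tag_id, "default_chime.wav"))

        def on_accelerometer_event(self, rotation_deg: float) -> None:
            # Rotation or movement triggers lights and sounds.
            if abs(rotation_deg) > 90:
                self.set_led("red")
                self.play_sound("tumble.wav")

        def play_sound(self, name: str) -> None:
            print(f"[speaker] playing {name}")

        def set_led(self, color: str) -> None:
            print(f"[led] now {color}")

    brick = SmartBrick()
    brick.on_rfid_detected("minifig_knight")          # minifigure nearby
    brick.on_accelerometer_event(rotation_deg=120.0)  # brick got flipped
    ```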




  • True, but that also depends on the circumstance.

    Again, a lot of people now use LLMs as their primary search engine. Google is an afterthought; ChatGPT is their source of choice. If they ask a simple question with legal or medical implications, one with tons of sources, and the LLM answers it with the same accuracy as those other publications, should the company be sued?

    I think it would be a lot better to allow people to sue if it provides false advice that ends up causing some material harm, because at the end of the day, a lot of stuff can be considered “medical.”

    Maybe a trans person asks what gender affirming care is. Is that medical? I’d say it is. Should that not get discussed through an LLM if a person wants to ask it?

    I’m not saying I wholeheartedly oppose this idea of banning them from giving this type of advice, but I do think there are real concerns about how many people it would actually benefit, versus just cutting people off from information they might not bother to look up elsewhere, or worse, pushing them to less reputable, more fringe sites with fewer safeguards and less accountability.


  • I’m not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.

    The reason so many people turn to LLMs for legal and medical advice is that both are incredibly expensive, complex, hard-to-parse fields.

    If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it’s probably the flu and that I should mask up for a bit, that’s probably gonna be a better outcome than being told “I’m sorry, I can’t answer that.”

    At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.

    I feel like I’d much rather see a bill that focuses on how the LLMs come to their conclusions than just a blanket ban.

    Like, for example, if an LLM cites multiple medical journals, government health websites, etc., and provides the same information those sources had published, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else’s accidental misinformation?

    But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?

    I’m not really sure myself, to be honest. A lot of people rely on LLMs for their information now, so blanket banning them from displaying certain information is, for a lot of people, just gonna mean “you can’t know”, and they’re not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.
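    Purely as an illustration of what “focus on how they come to their conclusions” could mean mechanically, here’s a toy sketch of source-grounded answering, where the system only surfaces claims it can attribute to a retrieved source and refuses otherwise. The sources, URLs, and attribution check are all invented; this isn’t any real product’s pipeline.

    ```python
    # Invented mini-corpus standing in for retrieved reputable sources.
    SOURCES = {
        "cdc.gov/flu": "Fever, cough, and body aches are common flu symptoms.",
        "nhs.uk/flu": "Rest, fluids, and staying home help limit flu spread.",
    }

    def grounded_answer(claims: list[str]) -> str:
        """Return claims with citations, or refuse if any claim is unsupported."""
        supported = []
        for claim in claims:
            # Deliberately naive attribution check: does any source mention
            # the claim's leading keyword? A real system would need far more.
            key = claim.lower().split()[0]
            citations = [url for url, text in SOURCES.items() if key in text.lower()]
            if not citations:
                return "I can't answer that with the sources available."
            supported.append(f"{claim} [{', '.join(citations)}]")
        return " ".join(supported)

    print(grounded_answer(["Fever is a common flu symptom."]))  # cited answer
    print(grounded_answer(["Bleach cures the flu."]))           # refusal
    ```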




  • I think Cory Doctorow does a good job of explaining this when he talks about Naomi Klein’s book Doppelganger. They both call it a “mirror world” of political beliefs.

    Qanon’s obsession with “child trafficking” is a mirror-world version of the real crises of child poverty, child labor, border family separations and kids in cages. Anti-vax is the mirror-world version of the true story of the Sacklers and their fellow opioid barons making billions on Oxy and fent, with the collusion of corrupt FDA officials and a pliant bankruptcy court system. Xenophobic panic about “immigrants stealing jobs” is the mirror world version of the well-documented fact that big business shipped jobs to low-waged territories abroad, weakening US labor and smashing US unions. Cryptocurrency talk about “decentralization” is the mirror-world version of the decay of every industry (including tech) into a monopoly or a cartel.

    It’s easy to be convinced by that type of logic. I used to be heavily into cryptocurrency because I saw the failure of capitalism to protect us from corporate consolidation and monopolies, so I assumed “this system that decries centralized authorities must be better.”

    They were only half right, though. It’s true that corporate consolidation was a problem, and that the centralization it brought causes issues, but that consolidation happens because of capital, which crypto is very much not against. Crypto heavily supports it, through tokens that give you ownership over a share of all the income a protocol generates, even if that protocol could run just fine indefinitely on-chain without paying you a fee.

    People slowly accumulate more wealth, more voting power, and eventually control how these “decentralized” protocols operate.
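    To see how that plays out mechanically, here’s a minimal sketch of token-weighted governance, the voting model most of these protocols use. All the names and balances are made up.

    ```python
    def tally(votes: dict[str, str], balances: dict[str, int]) -> dict[str, int]:
        """Count votes weighted by token balance ("one token, one vote")."""
        totals: dict[str, int] = {}
        for voter, choice in votes.items():
            totals[choice] = totals.get(choice, 0) + balances.get(voter, 0)
        return totals

    balances = {"whale": 5_000_000, "alice": 1_000, "bob": 800, "carol": 650}
    votes = {
        "whale": "raise_fees",  # one large holder
        "alice": "lower_fees",
        "bob": "lower_fees",
        "carol": "lower_fees",
    }

    print(tally(votes, balances))
    # {'raise_fees': 5000000, 'lower_fees': 2450}
    # One wallet outvotes everyone else combined: whoever accumulates the
    # most capital ends up controlling the "decentralized" protocol.
    ```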

    In the same way, MAGA thinking has the same problem, where they’ll correctly identify an issue or motive, but entirely misjudge its cause, fight the wrong enemies (or worse, support their true enemies), and only later realize things have just kept getting worse.






  • The main point of a grace window is to give ICE agents a choice where quitting is the best option, as opposed to incentivizing doubling down.

    If it works immediately, they’ll just go “welp, nothing I can do about it now, might as well stay with ICE and keep collecting a paycheck, since even if I quit it won’t matter!” vs. “damn, maybe I shouldn’t take the risk; I should get a different job now, before that 30-day window closes on me.”

    Personally, I’d like to see a mix of some immediate punishment and some conditional punishment that’s waived if they quit (e.g. an extra tax on all your income for the next x years if you were an ICE agent at any time during the stated period, whether you quit later or not, plus the ban applying only if you continue your employment there).
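    To make the incentive argument concrete, here’s a toy payoff comparison. Every number is invented purely for illustration.

    ```python
    SALARY = 80_000      # hypothetical annual pay for staying with ICE
    ALT_SALARY = 60_000  # hypothetical pay at a replacement job
    PENALTY = 500_000    # hypothetical total cost of the sanction

    def best_choice(grace_window: bool) -> str:
        # Without a grace window the penalty is sunk either way, so only
        # the paycheck difference matters; with one, quitting escapes it.
        stay = SALARY - PENALTY
        quit_ = ALT_SALARY if grace_window else ALT_SALARY - PENALTY
        return "quit" if quit_ > stay else "stay"

    print(best_choice(grace_window=False))  # stay: might as well keep the paycheck
    print(best_choice(grace_window=True))   # quit: the penalty is still avoidable
    ```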




    UPDATE: The article has now linked to the newly published study. It claims a maximum bisphenol concentration of 351 mg/kg, above the 10 mg/kg limit proposed by ECHA, but they don’t give any concrete numbers on how likely any of those bisphenols are to actually leach from the product into your body. The average sum of all bisphenols per sample was just 15 mg/kg. They note the parts not touching the skin often had more bisphenols than the parts actually touching the skin: the skin-contacting areas landed in their “green” category (i.e., fairly in compliance with most protective standards) about 50% more often than the non-skin-contacting ones.

    Of the parts touching the skin, 68% were green, 21% yellow, and 11% red.

    As for flame retardants, 100% of products with HFRs were green, and 84% with OPFRs were green.

    For phthalates, 87% were green, and less than 1% were red.

    Essentially, the TLDR is that most of the things they tested either met most standards, were very close to meeting them, or technically didn’t meet standards but mostly just in areas that didn’t even come in contact with the skin at all. AKA, it’s mostly overblown.
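    For context on what those color categories mean mechanically, here’s a rough sketch of threshold-based categorization. The 10 mg/kg figure is ECHA’s proposed limit from the article; the yellow-band cutoff is a placeholder I made up, since the study’s actual boundaries aren’t given here.

    ```python
    ECHA_PROPOSED_LIMIT_MG_PER_KG = 10.0

    def categorize(concentration_mg_per_kg: float) -> str:
        """Bucket a measured concentration against the proposed limit."""
        if concentration_mg_per_kg <= ECHA_PROPOSED_LIMIT_MG_PER_KG:
            return "green"   # within the proposed limit
        if concentration_mg_per_kg <= 5 * ECHA_PROPOSED_LIMIT_MG_PER_KG:
            return "yellow"  # hypothetical "close to compliant" band
        return "red"

    # 351 mg/kg is the article's reported maximum; the other values are invented.
    for sample in [2.5, 15.0, 351.0]:
        print(sample, "->", categorize(sample))
    ```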

    Original Post:
    No source linked in the article, no visible press releases that aren’t just sites pretending to be a real press release while citing the articles, no official blog posts, and the only official-sounding mention of this from a more direct source is a coalition on LinkedIn saying a person at a sub-group of the broader project was gonna talk with them about it.

    No stats, no numbers, just “they found it” in the headphones.

    You could find a chemical well under the safe limit in drinking water, and say “we found x in your water” and make a big scare of it when it’s not a big deal.

    While I have no doubt BPA and its counterparts could be used in the manufacturing of headphones, without any actual data this is literally no better than when your uncle at Thanksgiving starts yapping about how the government found some data one time and that means you should never drink tap water again.