

Will a Tesla even start if the internet is down?


Hey, I just thought of a brilliant idea. Instead of sticking that under a mat, under all the other shit that's probably stored in the storage pocket, maybe they could put it somewhere more easily accessible, like right above the door pocket. And instead of a steel cable to pull, maybe they could use some sort of lever that pulls the cable without needing to see it. And since that would be so easy to access, the normal way of opening the door that requires power becomes redundant and could be removed to save costs.
Oh, but the electronic opener also lowers the window slightly, because the window forms part of the seal and would otherwise break, and they couldn't design it in a way that would work with the normal window position? Why would you do that? So there's a chance that opening the door in winter will smash the window, because sometimes those mechanisms freeze in the cold? Or do they constantly run heaters to avoid this?


Woah there, they didn't add a keyboard button for AI. They replaced a button with it. My shitty Windows laptop now has only one Ctrl key to make room for another key I'll only ever press accidentally, just like the first fucking Windows key I didn't want.


The manifesto mentions this, and that tooling had been made by volunteers but leadership ignored or rejected it (it wasn't clear which). So it seems they are firing their leadership for the same reasons you want to stay away, which is a good sign, at least. It's like promising they are willing to mutiny to stop the enshittification.


She had her hearing-ear dog with her but just ignored its signal barks! That's probably why the dog wasn't kicked off the plane: those involved saw it was trying to do its job and didn't think it deserved to miss its flight.


Training program involves treats attached to the dogs’ tails.


Lmao, he’s going to campaign on stopping the war in Iran, isn’t he?


What a great rebuttal, a no followed by a condescending insult.


Don’t forget that people who can make the decisions can also bet.


I dunno, I find it hard to respect laws intending to protect people from their own choices, especially when the majority of people can enjoy the thing (or just ignore it on their own) without any problems.
Try to idiot-proof the world and the world just comes up with a better idiot.


Thinking you can say something and avoid it being challenged by adding shit like “anything you can argue against it doesn’t matter” is the insufferable thing on display here. Almost as insufferable as another person chiming in about how insufferable those who won’t just take that at face value are.


Is that offer still open to friendly nations? As I understand it, they have been mining the strait, and things seem a bit too chaotic to mine it in a way that leaves a safe and known path, let alone communicate that path to those they want to let through while denying those they don't. Unless the mines are remotely detonated instead of proximity-triggered, but I think part of the point was to make the embargo passive, so that carpet bombing the area wouldn't be an effective counter.


Stop when you feel like it, just like any other verification method. You can't really prove that software has no problems; it's more a matter of "try to think of every problem you can and do your best to make sure it doesn't have any of those" plus "just run it a lot and fix any problems that come up".
An LLM is just another approach to finding potential problems. And it will eventually say everything looks good, though not because everything is good but because that's what happens in its training data, and eventually those become the best-correlated tokens (assuming it doesn't get stuck flipping between two or more sides of a debated issue).
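The "run it a lot" half doesn't have to be fancy. A throwaway loop like this is basically the whole method (toy function and names made up for illustration):

```python
import random

def parse_port(s):
    """Toy function under test: parse a TCP port number from a string."""
    n = int(s)
    if not 0 < n < 65536:
        raise ValueError("port out of range")
    return n

# "Run it a lot": throw a pile of random inputs at it and see what survives.
# Note we stop when we feel like it (1000 tries), not when anything is proven.
failures = []
for _ in range(1000):
    s = str(random.randint(-100, 100_000))
    try:
        parse_port(s)
    except ValueError:
        pass  # expected rejection for out-of-range values
    except Exception as e:
        failures.append((s, e))  # anything else is a real bug to go fix
```

The point being: whether the problem-finder is a fuzz loop, a human reviewer, or an LLM, none of them terminate with a proof. You just stop when the cost of looking exceeds your worry.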


They might have thought they could hold the world economy hostage to force other countries to act. Or the whole wanting-things-to-get-better is an act, and the economic disruption is the whole point.


It helps in the sense that once you've looked at code enough times, you stop really seeing it. So many times I've debugged issues where I'd looked many times at an error that was obvious in hindsight but that I just couldn't see before. And that's in cases where I knew there was an issue somewhere in the code.
Or for optimization advice, if you have a good idea of how efficiency works, it’s usually not difficult to filter the ideas it gives you into “worthwhile”, “worth investigating”, “probably won’t help anything”, and “will make things worse”.
It’s like a brainstorming buddy. And just like with your own ideas, you need to evaluate them or at least remember to test to see if it actually does work better than what was there before.


Though on that note, I don’t think having an LLM review your code is useless, but if it’s code that you care about, read the response and think about it to see if you agree. Sometimes it has useful pointers, sometimes it is full of shit.


Yeah, they don't do analysis, but they can fool people because they can regurgitate someone else's analysis from their training data.
It could just be matching a pattern like "I have a network problem with <symptoms>. Your issue is <problem> and you need to <solution>," where the problem and solution are related to each other but the problem isn't related to the symptoms, because the correlation with "network problem" ends up being stronger than the correlation with the description of the symptoms.
And that specific problem could likely be solved just by adding a description of that process to the training data. But there will be endless different versions of it that won’t be fixed by that bandaid.


Oh if the alliance is non-nato then it means he might honour it or something?


You know what's going on inside the large companies that are hoping to cash in on the AI thing? All workers are being pushed to use AI, and goals are set targeting x% of all code written being AI-generated.
And AI agents are deceptively bad at what they do. They are like the djinn: they will grant the letter of your request but not the spirit. E.g. they love to use helper functions but won't necessarily reuse them, instead writing new copies each time they need one.
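A made-up sketch of the kind of thing I mean (hypothetical names, not from any real agent transcript):

```python
# Two functions an agent might generate in the same session. It writes a
# fresh formatting helper for the second feature instead of reusing the
# one it already made — the letter of "use helper functions", not the spirit.

def format_price(cents):
    """Helper the agent wrote for the first feature."""
    return f"${cents / 100:.2f}"

def build_receipt_line(item, cents):
    # Second feature: a near-identical helper gets re-created inline
    # instead of just calling format_price above.
    def format_amount(c):
        return f"${c / 100:.2f}"
    return f"{item}: {format_amount(cents)}"
```

Each function passes its own tests, so the request was technically granted, and the duplication only bites later when one copy gets fixed and the other doesn't.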
Here's a test that will show that, with all the fancy advancements they've made, they are still just advanced text predictors: pick a task, have an AI start it and then develop it over several prompts, and test and debug it (debugging via the LLM too). Now ask the LLM to analyze the code it just generated. It will have a lot of notes.
An entity using intelligence would use the same approach to write the code as it does to analyze it. Not so for an LLM, which is just predicting tokens with a giant context window. There is no thought pattern behind it, even when it predicts a "thinking process" before it acts. It just fits your prompt to the best match out of all the public git repos it was trained on, from commit notes and diffs, bug reports and discussions, Stack Exchange exchanges, and the like, which I'd argue is all biased towards amateur and beginner programming rather than expert level. Plus it now includes other AI-generated code.
So yeah, MS did introduce bugs in the past, even some pretty big ones (that was my original reason for holding back on updates, at least until the enshittification really kicked in). But now they are pushing what is pretty much a subtle bug generator on the whole company, so it's going to get worse. Admitting it has fundamental problems would pop the AI bubble, though, so instead they keep trying to fix it with bandaids in the hope that it'll run out of problems before people decide to stop feeding it money (which still isn't enough, but at least there is revenue).
Most of the AI industry is currently stuck in a kind of uncanny valley where it’s close enough to fool people who don’t care about details or who so desperately want to make money that they deny the reality that these AIs aren’t actually good at very much.
But there’s been so much money invested in it that they are desperate to make it generate some revenue and profit and keep shoving it into things, hoping that their thing will be the one the public finally latches on to.
It's also the management types that really bought into it. The kind of managers who don't know shit and will make impossible requests, or think something simple is hard and something hard is simple, because they don't actually know much about the jobs they are managing. But they do have the power to direct those under them to use the AIs, as well as get rid of or dismiss the opinions of those pointing out the emperor has no clothes.
Right now, they are hoping to find that substance that will keep the AI bubble from popping. But IMO the problem is fundamental to the big data approach to AI of “throw a ton of data at a generic correlation engine and hope that it ends up smart”.