• 0 Posts
  • 257 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • Lol strong “I don’t want buyer’s regret” energy from this guy. Or maybe “I am way out of my league when evaluating how good something is” with perhaps a dash of “boots are delicious”.

    Like he literally mentions that he can hear water sloshing around in the frame somewhere but then immediately concludes that it’ll probably go away on its own sometime in the future. I had a period in my life when I was like that. I consider it my “I had no fucking idea how naive I was or how things worked or how to take care of them” phase, and I was the last person anyone should have taken advice about anything from.



  • No, the exact % depends on how stable everything else is.

    Take a trivial example: three programs. One sets a pointer to a random address and tries to dereference it; one does the same but only when the last two digits of a timer it checks read “69”; and one never sets a pointer to an invalid address. Based on the programs themselves, the first will crash almost every run, the second will crash about 1% of the time, and the third won’t crash at all.

    If you had a mechanism to perfectly detect bit flips (honestly, that part has me the most curious about the OP), and you ran each program until you had detected 5 bit-flip crashes (say bit flips cause a crash in 1 out of every 10k runs), then for the first program something like 0.01% of its crashes would be due to bit flips, about 1% for the second, and 100% for the third (assuming no other issues, like OS instability, causing extra crashes).

    Going with those numbers I made up, if a program also crashed for other reasons 9 times per 10k runs, then every 10k runs you’d see 1 crash from bit flips and 9 from other causes: of every 10 crash reports received, 1 is a bit flip and 9 are “other”. More accurately, the measured split would be 1 in 20 bit flip and 19 in 20 other, since they actually only measured 5%; the 1-in-10 figure comes from assuming the detector only catches half of the flips.
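    The crash-attribution fractions for the three hypothetical programs can be sanity-checked with a few lines of awk (all numbers here are the made-up rates from above, not measurements):

    ```shell
    # Hypothetical rates: bit flips crash 1 run in 10k; the three programs
    # crash on their own ~100%, ~1%, and 0% of runs respectively.
    awk 'BEGIN {
      flip = 1 / 10000
      split("1.0 0.01 0.0", bug, " ")
      for (i = 1; i <= 3; i++)
        # fraction of observed crashes attributable to a bit flip
        printf "program %d: %.2f%% of crashes are bit flips\n", \
               i, 100 * flip / (flip + bug[i])
    }'
    # program 1: 0.01% of crashes are bit flips
    # program 2: 0.99% of crashes are bit flips
    # program 3: 100.00% of crashes are bit flips
    ```

    Which matches the ~0.01%, ~1%, and 100% figures above.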






  • Going for lesser-known names can also help, as they are trying to build or maintain a reputation in addition to making sales.

    IKEA is an interesting brand because it spans from incredibly cheap to decent quality, and personally, I find the cheapness shows more in the material selection than in the design. The furniture I got from them at my last place all survived the move to my current place, even the piece I got frustrated with and stopped being careful with while taking it apart; it still stands solid today. They are one of the few brands with decent value, though their prices can get pretty high at the high end.


  • Yeah, it’s more of a late-stage-capitalism “luxury” where the difference isn’t so much in the quality as in the price, because people conflate “price” with “quality” and “desirability”.

    And I do understand it, at least to a degree. I try to do research on more expensive items or ones I’m looking for quality in, but it’s kinda exhausting, and often a cycle of “I want thing, see it in store and remember I want it, look at options, no idea which (if any) are decent and which suck, start looking online, decide I don’t want to do this right now, move on, forget to do research, repeat next time I’m at that store”.

    The easy mode would be to look at the options, assume the cheapest ones suck and the most expensive is too much, and get one of the slightly cheaper ones. At which point, the seller just needs to set a higher price to get a sale on the crappy ones.




  • If you want a demo of how bad these AI coding agents are, build a medium-sized script with one, something with a parse -> process -> output flow that isn’t trivial. Let it do the debugging, too (i.e., tell it the error message or the unwanted behaviour).

    You’ll probably get the desired output if you’re using one of the good models.

    Now ask it to review the code or optimize it.

    If it were a good coding AI, this step shouldn’t involve much, as it would have applied the same reasoning during the code-writing process.

    But in my experience, this isn’t what happens. For a review, it has a lot of notes. It can also find and implement optimizations. The weights are the same; the only difference is that the context of the prompt has changed from “write code” to “optimize code”, which changes which correlations get activated. There is no “write optimal code” mode, because it’s trained on everything and the kitchen sink: you get correlations from good code, from newbie coders, and from lesson examples of bad ways to do things (especially if they’re presented in a “discovery” format, where the professor intended to explain in person why the slide’s code is bad but never wrote that on the slide itself).








  • An alternative that avoids the user-agent trick is curl | cat, which just prints the result of the first command to the console. curl > filename.sh will write it to a script file that you can review and then mark executable and run if you deem it safe, which is safer than doing a curl | cat followed by a curl | bash (because it’s still possible for the second curl to return a different set of commands than the first).
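    A minimal sketch of that save-review-run pattern (the URL is a placeholder, not a real installer):

    ```shell
    # Placeholder URL; substitute the actual installer location.
    url="https://example.com/install.sh"

    curl -fsSL "$url" -o install.sh   # save the exact bytes; don't pipe to bash
    less install.sh                   # read what you're about to run
    chmod +x install.sh               # only once you're satisfied
    ./install.sh                      # runs the reviewed copy, not a re-fetch
    ```

    Because the same saved file is both reviewed and executed, the server gets no chance to swap the content between the “look” and the “run”.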

    You can control the user agent with curl: spoof a browser’s user agent for one fetch, then do a second fetch with the normal curl user agent, and compare the results to detect these malicious URLs in an automated way.
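    A minimal version of that comparison, assuming a placeholder URL and a generic browser user-agent string:

    ```shell
    url="https://example.com/install.sh"   # placeholder

    # Fetch once pretending to be a browser, once as plain curl.
    curl -fsSL -A "Mozilla/5.0 (X11; Linux x86_64)" "$url" -o as_browser.sh
    curl -fsSL "$url" -o as_curl.sh        # default curl/x.y.z user agent

    # Any difference means the server varies its payload by user agent.
    if cmp -s as_browser.sh as_curl.sh; then
      echo "same payload for both user agents"
    else
      echo "WARNING: payload differs by user agent"
    fi
    ```

    One caveat: a malicious server could also key on timing or IP rather than the user agent, so identical payloads aren’t proof of safety on their own.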

    A command-line analyzer tool would be nice for people who aren’t as familiar with the commands and arguments (and to defeat obfuscation), though I believe the general problem is undecidable, so static analysis alone won’t likely ever be completely foolproof. Though maybe it could be if the script is run in a sandbox to observe what it actually does instead of just being analyzed.