

And you can decide the thing you want isn’t worth the cost.


It might be saving some portion of young women from ending up in a toxic relationship with someone with the emotional intelligence of a noodle.
Though imagine the rage if these guys realized how many of them are “dating” the same model. Other guys have probably even figured out how to prompt them better.


I meant like births, as in even if you can enumerate every single individual, statistics can apply to future members that don’t yet exist.
And yeah, it’s been a while and I remembered that the proof didn’t depend on the population size but forgot that it assumed a large population size in the first place. I was wrong.


Games rely on more than just the OS API and even variation between Linux flavours or installed libraries on the same flavours can make compatibility difficult. My success rate at running games with a Linux native version is maybe 50% before I fall back to proton and the windows version. The consistency helps, though kudos to the developers who put in the effort to get their games working on Linux in general rather than just their particular systems.
The gpu library is a big one. There’s OpenGL, DirectX, and Vulkan (which is the successor to OpenGL) that I know of. Linux and windows support all three, in some form or manner, but afaik mac only supports OpenGL (and even that’s deprecated now in favour of Apple’s own Metal API), which really holds back game development, especially with DX being the most popularly targeted one.
Though my info might be a bit dated because I dgaf about macs generally, just wanted to point out that the shared roots between mac and Linux don’t necessarily mean targeting one would make targeting the other easier in a meaningful way.
Maybe one day they’ll sell a dongle to play games (which is really just a live boot linux install).


Then any statistics you measure on that population might be fully accurate for those 100 but might be less able to predict what the next 100 will look like.
You can still measure stats with smaller groups, it just means the confidence interval is wider. With 300, there’s a 95% chance your test results are close to reality. With 100 it might be more like 66%.
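How the interval widens as the group shrinks can be sketched with the standard normal-approximation formula for a proportion (the p = 0.5 worst case and z = 1.96 here are my assumptions for illustration, not anything from the comment):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

# Bigger samples give narrower (tighter) confidence intervals.
print(f"n=300: ±{margin_of_error(300):.1%}")  # roughly ±5.7%
print(f"n=100: ±{margin_of_error(100):.1%}")  # roughly ±9.8%
```

Since the margin shrinks like 1/√n, you have to quadruple the sample size to halve the interval, which is why bigger groups give noticeably steadier numbers.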


Lmao, so the LLM framework falls back to similar shit to what ALICE used?


So far so good, though we’ll see how my own time smoking and vaping catches up to me in the end.
And then you hear shit like living in a city is as bad for your lungs as smoking from all the car exhaust and figure “well at least I’m back down to just smoking risk instead of 2x smoking risk”. I am (hopefully) relatively healthier than I was when I did smoke, though unfortunately older.
Hope your health reflects your healthy choices well and that you haven’t gotten old in the meantime while seeing how it goes. :)


Lol I assumed OP meant mock as in “You want to go to tiktok? What a horrible site. You have bad taste.”


Yeah, and the flavours helped because it wasn’t as satisfying as cigarettes but you could have fun flavours to at least make it enjoyable and interesting enough to not just want a cigarette instead (or in addition).


Relatively safe != safe, it just means safer than the thing it’s being compared to. It’s likely still a big improvement over smoking.


My point isn’t AI is good or bad, but that the difference is how much it gets leaned on.
In this case, it’s how AI is (assumed to be) used at MS vs how it was used in the OP.
MS appears to be leaning heavily on generative AI to produce code. In my own experience, that is pretty good these days at responding to a prompt with a series of actions that achieves the intent of the prompt, but is bad at creating overall cohesion between prompts. It’s like it’s pretty good at making lego blocks, but if you try putting them all together, it looks like you built something from 50 different sets, plus the connections between the blocks are flawed enough that it’s liable to collapse the more you add.
In the OP, AI is being used to submit bug reports. This one can be thought of as using an AI to write a book report instead of using an AI to write the book in the first place. If the AI writes a shitty report, it has zero effect on the book itself. But the AI might just include a list of all the typos in its report, which is useful for correcting the errors in the book.
Also, game studios forgetting to replace placeholders is yet another issue, one that’s more about the process itself, though it can also show a lack of attention to detail and maybe indicate that an AI was handling more of the process. A decent system would flag every asset as either placeholder or final and then include a review of all flags before publishing to catch something like this.
So this isn’t a general defense of using AI, I’m just saying that it’s possible to use it without everything it touches turning to slop, but that it often isn’t used like that, resulting in slop.
And it’ll be easy to fall into the slop trap, what with how it’s always making leaps and bounds improvements that help with instances of it fucking up but don’t resolve the fundamental issues that will probably mean LLMs will always produce some sort of slop (because everything boils down to some sort of word association, just with a massive set of conditional probabilities encoded into it that gives it the illusion of understanding).


The same reason any personal project (and I’m not using that term to diminish what linux projects are, but to say that the people working on them do it because they want the project to progress, not because of any financial incentive) can do better than a commercial project: that’s where the passion is.
Someone just looking to get paid is more likely to say “ok this is good enough” and move on to the next thing. They are more likely to have managers breathing down their necks to get something done by some arbitrary deadline, too.
It’s why indie games have been able to compete with AAA games. The latter are following a formula to get paid, plus are more willing to make compromises in the name of either saving costs or increasing revenue. The former just want to make their fun idea reality.
Also, MS has invested a ton of money into AI and seem to be getting desperate for a return on that. Which means there’s a certain amount of denial about the quality. It’s not just a tool to them, but a tool they desperately need to work and prove it’s worth throwing a ton of money at.
But for anyone that it’s simply a tool for, it can be useful. They are great rubber duckies. Like my last interaction with one was a case where it did horribly and was completely wrong about what “we were discussing”, but I still got to the right conclusion despite it because going through the conversation helped me think it through.
And though it makes a lot of mistakes, its feedback isn’t always wrong. The fact that it can rehash previous things from its history means it’s good at spotting new instances of problems that have already been solved. So accepting bug reports should be fine, just with the understanding that they each need to be looked at and some reports will need to be rejected because they are wrong.


Sorry aliens, but you’ll need to go back to your science labs because we have since discovered how to compact the discs themselves. No more data discs the size of records; we can fit an entire 70 minutes’ worth of full fidelity (to our ears) digital audio, and then we surpassed even that and managed to get it up to 74 minutes! 700 times 2 to the power of 23 bits of arbitrary data (or maybe it’s just 700 times 8,000,000, we never did figure out the concept of honestly describing things marketers want to sell), all within our outstretched fingers or around a single extended finger.
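For anyone checking those two capacity readings, the gap between the binary and the marketing-decimal interpretation of “700 MB”, taken as bit counts, works out like this:

```python
# "700 MB" as binary mebibytes vs. decimal megabytes, expressed in bits.
binary_bits = 700 * 2**23        # 700 * 8,388,608 bits = 5,872,025,600
decimal_bits = 700 * 8_000_000   # the marketer-friendly reading = 5,600,000,000

# The binary reading is about 4.9% larger than the decimal one.
print(f"{binary_bits / decimal_bits - 1:.1%}")  # 4.9%
```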


It’s about 300 samples for an estimate of the distribution with 95% confidence, iirc. That assumes the samples are representative (unbiased). And 95% confidence doesn’t mean it’s within 95% of reality; it means 5% of tests run in such a way would be expected to be inaccurate. There’s no way of knowing for sure whether this particular sample is one of them, because even a meta study will have such an error rate. You can increase the confidence with more samples or studies, just never to 100% unless you study every possible sample, including future ones.
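One common way rough figures like that are derived is by inverting the normal-approximation formula for a proportion to get a required sample size. A sketch, where the worst-case p = 0.5 and z = 1.96 are my assumptions and the exact n depends on the margin of error you pick (which is why round numbers like 300 float around):

```python
import math

def required_samples(margin, p=0.5, z=1.96):
    """Samples needed so a 95% CI for a proportion is within ±margin."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(required_samples(0.05))   # ±5% needs 385 samples
print(required_samples(0.10))   # ±10% needs 97 samples
```

Note the quadratic scaling: halving the margin of error quadruples the samples you need, and nothing short of sampling the entire (including future) population gets you to certainty.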


Figured someone else would give you a serious answer, but since no one has, check out the movie Napoleon Dynamite. It’s a great awkward slice of life that’s really hard to describe but hilarious.


Though what if honey bees are only so docile because they don’t have the energy to be assholes and this is the first step in a total bee world takeover?


I heard that if you vote for him, all your wildest dreams will come true.


I believe it’s modified to give more room for his stub to operate the trigger. Or maybe he can fit a bit of it in there without modification.


Guessing they are getting complaints from their AI trainer clients. And guessing the AI trainer clients are using “training on AI output” as an excuse to avoid admitting that there are fundamental issues behind the “often makes shit up” problem that won’t be solved.
I’ll cross that bridge when I get there. YouTube is already near the bottom of my list for source of information and I already have more sources of entertainment than I have time, so if they manage to win their war on ad blockers, I have no problem never visiting the site again.