I’ve never seen a calculator being wrong, and I’m genuinely curious what you’re talking about.
That’s funny because I grew up with math teachers constantly telling us that we shouldn’t trust them.
Normal calculators without arbitrary precision have the same problems you get with floating point types in a programming language. E.g. 0.1 + 0.2 == 0.3 evaluates to false in many languages, and adding a very small number to a very large one can just return the large number unchanged.
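In Python, for example (the same IEEE-754 double behaviour shows up in most languages):

    >>> 0.1 + 0.2 == 0.3
    False
    >>> 0.1 + 0.2
    0.30000000000000004
    >>> 1e16 + 1.0 == 1e16   # the small addend is absorbed completely
    True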
If you’ve only used CAS calculators or similar you might not have run into these, since those often do arbitrary precision arithmetic, but the vast majority of calculators are not like that. They might have more precision than a 32-bit float, though.
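A rough sketch of what an exact, arbitrary-precision engine does differently, using Python's standard library as a stand-in for a CAS:

    >>> from fractions import Fraction
    >>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)   # exact rationals, no rounding
    True
    >>> from decimal import Decimal
    >>> Decimal("0.1") + Decimal("0.2") == Decimal("0.3")      # decimal digits stored exactly
    True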
Now, that’s a fine hair to be splitting.
Only when people use the wrong input: garbage in, garbage out.
In the same vein, I can’t think of any instance where Excel calculated something wrong unless there was a fault in a formula I wrote.
Except if you’re calculating dates from a long time ago. It famously takes some liberties with leap years.
https://devblogs.microsoft.com/oldnewthing/20160628-00/?p=93765
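For anyone who hasn't run into it: Excel's 1900 date system includes a Feb 29, 1900 that never existed (kept for Lotus 1-2-3 compatibility), so its serial numbers run one ahead of a correct day count from March 1, 1900 onward. A rough sketch of the discrepancy in Python (day_serial is just an illustrative helper, not anything Excel exposes):

    from datetime import date

    def day_serial(d: date) -> int:
        # Serial 1 = 1900-01-01, counting only days that actually existed
        return (d - date(1900, 1, 1)).days + 1

    print(day_serial(date(1900, 2, 28)))  # 59, matches Excel
    print(day_serial(date(1900, 3, 1)))   # 60, but Excel says 61 (it counts the phantom Feb 29)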
https://en.m.wikipedia.org/wiki/Pentium_FDIV_bug
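That one is a case where the hardware itself divided wrong. The widely cited test case, sketched in Python (the "flawed" values are what the affected Pentiums reportedly returned; any modern FPU gives the correct result):

    x, y = 4195835.0, 3145727.0
    print(x / y)            # ~1.3338204 on a correct FPU; the flawed Pentiums gave ~1.3337391
    print(x - (x / y) * y)  # 0.0 here; on a flawed Pentium this famously came out as 256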