Your language isn’t broken, it’s doing floating-point math. Computers store numbers in binary, and most decimal fractions (0.1, for instance) have no exact binary representation, so they get rounded to the nearest value the format can hold. That tiny rounding error is why, more often than not, .1 + .2 != .3.
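A quick sketch in Python showing the classic symptom and two common workarounds (a tolerance-based comparison, and exact decimal arithmetic):

```python
import math
from decimal import Decimal

# 0.1 and 0.2 each get rounded when stored as binary floats,
# so their sum picks up a small error.
total = 0.1 + 0.2
print(total)          # 0.30000000000000004
print(total == 0.3)   # False

# Workaround 1: compare with a tolerance instead of exact equality.
print(math.isclose(total, 0.3))  # True

# Workaround 2: use Decimal for exact base-10 arithmetic.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

The same behavior shows up in any language using IEEE 754 doubles; only the workaround APIs differ.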
Why downvote? This is an often overlooked trap for programmers, especially those of the “data science” variety, but certainly not restricted to that subset.
Exactly, well said.