• UlrikHD@programming.dev · 1 year ago

    A round number is a number that is the product of a considerable number of comparatively small factors (Hardy 1999, p. 48). Round numbers are very rare. As Hardy (1999, p. 48) notes, “Half the numbers are divisible by 2, one-third by 3, one-sixth by both 2 and 3, and so on. Surely, then, we may expect most numbers to have a large number of factors. But the facts seem to show the opposite.”

    A positive integer n is sometimes said to be round (or “square root-smooth”) if it has no prime factors greater than sqrt(n). The first few such numbers are 1, 4, 8, 9, 12, 16, 18, 24, 25, 27, 30, 32, … (OEIS A048098). Using this definition, an asymptotic formula for the number of round integers less than or equal to a positive real number x is given by N(x) = (1 − ln 2)x + O(x/ln x) (Hildebrand).

    https://mathworld.wolfram.com/RoundNumber.html
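
    For anyone who wants to poke at that definition, here’s a minimal Python sketch (is_root_smooth is just my name for it, nothing standard):

        import math

        def is_root_smooth(n: int) -> bool:
            """True if n has no prime factor greater than sqrt(n)."""
            if n == 1:
                return True  # 1 has no prime factors at all
            m = n
            for p in range(2, math.isqrt(n) + 1):
                while m % p == 0:
                    m //= p
            # anything left after stripping all factors <= sqrt(n)
            # must be a single prime factor > sqrt(n)
            return m == 1

        print([n for n in range(1, 33) if is_root_smooth(n)])
        # [1, 4, 8, 9, 12, 16, 18, 24, 25, 27, 30, 32]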

    Alternatively, a round number can simply be a number rounded off to a given precision in whatever numeral system you are using. E.g. ten may be round if you are dealing with small numbers in decimal, but it wouldn’t be particularly round if you were dealing with large numbers, or with hexadecimal.
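
    A rough sketch of that reading, assuming “rounded to a given precision” means “nearest multiple of base**digits” (round_in_base is a made-up name):

        def round_in_base(n: int, base: int, digits: int) -> int:
            """Round a non-negative n to the nearest multiple of base**digits."""
            unit = base ** digits
            return ((n + unit // 2) // unit) * unit

        print(round_in_base(1337, 10, 1))       # 1340   (nearest ten)
        print(hex(round_in_base(1337, 16, 1)))  # 0x540  (0x539 to the nearest 0x10)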

  • jadero@programming.dev · 1 year ago

    In elementary school, I learned that the round numbers ended with 0. As I progressed, I came to realize that this was equivalent to saying that round numbers are integer-multiples of 10.

    Now that you’re asking the question, I would generalize that, so that round numbers are multiples of the base.

    In binary (converted to decimal), that would be 2, 4, 6, 8, …

    In octal (converted to decimal), that would be 8, 16, 24, 32, …

    … and so on.
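
    That generalization is just a divisibility test. A one-line Python sketch (my own naming):

        def is_round_in_base(n: int, base: int) -> bool:
            """Round in the 'multiple of the base' sense."""
            return n % base == 0

        print([n for n in range(1, 33) if is_round_in_base(n, 8)])
        # [8, 16, 24, 32]
        print(is_round_in_base(-20, 10), is_round_in_base(0, 7))
        # True True -- negatives and 0 come out round for free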

    I also have no problem with negative round numbers.

    It strikes me that 0 seems to be a canonical round number in that it’s a round number regardless of base.

    I wouldn’t object if you were to say that round numbers are integer powers of the base (10, 100, 1000, … for decimal). If your definition doesn’t include 0, then I’ll expect a good explanation for why not.
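
    The stricter “integer powers of the base” version can be sketched the same way (again a made-up name; it deliberately treats 0 as round, per the above):

        def is_power_round(n: int, base: int) -> bool:
            """Round in the 'integer power of the base' sense: 10, 100, 1000, ..."""
            if n == 0:
                return True   # 0 is round regardless of base
            n = abs(n)        # allow negative round numbers too
            while n % base == 0:
                n //= base
            return n == 1     # note 1 == base**0 also falls out as round

        print([n for n in range(1, 1001) if is_power_round(n, 10)])
        # [1, 10, 100, 1000]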

    But, truth be told, I could learn to live with any definition I can wrap my head around, as long as I can use my elementary school definition in polite company. :)

  • atheken@programming.dev · 1 year ago

    It really depends on context, but if I’m just talking about estimating something, it’s usually rounding a decimal to a whole number or, if it’s already a whole number, rounding it to the closest value divisible by 5 or 10.
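
    Something like this Python sketch, say (nearest_multiple is just my label):

        def nearest_multiple(x: float, step: int = 5) -> int:
            """Round x to the closest multiple of step (5 or 10 for estimates)."""
            return round(x / step) * step

        print(nearest_multiple(7.3))        # 5
        print(nearest_multiple(7.6))        # 10
        print(nearest_multiple(1337, 10))   # 1340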

    Other than that, it’s basically just about reducing significant figures to make rough estimates easier.
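
    For the significant-figures version, one common sketch (not the only way to do it):

        import math

        def round_sig(x: float, sig: int = 2) -> float:
            """Keep only the first `sig` significant figures of x."""
            if x == 0:
                return 0.0
            digits = sig - 1 - math.floor(math.log10(abs(x)))
            return round(x, digits)

        print(round_sig(1337.42))      # 1300.0
        print(round_sig(0.004567))     # 0.0046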