But all the downsides you mention are inherent to the problem, not to adding dimensions.
Managing scales
How do you add a nanometer and a meter without units? You have to choose between losing precision and converting everything to nanometers anyway. Types just give you reassurance that you did the right thing (as opposed to, say, converting m to gigameters instead of nanometers because you forgot a minus sign in the conversion).
Constants
A good unit system ships with the constants built in. I don’t need to look up the value of hbar in eV; I just write units.hbar and get the thing I mean, not a bare number that implicitly has a J or an eV next to it. And if you need a constant that is not in the units library, you only have to define it once. This really isn’t the problem you make it out to be.
Logarithms
I am very curious where you take logarithms of quantities with dimensions, and why you cannot normalize, e.g. to l/1m // length in meters, before taking the log. A unit system doesn’t prevent you from doing this; it just makes explicit what you implicitly did anyway (but didn’t tell anyone else about, because it’s implicit).
Performance
This is a fair point, and I will grant you that: if you do large-scale simulations, you need performance more than anything. But most math you do in science is a short script, and I don’t want to pour an hour into making sure I didn’t fuck up the units. I want to write the script, have the computer check the units, and then take a break while the computer spends an hour computing the result.
Units and namespaces
Not a big problem in my experience. The vast, vast majority of variables are derived. You don’t need to write v = 3 * meters / second; you have distance = 100 * meters and time = 33 * seconds somewhere in your code anyway, so you only need v = distance/time. The assignment to v is identical whether you have units or not. You only need to define the inputs, and only once.
Normalized form
The units library simply allows you to choose. print(fuel_efficiency.in(1 * liter / (100 * kilometer))) // 5 l/100km or print(fuel_efficiency.to_si_base_units()) // 5e-8 m**2
I have written code of the form xxxx_in_meV, yyyy_in_per_cm_cubed, etc before. It’s much worse than a proper unit system library.
Because if you don’t use a library, you may be able to use a plain number for your constants, but then you have to look up the value of each constant in some weird jumble of dimensions. It’s the difference between target_efficiency = 5 * liter / (100 * km) and target_in_metersquared = 5e-8 // 5 l/100km, converted to base units
Taking user input is much easier too. Just do units.parse(user_input), and the user is free to give um or nm or Å. No need for a prominent tip in the UI: “input must be given in um!”.
All that being said, a new language is not what I am looking for. I use Python’s sympy (though it’s not very ergonomic) for proper scripts and insect.sh if I need to convert something quickly.
EDIT: insect.sh tells its users to use numbat now, hahaha! numbat.dev has exactly the same UX though, so I’ll just recommend that now for those quick physics calculations. It really is an invaluable tool to have.
My point is that it’s mostly useless to use a language that supports these kinds of things, because the proper programming practice is to normalise and treat the edge cases at the interface. Once you are inside your own codebase, you use SI at the scale that makes sense, and that’s it. No more ambiguity, no more need to carry the unit around. The unit is implicit and standardised throughout your code, and you don’t have to carry around dead weight (in memory and computation) for nothing.
When something is enforced at the type level, it doesn’t require your memory and usually doesn’t require computation.
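One way to sketch zero-cost type-level enforcement in Python is `typing.NewType`: a static checker such as mypy distinguishes the types, but at runtime no wrapper object exists at all (the unit names here are illustrative):

```python
# Sketch: type-level unit tags with zero runtime cost.
from typing import NewType

Meters = NewType("Meters", float)
Seconds = NewType("Seconds", float)

def speed(distance: Meters, duration: Seconds) -> float:
    return distance / duration

d = Meters(100.0)
t = Seconds(33.0)
v = speed(d, t)

# Meters(100.0) IS the float 100.0 at runtime: no wrapper, no
# extra memory, no conversion cost. Swapping the arguments, as in
# speed(t, d), would be flagged by mypy before the code ever runs.
```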
Lately I have come to think that being explicit is mostly better than being expressive. So in this case, stating all the units might work better than having a concise program.