I mean, sure, that’s probably heavily influenced by the need for bundling for the frontend.
But it isn’t done blindly. Bundlers reduce the overall size of the code through minification and tree-shaking (removing unused modules). Bundling also removes the filesystem overhead of resolving and opening each individual module.
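For what it’s worth, here’s roughly what that looks like with a bundler like esbuild (the entry point and output path are made up, and the exact options depend on your project):

```js
// build.js - produce one bundled, tree-shaken, minified output file (hypothetical paths)
const esbuild = require("esbuild");

esbuild.build({
  entryPoints: ["src/index.js"], // assumed entry point
  bundle: true,                  // inline every import into a single file
  treeShaking: true,             // drop exports nothing actually imports
  minify: true,                  // shorten names, strip whitespace
  outfile: "dist/bundle.js",
}).catch(() => process.exit(1));
```

The single output file is also what gets rid of the per-module resolve-and-open cost at runtime.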
Would bundling be useful in other interpreted languages?
I suppose you may count JVM’s compilation to bytecode as being very similar.
Lots of people consider it an advantage that interpreted languages don’t require a separate compilation/bundling step. With most shell or Python scripts, you can just grab the code and run it.
But yeah, for anything more complex you need libraries, you likely want to distribute additional files like documentation or web content, and you may even want to hand the runtime itself to the user so that you don’t run into breaking changes between Python versions, for example. For all of these, some form of bundling step is necessary.
You can also, for example, pre-compile Python, to try to mitigate its slow execution speed…
A lot of bundling in the JS world is also either because of TypeScript, or because of transpiling to old JS so that it’s more compatible with older Node or browser versions. JS has gone through quite drastic changes in syntax, from vars and prototypes to let/const, ESM imports, classes, Promises, and async/await, a lot of which may not run in an old browser. It also helps runtime speed slightly, but that’s not something that matters all that much on a server, because you just wait a second or two for it to load.
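To make that concrete, here’s a rough before/after. The “legacy” version is only a sketch of the shape transpilers produce, not actual Babel or TypeScript output:

```js
// Modern syntax: const, arrow function, template literal, async
const greetModern = async (name) => `hello, ${name}`;

// Roughly the shape a transpiler targeting ES5 turns that into
// (sketch only; real output adds helper functions, and Promise
// itself still needs a polyfill on genuinely old browsers)
var greetLegacy = function (name) {
  return Promise.resolve("hello, " + name);
};

greetModern("world").then(console.log);
greetLegacy("world").then(console.log);
```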
JS is also kind of wild with how many libraries a given project may pull in, and how many minuscule files those tend to use, especially since each library also gets its own copy of every dependency too.
Python uses far fewer libraries and has a bytecode cache. PHP has code caching and preloading built in, so filesystem accesses are reduced. Bash usually doesn’t grow that big. Ruby probably just accepts a second or two to load up for the simplicity of the developer experience. Typically there’s one fairly large framework library and a few plugins and utilities, whereas a big Next.js project will pull in hundreds of libraries and tools.
A JS solution to a JS problem, really. It needs to run in potentially ancient browsers, so we just make a giant JS file. For the other languages, you can pretty much just install libraries right into the runtime. If bundling were that big of a deal, we’d read libraries right off a zip file, like Java does with its jar files by default.
Plus, if you really care, you can turn on filesystem compression on your project directory and get much the same size benefit as bundling.
The size of the code is mostly irrelevant if you’re not shipping it to clients over the network on every request. Short of truly gargantuan statically-linked binaries in compiled languages, anyway, and bundling isn’t really an applicable concept there. And similarly, the overhead of loading modules from the filesystem is a one-time cost that’s mostly irrelevant for server-side code that runs for days or weeks or years at a time.
On the other hand, the complexity overhead of adding the additional bundling step is a major drag on development productivity, debuggability, etc.
The size of the code still impacts your deployment. Moreover, if you’re using something like AWS Lambda, even small differences in package size can have a significant influence on cold start time.
I agree that an extra step is not desirable, but this would only be done for production deployments (and consequently pre-prod if you do that).
Lambda is certainly an interesting case for this, I’ll give you that. Outside of that, though, the impact on deployment speed is also not relevant; the bottlenecks for deployment are things like CI, canarying, even rolling blackout windows across AZs, etc. The actual time spent transmitting your build artifact over the network is completely negligible even at huge sizes.
Commercial projects that use interpreted languages often do this as part of their code obfuscation process. I hate it, because it makes modifying the code and understanding what’s happening under the hood harder.
Ideally you’d only do this for live deployments (production and possibly pre-production or staging / QA). For all other testing, you would keep it unbundled.
I guess it boils down to whether bundling can improve execution speed. On the web it would make page loads quicker.
In Node.js, at least, it does. Minification and tree-shaking can make the code significantly smaller. That can mean a shorter cold start in AWS Lambda, for example, or just a little less RAM overall. If your heap isn’t that large, that can be noticeable.
It also eliminates the filesystem overhead of resolving and loading modules.
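As a rough sketch of what that setup can look like for a Lambda handler (the entry point, Node version, and output path here are assumptions, not anything from this thread):

```js
// build-lambda.js - bundle a handler into one minified file before zipping it up
const esbuild = require("esbuild");

esbuild.build({
  entryPoints: ["src/handler.js"], // assumed handler entry point
  bundle: true,
  minify: true,
  platform: "node",
  target: "node18",                // match the Lambda runtime you deploy to
  format: "cjs",
  external: ["@aws-sdk/*"],        // the v3 SDK ships with the runtime, so it can stay external
  outfile: "dist/handler.js",
}).catch(() => process.exit(1));
```

The zip you upload then contains one small file instead of a whole node_modules tree, which is where the cold start and memory savings come from.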
- It can increase execution speed, because the alternative is usually hosting a file separately and making a separate HTTP request to get that file. Bundling allows you to deliver one bigger file instead of multiple small files, saving on network requests.
- Tree-shaking, minification, gzipping: to get the best performance you want all of your code and only your code. Also, gzip(filepart1 + filepart2) < gzip(filepart1) + gzip(filepart2) + … (see the sketch after this list).
But I think the biggest factor is:
- In JS it’s very common that you don’t have control over where your code is built or executed. With bundling you can usually specify a build target, so you can write your code in modern JS with the guarantee that the output will be usable on older systems.
Most of the time, if your code doesn’t work because of an old environment, the end user will just update their environment. But with most JS being written for the web, it was beneficial if not required to put in the effort to make your code run on older environments, because your target user base might not have the permissions or technical ability to upgrade the “environment” (aka the browser).
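On the gzip inequality, you can sanity-check it with Node’s built-in zlib; the two “file parts” here are obviously made up:

```js
const zlib = require("node:zlib");

// Two fake source files with the kind of repetitive text real code has
const part1 = Buffer.from("export function add(a, b) { return a + b; }\n".repeat(200));
const part2 = Buffer.from("export function sub(a, b) { return a - b; }\n".repeat(200));

const bundled  = zlib.gzipSync(Buffer.concat([part1, part2])).length;
const separate = zlib.gzipSync(part1).length + zlib.gzipSync(part2).length;

// One compressed stream pays the header cost once and can reuse
// matches across both parts, so bundled is typically the smaller number
console.log({ bundled, separate });
```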