This ain’t the '90s anymore; CPUs have been fast for a while now. If your mid-development partial compilation time isn’t basically negligible with the CPUs we already have, your build script is probably fucked up or the module you’re working on is way too large. You should rarely be working on something with such cross-cutting concerns that you legitimately need to recompile vast swathes of the codebase at once.
I think it depends on what you’re working on. If you’re working on some JavaScript web app, you could say that CPUs are “good enough”. But even then, larger, more complicated apps will get annoyingly slow to “compile”.
It’s when you’re working with larger and more complicated Rust or C (or whatever) codebases that compile time really matters.
All this being said, for me CPU improvement is a good thing. It was good in the ’90s and it’s good now.
If you are in the middle of doing a unit of work, iteratively making small changes to the code, compiling, and testing them, those compile times should be small too. If a small change in one file triggers your entire project to recompile, you fucked up the Makefile or structured the whole program poorly or something like that. There’s something wrong that a faster CPU will only mask, not fix.
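To make that concrete, here’s a minimal sketch (made-up file names, generic cc invocations, not anyone’s real build) of the Makefile structure that causes whole-project rebuilds versus one that doesn’t:

```make
# A minimal sketch, not anyone's real build; file names are made up.
SRCS := main.c parser.c net.c
OBJS := $(SRCS:.c=.o)

# Anti-pattern: one monolithic rule. Touch any single .c and everything
# recompiles, because the only target depends on every source:
#
#   app: $(SRCS)
#   	cc -o app $(SRCS)

# Per-file compilation plus a link step: editing net.c rebuilds only
# net.o and then relinks, which is cheap.
app: $(OBJS)
	cc -o $@ $(OBJS)

%.o: %.c
	cc -c -o $@ $<
```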
C’mon, faster compilation never hurts. It’s not just build scripts - think of development where it eats plenty of seconds each time you start debugging.
I am thinking of development.
In most cases, you should only need to recompile the particular file you’re working on because interfaces should be changing a lot less frequently than implementations.
No single file should be so large that it takes a long time to compile by itself.
If other files are getting recompiled anyway even though nothing about them actually changed, the dependency resolution in your Makefile (or whatever) is screwed up and you need to fix it.
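In GNU Make terms, one common way to get that right is to let the compiler emit the header dependencies for you. A sketch, assuming GCC or Clang and made-up file names:

```make
# Sketch of automatic header-dependency tracking with GCC/Clang
# (the -MMD/-MP flags); file names are made up.
CC     := cc
CFLAGS := -O2 -MMD -MP   # emit a .d file listing the headers each .c pulls in

SRCS := main.c parser.c net.c
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

app: $(OBJS)
	$(CC) -o $@ $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Pull in the generated dependency files so make knows, say, that
# parser.o depends on parser.h. Edit parser.c and only parser.o
# rebuilds; edit parser.h and only its includers rebuild.
-include $(DEPS)
```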
Point is, routine long compilation times after a small change are a code smell. There’s something wrong that a faster CPU will only mask, not fix.
You can often just slap a compiler cache on a project and get a 20-150x speedup, but when the original compile time was 45 minutes, it’s still slow enough to disrupt your workflow. (Though I suspect you may be talking about some manual method that may be even faster. Are those really common enough that you would call the lack of one a code smell?)
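For what it’s worth, ccache is the usual drop-in here; a sketch of the low-effort way to wire it in, assuming the Makefile uses $(CC) rather than hard-coding a compiler:

```make
# Sketch: drop ccache in front of the compiler. Assumes ccache is
# installed and the existing rules use $(CC).
CC := ccache cc

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<
```

ccache roughly keys on the preprocessed source plus the compiler and its flags, so recompiling code it has already seen is a near-instant cache hit; the 45-minute cold build is still the cold build, though.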
Yep sure mate, have a go at an enterprise-scale mobile app or .NET and then let’s chat.