On Fri, Sep 22, 2023 at 10:00:56AM +0200, Страхиња Радић wrote:
> How does it decide when rebuilding is needed? Does it track dependencies and
> how?
IMO in small projects, these are problems that should be *avoided entirely*
instead of creating them and then solving them. E.g. you can have a simple
unity build [0] that builds everything in a single translation unit via
a single compiler command:
$ cc -o exe src.c
This requires:
- No build system whatsoever.
- No dependency tracking.
- No need to run `make clean` or similar when changing compiler flags
(e.g. adding sanitizers, testing with more aggressive warning or
optimization flags, etc).
- Usually faster full builds (even compared to parallel make) due to not
having to re-parse the same system headers over and over, along with
avoiding other fixed per-TU costs.
- Nearly as fast as incremental builds, or at least not so much slower
that it hampers developer productivity (reminder: for small projects).
- Better warnings due to the compiler (and static analyzers) being able
to see across TU boundaries. [1]
- A simple build makes it easier to use other tools on the project, such
as static analyzers or fuzzers.
Some tend to argue that this "doesn't scale", but as I said, this is for
small projects. And the chances of your small project turning into the
next Linux kernel [2] with 30M LoC are probably not high. So don't create
new *actual problems* by trying to solve an imaginary one.
[0]: https://en.wikipedia.org/wiki/Unity_build
[1]: in a conventional multi-TU build you can add LTO to overcome this.
But then your incremental builds will usually become _significantly
slower_ because LTO (as of now at least) makes the linking process
extremely slow.
[2]: *even in* large projects, unity builds are still useful to speed up
compilation. For example, the well-known Linux kernel "fast header"
patches included reducing the number of TUs by "consolidating .c files":
https://lore.kernel.org/lkml/YdIfz+LMewetSaEB_AT_gmail.com/T/#u
- NRK
Received on Fri Sep 22 2023 - 11:09:13 CEST