What makes a good build
I’ve seen a number of different approaches to building programs, and between writing about the specifications of an Android firmware build machine, and how to create a cost-effective build farm, I thought it’d be worth covering a few things that I believe are important when trying to create a scalable build for your software.
What is a “build”?
When I talk about a build I mean the instructions that tell a machine how to take your source code and create an executable program. Not all languages need a build; Python, for example, allows you to write your program in a file and run it, but many languages (Java, Kotlin, C, Swift, etc.) need a set of instructions to convert the source code from a human-readable form into a machine form that users can then run.
Three key things
There are three things I look for in a “good” build: it should be Reliable, Reproducible, and Rigorously defined. With these three things you can reduce the amount of time developers spend waiting for a build to complete, and take advantage of some useful build tool features.
Builds have to be reliable. Having a build which fails occasionally is like having a messaging app which doesn’t always send your messages: it’s just not doing its job.
One of the easiest ways of improving reliability is removing complexity. If your build system is distributing config files, generating files on a conditional basis, or anything else which is not focused on taking the source code and making a program from it, you’re in a situation where you’re using a hammer as a screw-driver, and while that can work, it can also easily go wrong.
Build tools like Bazel and Buck use a simplified build language which makes it hard to do things which shouldn’t be part of the build. The Buck team have a great summary on why they moved from Python to Skylark after some first-hand experience of how using a more feature-rich language contributed to a higher maintenance overhead.
Tools like Gradle take a different approach and let developers write anything they want in a supported language (Groovy or Kotlin in the case of Gradle). This can give the appearance of making developers more productive if you’re only focused on how much code they’re churning out, but it can also create a long-term maintenance cost as the platform and libraries that the build relies on are upgraded with bug fixes or support for new features (as the Buck team found out). I’m not saying that Gradle is a bad tool, but it’s easy to use a good tool in a bad way when there’s little stopping you from doing so.
It’s also easy to create unnecessary complexity, and reduce the chances of getting a reliable build, by over-using plugins and extensions. There are cases where plugins absolutely add value and are the right thing to use, but using them everywhere, even when they’re not needed, can leave you susceptible to build bugs which only occur as your build gets larger.
Many build tools offer features to help with testing because they can make running tests quick and effective, but this doesn’t mean that you should write a complex test harness in your favourite build tool’s language. Build tool developers have identified that they can add a lot of value to the testing process and so provide useful features to accommodate it, but lots of older build tools (e.g. make) have no special features related to testing your code. Creating fast, effective test runs is one of the very limited situations beyond compilation where a build tool can add value, and just because your build tool will also run tests, that doesn’t mean it should also be configuring your IDE.
Most folk feel good after solving a complex problem, but when you’ve solved a problem which is only loosely related to translating source code into an executable, and your solution uses nothing but a build tool, I’d recommend thinking about the chorus of one of my favourite songs, Hedonism by Skunk Anansie: “Just because you feel good, it doesn’t make it right.”
You shouldn’t have to perform every step of your build every time you build.
One of the most common, and most effective, ways of speeding up any computational task is to introduce a cache. In a build this allows you to go from source code to a more machine-friendly representation once, and then use the output of that process in future builds, but for it to work well your build needs to be reproducible. Caches are awesome, but they come with a problem: they need to know which changes make a cache entry invalid, to ensure they don’t supply old, incorrect data, and that’s where issues with your build can come in.
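As a rough sketch of the idea (the function names here are illustrative, not any particular tool’s API), a build-step cache can key each entry on a hash of every input that can affect the step’s output, so an entry is reused only when nothing relevant has changed:

```python
import hashlib

def compile_sources(sources: dict[str, bytes]) -> bytes:
    # Stand-in for the expensive work of a real build step.
    return b"".join(sources[name] for name in sorted(sources))

def cache_key(sources: dict[str, bytes], tool_version: str) -> str:
    # Hash every input that can change the output: the sources and the
    # tool version. Sorting the names means iteration order can't leak
    # into the key.
    h = hashlib.sha256()
    h.update(tool_version.encode())
    for name in sorted(sources):
        h.update(name.encode())
        h.update(sources[name])
    return h.hexdigest()

cache: dict[str, bytes] = {}

def build_step(sources: dict[str, bytes], tool_version: str) -> bytes:
    key = cache_key(sources, tool_version)
    if key not in cache:
        cache[key] = compile_sources(sources)  # only runs on a cache miss
    return cache[key]
```

If anything that influences the output (the current time, an environment variable, a tool upgrade) is left out of the key, the cache will happily serve stale results — which is exactly the reproducibility problem described below.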
It’s easy to end up with an unreproducible, uncacheable build: using things like the current date and time, using plugins which produce inconsistently ordered output (e.g. a zip file where the filenames are in a random order), or relying on the operating system supplying filenames in a consistent order, can all cause the output of a part of your build to change even though your source code hasn’t. If this happens you should ask yourself whether you can sort some data to give a predictable ordering, or use something fixed, like a commit ID, instead of the current time and date, so that there are fewer (or ideally zero) things changing when you build the same source code multiple times.
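To make the zip example concrete, here is a minimal Python sketch: the standard `zipfile` module records whatever order and timestamps you give it, so a reproducible archive has to sort its inputs and pin the per-entry timestamp itself:

```python
import io
import zipfile

def deterministic_zip(files: dict[str, bytes]) -> bytes:
    """Build a zip archive whose bytes depend only on its contents.

    Filenames are sorted (so filesystem or dict ordering can't leak in)
    and each entry's timestamp is pinned to a constant instead of "now".
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in sorted(files):
            info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
            zf.writestr(info, files[name])
    return buf.getvalue()
```

Archiving the same files now always produces byte-identical output, so a cache can key on it safely.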
When a build isn’t reproducible you end up with one of two situations: a fast build where the output may not be correct, because the cache isn’t invalidating entries when their inputs have changed (e.g. the current time or date), or a slower, but correct, build, because parts of the build can’t be cached.
Wherever you can, you should remove anything which changes on a per-build basis; that way a lot of your build can be cached, your builds will be fast, and the output will accurately reflect what would have been created if the cache didn’t exist.
Your build system should define everything it needs, but not everything it might need.
Build tools can do a lot of work for you, but they need you to tell them about the relationships between parts of your code. If you’re not giving them an accurate representation of how those parts relate to each other they’ll probably perform unnecessary work in your build (which will slow it down), and you may not be able to make the most of some useful features they have.
To give you an example, if we’ve got a build which has three components, A, B, and C, and we know that A uses APIs from B, and B uses APIs from C, then we should create a representation which says:

A -(depends on)-> B -(depends on)-> C

If we then make a change to the source code so A now also uses APIs from C, we should update our representation to show both dependency chains:

A -(depends on)-> B -(depends on)-> C
A -(depends on)-> C
You might be wondering why we need to do this; you might think that if A depends on B, and B depends on C, then A will have C available to it, and you’d be correct, but this is what’s known as a transitive dependency, and transitive dependencies make it difficult for you to get the most out of your build system.
Advanced build tools include the ability to run queries against the dependencies you’ve defined, which allows you to run a minimal set of tests to verify changes don’t break anything. If we look at Bazel’s reverse dependencies query feature, and look at our two graphs above, you should be able to see how, with the two chain representation, when you update C Bazel can identify that both A and B depend on it, and so the tests in A and B need to be run to identify any breaking changes. In the first, single chain, representation that relies on a transitive dependency between A and C, Bazel has no way of knowing whether A does or does not use APIs from C without knowing the languages used by A and C and performing static analysis on the code.
So by removing transitive dependencies you can create a test system which is fast because it can accurately identify the blast-radius of any change and limit the test run to only code which is likely to be affected by a change.
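The query itself is straightforward to sketch (this is an illustration of the idea, not Bazel’s implementation): walk the declared edges backwards from the changed module, and everything you reach is inside the blast-radius and needs its tests run.

```python
def rdeps(graph: dict[str, set[str]], changed: str) -> set[str]:
    """Return every module that directly or transitively depends on `changed`.

    `graph` maps each module to the modules it directly depends on,
    i.e. the explicit edges declared in your build files.
    """
    affected: set[str] = set()
    frontier = {changed}
    while frontier:
        current = frontier.pop()
        for module, deps in graph.items():
            if current in deps and module not in affected:
                affected.add(module)
                frontier.add(module)
    return affected

# The two-chain representation: A -> B -> C plus the explicit A -> C edge.
graph = {"A": {"B", "C"}, "B": {"C"}, "C": set()}
```

Here `rdeps(graph, "C")` returns both A and B, so a change to C schedules both sets of tests, while `rdeps(graph, "B")` returns only A — nothing outside the declared edges is ever pulled in.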
Similarly, you don’t want to define dependency relationships where they’re not needed. If you added a module D that is only used by B, you should only add a rule to show B -(depends on)-> D, and not add rules saying that A and C depend on D because, if D changes, you want your build tool to correctly identify B as the only thing affected by it.
The same is true for plug-ins and extensions: you should only define these where they’re needed. Using plug-ins and extensions on code where they’re not needed will slow down your build as well as potentially impacting the reliability of the build because it has become unnecessarily complex.
Some folk might argue that including dependencies, plug-ins, and extensions everywhere makes it easier to write code without having to worry about the build system. This, again, falls into the trap of not considering long-term maintenance: you need to think about what happens when the number of modules you have doubles. Will your build slow down further running unnecessary plug-ins? Will CI slow down because it can’t accurately determine the blast-radius of a change? Will you hit bugs because a plug-in hasn’t been tested at the scale you’re trying to operate at? I’ve seen all of these happen in real builds, and the easiest way to avoid them is by rigorously defining your build.

The book “Software Engineering at Google” has a good way to think about writing your build representation: “Code is read far more than it is written”, so optimizing your build representation for writing speed, rather than for the people and tools that may read it tens, hundreds, or thousands of times per day, is usually the wrong approach.
That’s all folks
Hopefully this has given you some food for thought about your builds. The Android Open Source Project gets these mostly right, which is why we can build a build farm for it that doesn’t need lots of really expensive machines to test each change. Your project is probably a lot smaller than the AOSP, but, if you keep these principles in mind, you can set your build up so that, as your project grows, you maintain your speed of development by making the most of the features of your build tool.