With modern processors focusing heavily on parallelism, it's extremely important to make sure that your build system supports parallel building, testing and installation, especially for the sake of distributions such as Gentoo Linux and FreeBSD ports, which build software from source.
While automake's default rules are designed to allow a high level of parallelisation, there are a few important details that have to be considered to make your build system properly parallelisable.
The first rule of thumb is to make use of the non-recursive features discussed in Section 2, “Non-recursive Automake”. Since make can only run in parallel rules that are in the same directory, while directories are built serially, moving everything into a single Makefile.am lets everything run in parallel.
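As a sketch of the difference, assume a hypothetical project with a src subdirectory (the directory, program and file names here are placeholders, not taken from the text above): a recursive setup lists the directory in SUBDIRS, while a non-recursive one references the sources by path from the top-level Makefile.am.

```Makefile
# Recursive setup: src is built by a separate, serial make invocation.
#   SUBDIRS = src

# Non-recursive alternative: a single top-level Makefile.am that
# references sources by their subdirectory path, so make sees all
# objects at once and can schedule them in parallel.
bin_PROGRAMS = src/myprog
src_myprog_SOURCES = src/main.c src/util.c
```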
Parallelising the make install process is something that is often overlooked, simply because it's mostly an I/O-bound task rather than a CPU-bound one. Unfortunately, in some cases libtool has to relink libraries during install, when for whatever reason the destination directories don't match those used during build. Since linking is a CPU-bound task, running the install phase in parallel can save you time on multi-core systems.
There are very few issues that you need to consider when dealing with parallel install, as the only tricky part is the handling of custom install targets, such as install-exec-local. When writing these targets, it's common to assume that the target directory has already been created. This is correct both when the targets are executed serially (as the local targets are executed after the main ones by default) and when not using DESTDIR (as most of the time the directory is already present on the live filesystem).
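As a small shell sketch of how DESTDIR composes with a configured installation directory (the paths here are arbitrary examples, not from the text above):

```shell
# DESTDIR is simply prepended to the configured directory, so a
# staged install lands under the staging root instead of the live
# filesystem.
DESTDIR=/tmp/stage          # staging root chosen by the packager
bindir=/usr/local/bin       # configured installation directory

target="${DESTDIR}${bindir}"
echo "$target"              # /tmp/stage/usr/local/bin
mkdir -p "$target"          # the install rules populate this tree
```

With an empty DESTDIR the same rules install straight into the live filesystem, which is why the missing-directory problem usually only shows up during staged installs.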
Example 2.5. Common case of broken install-exec-local target (directory assumed to be present)

bin_PROGRAMS = multicall

install-exec-local:
	cd $(DESTDIR)$(bindir) && \
		$(LN_S) multicall command1 && \
		$(LN_S) multicall command2
In this case, the multicall executable changes its behaviour depending on the name it is invoked with. The build system intends to create multiple symlinks for it during install, but the first call to cd is likely to fail during a parallel make install execution, since nothing guarantees that the directory has been created yet.
There is only one real way to solve these situations: make sure that the directory exists before proceeding. A common mistake here is to test whether the directory exists and then call mkdir to create it. This can still fail if, due to parallel execution, the directory is created after the test but before the mkdir call.
Example 2.6. Common case of broken install-exec-local target (directory created on a race condition)

bin_PROGRAMS = multicall

install-exec-local:
	test -d $(DESTDIR)$(bindir) || mkdir $(DESTDIR)$(bindir)
	cd $(DESTDIR)$(bindir) && \
		$(LN_S) multicall command1 && \
		$(LN_S) multicall command2
This tries to solve the issue noted in the previous example, but if the Makefile.am is complex enough, parallel target execution can still cause $(bindir) to be created after the test but before the mkdir call, making the rule fail.
All modern mkdir implementations, though, provide the -p option, which not only creates the directory's parents as needed, but also considers it a success if the directory already exists, contrary to the default behaviour.
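The difference can be seen directly from the shell (/tmp/mkdirp-demo is an arbitrary example path):

```shell
# Plain mkdir vs mkdir -p on an existing directory.
dir=/tmp/mkdirp-demo
rm -rf "$dir"

mkdir "$dir"                    # succeeds: the directory is new
mkdir "$dir" 2>/dev/null \
  || echo "plain mkdir fails on an existing directory"
mkdir -p "$dir"                 # succeeds: -p tolerates existing dirs
mkdir -p "$dir/a/b/c"           # succeeds: -p creates missing parents
```

Because mkdir -p succeeds whether or not the directory is already there, it is safe no matter in which order parallel install rules run.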
To make use of mkdir -p, you have to make sure it is supported by the current operating system; autoconf provides a simple way to test for its presence, as well as a replacement script if that isn't enough, via the AC_PROG_MKDIR_P macro. After calling that macro from your configure.ac file, you can then use $(MKDIR_P) to transparently call either the program or the replacement script.
Example 2.7. Correct install-exec-local using AC_PROG_MKDIR_P

bin_PROGRAMS = multicall

install-exec-local:
	$(MKDIR_P) $(DESTDIR)$(bindir)
	cd $(DESTDIR)$(bindir) && \
		$(LN_S) multicall command1 && \
		$(LN_S) multicall command2
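For completeness, a minimal configure.ac matching such a Makefile.am might look like the following sketch (the project name and version are placeholders); note that, just as AC_PROG_MKDIR_P provides $(MKDIR_P), it is AC_PROG_LN_S that provides the $(LN_S) variable used in the install rules:

```
AC_INIT([multicall], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_PROG_LN_S      # provides $(LN_S)
AC_PROG_MKDIR_P   # provides $(MKDIR_P)
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```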