[Figure: Icecc distributed compiler network — scheduler, worker nodes, and job distribution flow]

Distributed C/C++ Compilation with Icecc

Icecc — the icecream distributed compiler — takes compilation jobs from a local build and distributes individual translation units across worker machines on the same network, returning object files transparently to the originating machine's linker. For large C++ codebases, the reduction in wall-clock build time scales with the number of CPU cores available across all participating machines, not just the local machine. This guide covers the three-component architecture (scheduler, worker daemons, compiler wrapper), the critical step of packaging and distributing the compiler toolchain environment, monitoring with icemon, and the practical edge cases that cause jobs to silently fall back to local compilation. The context is a development network of Linux machines building a substantial C++ codebase. This page is part of the how-to section; for broader developer tooling context, see the developer notes.


Architecture overview

→ Short Answer

Icecc has three components. The scheduler (icecc-scheduler) runs on one machine and assigns incoming compilation jobs to the least-loaded worker. Each worker runs iceccd, which advertises its available CPU slots to the scheduler and executes assigned jobs inside a chroot. The compiler wrapper on the submitting machine intercepts cc and c++ invocations, sends preprocessed source to the scheduler, receives the compiled object file, and returns it to the build system — which has no visibility into any of this.
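Because the wrapper works by interception, it only takes effect when its directory shadows the real compilers. A minimal sanity check, assuming the Debian/Ubuntu wrapper location /usr/lib/icecc/bin:

```shell
# The icecc wrapper directory must come before the system compilers in PATH,
# otherwise builds invoke /usr/bin/cc directly and nothing is distributed.
export PATH=/usr/lib/icecc/bin:"$PATH"
echo "${PATH%%:*}"   # first PATH entry must be the wrapper dir: /usr/lib/icecc/bin
```

On a machine with icecc installed, `which cc` should then resolve to /usr/lib/icecc/bin/cc rather than the system compiler.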

The key design constraint is that workers execute the compilation in a chroot environment using the same compiler version as the submitting machine. Icecc handles this by packaging the submitting machine's compiler toolchain into a compressed tarball (via icecc-create-env) that gets transferred to each worker the first time a job is scheduled there. After initial transfer, the environment is cached on each worker.


Installing and configuring the scheduler

The scheduler is single-instance per network. It does not need to be the fastest machine — it handles only job assignment metadata, not compilation data. Compilation data flows directly between submitting machine and worker.

Debian/Ubuntu — install and start scheduler
sudo apt install icecc
# Start and enable the scheduler:
sudo systemctl enable --now icecc-scheduler
# Verify it's listening:
sudo ss -tulnp | grep 8765

The scheduler accepts daemon and monitor connections on TCP port 8765 and answers discovery broadcasts on UDP port 8765. Workers discover the scheduler via UDP broadcast on the local subnet — no manual IP configuration required for a flat LAN. On segmented networks or VLANs, the scheduler address must be set explicitly in each worker's /etc/icecc/icecc.conf.
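On a segmented network, the explicit scheduler setting can be appended to the worker config. A sketch assuming the Debian/Ubuntu icecc.conf variable name ICECC_SCHEDULER_HOST and a placeholder address:

```shell
# Point this worker at the scheduler explicitly (broadcast discovery does
# not cross subnets). 192.0.2.10 is a placeholder — use your scheduler's IP.
echo 'ICECC_SCHEDULER_HOST="192.0.2.10"' | sudo tee -a /etc/icecc/icecc.conf
sudo systemctl restart iceccd
```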


Configuring worker daemons

Every machine you want to participate in compilation needs iceccd running. This includes the machine running the scheduler — there's no reason to waste its cores.

Debian/Ubuntu — install and start worker daemon
sudo apt install icecc
# Worker config: set number of max parallel jobs (default: number of CPUs)
echo 'ICECC_MAX_JOBS=8' | sudo tee -a /etc/icecc/icecc.conf
sudo systemctl enable --now iceccd
⚙ Compatibility Note

The ICECC_MAX_JOBS setting should not exceed the machine's hardware thread count. Setting it higher oversubscribes the CPU, adding context-switch overhead and sustained thermal load without a proportional throughput benefit. On machines that also run interactive sessions, setting it to cpu_count - 2 keeps the machine usable during heavy builds.
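The cpu_count - 2 rule can be computed directly from nproc; a small sketch (the floor of 1 is an assumption for very small machines):

```shell
# Derive a worker job limit that leaves two threads free for interactive
# use, never going below one job.
jobs=$(( $(nproc) - 2 ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "ICECC_MAX_JOBS=$jobs"   # append this line to /etc/icecc/icecc.conf
```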


Packaging the compiler environment

This is the step most documentation underemphasises. Workers execute compilation in a chroot, and that chroot is populated from the compiler tarball produced by icecc-create-env. If the tarball isn't correctly generated and distributed, jobs silently fall back to local compilation with no error message — the build completes but runs entirely on the local machine.

Create and verify compiler environment tarball
# Create environment tarball for the local GCC version:
icecc-create-env --gcc $(which gcc) $(which g++)
# Output: something like /tmp/icecc.XXXXXX.tar.gz
# Test by listing contents:
tar -tzf /tmp/icecc.*.tar.gz | grep -E 'gcc|g\+\+'

Set ICECC_VERSION in your build environment to point to this tarball, or set it in /etc/icecc/icecc.conf for system-wide use. Workers cache the environment by hash — updating the tarball after a compiler upgrade automatically pushes the new environment to workers on the next job submission.
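Since icecc-create-env writes to /tmp, which is often cleaned on reboot, it helps to move the tarball somewhere stable before pointing ICECC_VERSION at it. A sketch — the directory and the file name gcc-native.tar.gz are illustrative:

```shell
# Keep the environment tarball at a stable path so ICECC_VERSION does not
# dangle after /tmp cleanup.
mkdir -p "$HOME/icecc-envs"
# mv /tmp/icecc.*.tar.gz "$HOME/icecc-envs/gcc-native.tar.gz"   # after icecc-create-env
export ICECC_VERSION="$HOME/icecc-envs/gcc-native.tar.gz"
echo "$ICECC_VERSION"
```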

⚠ Common Pitfall

The most common reason distributed builds silently run locally is a mismatched or missing ICECC_VERSION path. Check the Icecc scheduler log (journalctl -u icecc-scheduler) while triggering a build — if jobs appear in the scheduler log as "local only", the environment tarball is not being provided. Also check that the compiler wrapper symlinks are first in PATH: which cc should resolve to /usr/lib/icecc/bin/cc, not the system compiler directly.


Integrating with build systems

Make integration uses the -j flag with a count sized to the total available distributed cores, not just local cores:

Build with distributed job count
# Show which scheduler the wrapper connects to:
icecc --show-scheduler 2>/dev/null

# Prepend the wrapper directory (or set CC="icecc gcc" CXX="icecc g++"),
# then use a high -j value — icecc queues jobs appropriately:
export PATH=/usr/lib/icecc/bin:$PATH
make -j$(( $(nproc) * 4 ))

# Or set the job count via MAKEFLAGS in your shell profile:
export MAKEFLAGS="-j32"

CMake projects can set the RULE_LAUNCH_COMPILE global property to route compilation through Icecc (linking is never distributed, so routing RULE_LAUNCH_LINK through Icecc gains nothing). The icecream package also provides an icerun wrapper for serialising non-distributable steps.
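With CMake 3.4 or later, the per-language compiler launcher variables are a simpler alternative to the launch-rule properties. A sketch — the build directory name and -j value are arbitrary:

```shell
# Route every compile through icecc without touching CMakeLists.txt.
cmake -S . -B build \
  -DCMAKE_C_COMPILER_LAUNCHER=icecc \
  -DCMAKE_CXX_COMPILER_LAUNCHER=icecc
cmake --build build -j 32
```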


Monitoring with icemon

icemon (the Icecc monitor) provides a real-time graphical view of job distribution across workers. Each machine in the network appears as a node, jobs flow as animated arcs, and idle/busy state is visible at a glance.

⬡ Observed Behaviour

In practice, the icemon visualisation is most useful during initial setup to confirm that jobs are actually leaving the submitting machine. A common early-setup pattern is seeing all jobs remain on the local node — the graph immediately shows this, whereas a build log alone won't reveal it. Once distribution is confirmed working, the monitor is less necessary for day-to-day use but useful for capacity planning: you can see which workers are consistently under or over capacity during large builds.

Install and run icemon
sudo apt install icemon
icemon

Practical expectations

On a network with four 8-core machines and a project generating several hundred translation units, a cold build that takes twelve minutes locally often completes in three to four minutes with full distribution. The benefit scales with translation unit count and is most pronounced for large C++ templates-heavy codebases where each .cpp file takes multiple seconds to compile. Link steps are not distributed — linking still runs locally — so the speedup is bounded by link time. Incremental builds that recompile a handful of files see less benefit, though any remote job that saves a few seconds of local CPU time is still useful during interactive development cycles.