[Aboriginal] Load limiter patch

Rob Landley rob at landley.net
Tue Oct 15 18:53:17 PDT 2013


On 10/01/2013 06:16:06 AM, Patrick Lauer wrote:
> Hi all,

Hello! Massively behind on email, sorry about that. Working through the  
backlog...

> I noticed some, ahem, suboptimal behaviour on machines with lots of
> memory when FORK=1 is used.

Used to do... more/buildall.sh?

> Problem: the current algorithm sets the number of parallel jobs to
> memsize/512MB. On a machine with e.g. 32GB that's 64 jobs, since there
> are 17 targets currently that gives a per-target parallelism of 3  
> which
> adds up to 51 parallel jobs ... on a machine with 4 or 8 CPU-cores.
> 
> You can imagine how that's, err, not happy.

The limit was really there to avoid swap-thrashing, not to optimize CPU  
utilization.
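(For reference, the limit Patrick describes could be sketched roughly  
like this. This is a sketch of the idea, not the attached patch itself,  
and it assumes a Linux /proc/meminfo and coreutils nproc:)

```shell
#!/bin/sh
# One job per 512 MB of RAM, capped at twice the CPU count.
# Assumptions: Linux /proc/meminfo (MemTotal in kB), nproc from coreutils.
MEM_KB=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
JOBS=$((MEM_KB / (512 * 1024)))      # memsize / 512MB
CPUS=$(nproc)
[ "$JOBS" -gt $((2 * CPUS)) ] && JOBS=$((2 * CPUS))  # ceiling at 2*CPU
[ "$JOBS" -lt 1 ] && JOBS=1          # always allow at least one job
echo "$JOBS"
```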

Some portions of the build are serialized, such as tarball creation and  
building the ancient bash package that fails with -j higher than 1.

Anything doing link time optimization is also going to grind through  
very fast "compilation" stages that are barely more than the "gcc -E"  
preprocessing distcc does, and then do all the actual work when the  
linker calls a plugin back into the compiler to do the code  
generation on the whole tree at once: this deferred code generation is  
unfortunately a single processor operation. The FSF loonies did this  
rather than rewrite all the makefiles out there to feed all the *.c  
files to the compiler at once instead of calling the compiler a zillion  
times with one file at a time, and then the linker was the only thing  
that actually saw _everything_, so if they wanted the optimizer to see  
the whole program in one go they had to defer the actual code  
generation pass to have the linker do it, or admit  
"autoconf/automake/gmake" is a deeply broken paradigm. (Never admit  
mistakes and backtrack, you must DOUBLE DOWN. That's the FSF way.)

> So I tried to add a ceiling at 2*CPU, which seems reasonable to me -
> still allows for good parallelism while not stupidly overwhelming the
> system. I haven't yet tested it extensively, so there might be some
> stupid included - but it appears to select the right limit.
> 
> Enjoy the attached patch,

Does this take into account what the rest of the system is doing? I.e.,  
if buildall.sh fires off a dozen build.sh instances in parallel,  
they'll work it out among themselves automatically?

(I trust gnu code about as far as I can throw it, and thus haven't  
really told make to handle this for me, but if it actually works...)

Rob



More information about the Aboriginal mailing list