[Aboriginal] Load limiter patch
patrick at gentoo.org
Tue Oct 15 23:16:58 PDT 2013
On 10/16/2013 10:07 AM, Rob Landley wrote:
> On 10/01/2013 06:16:06 AM, Patrick Lauer wrote:
>> Hi all,
>> I noticed some, ahem, suboptimal behaviour on machines with lots of
>> memory when FORK=1 is used.
>> Problem: the current algorithm sets the number of parallel jobs to
>> memsize/512MB. On a machine with e.g. 32GB that's 64 jobs, since there
>> are 17 targets currently that gives a per-target parallelism of 3 which
>> adds up to 51 parallel jobs ... on a machine with 4 or 8 CPU-cores.
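The arithmetic described above can be sketched in shell (variable names here are assumptions for illustration, not the actual script's):

```shell
# Hypothetical sketch of the heuristic described above:
# total jobs = memory in MB / 512, then split across the build targets.
MEM_MB=32768        # e.g. a 32GB machine
TARGET_COUNT=17     # number of targets

JOBS=$(( MEM_MB / 512 ))                 # 32768/512 = 64 total jobs
PER_TARGET=$(( JOBS / TARGET_COUNT ))    # 64/17 = 3 (integer division)
TOTAL=$(( PER_TARGET * TARGET_COUNT ))   # 3*17 = 51 parallel jobs

echo "$JOBS $PER_TARGET $TOTAL"          # prints "64 3 51"
```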
>> You can imagine how that's, err, not happy.
> Ok, now that I've read your actual patch, let me write a different
> response. :)
> When you said "load limiter" I thought you were using the "-l LOAD"
> option of make, I.E. --load-average. But you're just capping CPUs, and
> the way you're doing it is an integer division:
> CPUS=$(( $REAL_CPUS / $TARGET_COUNT ))
> Means that if REAL_CPUS is 8 and TARGET_COUNT is 12, you get 0 (so each
> target basically gets $REAL_CPUS). Or if REAL_CPUS is 16 and
> TARGET_COUNT is 12 you get 1 and leave 4 processors idle...
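One way to avoid both failure modes of the truncating division (a sketch, not what the patch does) is to round up instead of down:

```shell
# Ceiling division: round up so no target gets 0 CPUs and fewer sit idle.
# REAL_CPUS and TARGET_COUNT are the variables from the quoted patch.
REAL_CPUS=16
TARGET_COUNT=12
CPUS=$(( (REAL_CPUS + TARGET_COUNT - 1) / TARGET_COUNT ))
echo "$CPUS"    # 16/12 rounds up to 2 instead of truncating to 1
```

With REAL_CPUS=8 the same formula yields 1 rather than 0, so each target gets at least one CPU without falling back to unlimited parallelism.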
Yeah. You could add a fudge factor there so it overshoots a bit more ...
without limiting you're going to run a local DoS ;)
For my use cases it seems to work out quite well with a maximum of
2*REAL_CPUS as the upper limit, so on a 16-core box you'd limit to 32.
Right now, worst case, you end up with 17 targets * N jobs of parallelism
per target, which is 17*3 with 32G or 17*6 with 64G.
Unless you add nice+ionice by default that'll be very unpleasant :)
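The cap described above could look something like this (a sketch under the assumption that per-target parallelism is computed first; names are illustrative):

```shell
# Cap total parallelism at 2*REAL_CPUS, as suggested above.
REAL_CPUS=16
TARGET_COUNT=17
PER_TARGET=3                               # the 32GB case from the mail

TOTAL=$(( TARGET_COUNT * PER_TARGET ))     # 17*3 = 51 jobs uncapped
MAX=$(( 2 * REAL_CPUS ))                   # 32 on a 16-core box
[ "$TOTAL" -gt "$MAX" ] && TOTAL=$MAX
echo "$TOTAL"                              # 51 capped down to 32
```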
(Actually - I started working on this patch because I got some spurious
performance warnings from my server, which happen to correlate with
the times when you were logged in. So, I guess, maybe limiting resource
use would be more awesome. Not that it's really causing problems ...)