[Aboriginal] Fwd: Re: Aboriginal. Wow! and Thanks!

Rob Landley rob at landley.net
Sat Jun 25 15:49:51 PDT 2011



-------- Original Message --------
Subject: Re: Aboriginal. Wow! and Thanks!
Date: Sat, 18 Jun 2011 20:59:41 -0500
From: Rob Landley <rob at landley.net>
To: Paul Kramer <kramerica at me.com>

On 06/15/2011 06:51 PM, Paul Kramer wrote:
> 
> On Jun 14, 2011, at 6:18 PM, Rob Landley wrote:
> 
> 
>> 
>> Make combines imperative and declarative code in the same file, and
>> then people try to beat some semblance of order out of it with
>> recursive make (the definitive takedown of which is 
>> http://aegis.sourceforge.net/auug97.pdf ) and then people write 
>> extensive systems like kbuild on top of it...
> 
> I read this probably... in the early 00's. Very well explained, I
> learned some things about 'make' I did not know...

Other fun things about the current ./configure; make; make install:

Configuration systems (make menuconfig) specify _what_ to build.  Make
is unaware that these suckers even _exist_, so its incremental build
thing has to be worked around even more.  Makefiles got extended into a
programming language because the original design of make did not do what
people needed, but it's a REALLY BAD programming language designed
around assumptions that no longer remotely match reality.

95% of what the ./configure stage does is useless.  Things like the LP64
standard (see the 32/64 bit section of
http://landley.net/code/toybox/design.html for a writeup on that) mean
that a lot of things configure probes for ARE STANDARDIZED NOW, or can
be determined at compile time from any of the 8 gazillion compiler
built-in #defines, which you can see a dump of by running:

  cc -dM -E - < /dev/null
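
For instance, you can pull the interesting ones straight back out of
that dump.  The exact macro set varies by compiler and target, so
treat this as a sketch rather than a guarantee:

  cc -dM -E - < /dev/null | grep -E '__(linux|x86_64|LP64)__'

  # Typical native output on x86-64 Linux (gcc; clang is similar):
  #   #define __LP64__ 1
  #   #define __linux__ 1
  #   #define __x86_64__ 1

Each of those answers a question ./configure would otherwise spend a
fork, an exec, and a test compile probing for.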

Things like __linux__ and __x86_64__ you do NOT need to probe for, nor
do you need to specify on the command line when compiling natively.  The
fact the FSF can't wrap its head around this is because THEY SUCK AT
WRITING SOFTWARE.  (As evidenced by the fact that any time that
political organization gets its hands on a software package, the package
bloats to several times its original size.  This is like a car
manufacturer buying Volkswagen's "bug" line and turning it into a
rolling living room with fins: using more metal to build a car is not an
improvement, and using more bytes to build a program is not an
improvement.  The FSF is to software what 1970s Detroit was to cars.)

> But if I do another system, I'll use his style of a top level
> Makefile and each leaf node having a build.mk ... I'd leverage the
> approach I use for what a node build.mk include file might look
> like... especially for firmware teams, their stuff is easy to set up.

If you use another system, please don't use "make".

> In the end though... we need a new build tool still... or a build
> modeling tool that spits out a build system...  I think the targets
> are easy... but the variants of properties are where we need a
> non-declarative language to describe them....
> 
> you know... there is a part of me that even wonders why even do the
> build.mks, just one makefile... or why do the build.mks need to live
> at the leaf node? they can have a namespace just by foo.mk...

I do shell scripts.  The shell scripts build the project just fine.  The
limit of this approach is that I need more parallelism, because 32-way
SMP in laptops is inevitable.
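
Here's a minimal sketch of the shape I mean, with made-up directory
names and a hypothetical per-directory build.sh (not how aboriginal
actually lays things out):

  # Kick off the independent subdirectory builds in parallel, then
  # wait for all of them before the serialized final step.  (Error
  # handling elided: a real script has to collect exit codes.)
  for dir in lib drivers tools; do
    (cd "$dir" && ./build.sh) &
  done
  wait           # blocks until every background build finishes
  ./link.sh      # hypothetical final step that needs all of the above

Spawning the jobs is the easy part; knowing which pieces are actually
independent is the hard part, and that dependency information is the
one thing make's file format genuinely captures.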

(Moore's Law is driven by photolithography die size shrinks, meaning
manufacturing process improvements double your transistor density every
18 months or so.  This naturally speeds up the chip because shorter
trace lengths take less time for a signal to travel down, and because
physically smaller components hold fewer electrons so it takes less
signal to fill 'em up.  However, chips need to be a certain minimum size
to reliably connect up all the I/O wires you need, and you have to
spread out your high-performance bits with less-used padding (like cache
memory) or else cooling the chip becomes impossible when your die size
shrinks to 1/4 the area but your power consumption only goes down 50%...
 So that means your transistor budget keeps going up.  Intel's design
team's job is to convert those increased transistor budgets into more
performance.)

And thus we had the CISC/RISC fight of the 1980's and early 90's which
let us cram multiple execution units into the same processor and have
the second (and even third) look over the shoulder of the first and see
if it could execute the next instruction in the same clock cycle.  We had
pipeline reordering, branch prediction and speculative execution
requiring multiple register profiles with register renaming (with
hyper-threading falling out of that eventually as it reached its logical
conclusion).  And of course L2 cache sizes expanding until we hit a
couple MEGABYTES.

But there came a point of diminishing returns (See Itanic and Pentium 4)
where adding more transistors to the same chip just didn't work anymore.
 (Part of it was Intel being stupid; luckily the Israeli design team
found a back door through which they kicked the Indian design team's
ass, and thus modern chips are descended from the Pentium M and NOT
from Itanic or P4.)

What do you do when you can't add more transistors to one chip, but
photolithography density improvements keep doubling your transistor
budgets every few years?  (And you see a big fall-off in performance
gains between 1 and 4 megs of L2 cache, because most programs' working
sets just aren't that big.)  You make multiple CPUs out of 'em, which
lets you put even more L2 cache on the chip to soak up those transistor
budgets...

Of course most software isn't taking advantage of this either, but you
can give your application one processor, your flash plugin another,
your web browser a third, and the OS a fourth, which gets you an
excuse for 4-way...

The FUN part is that while software catches up, you can add 3D
accelerators to the die.  Make those extra cores GPU instead of CPU,
which lets you use your L2 cache as texture memory (and thus happily use
128 megs or more); that'll soak up transistor budgets for a while.  And
thus AMD bought ATI (doo dah, doo dah) to keep up with Intel, which had
40% of the graphics market share (all the low-end) BEFORE it started
integrating the GPU on die.  This left Via out in the cold, but they
responded by tying up the game console market and then partnering with
TI to become the default graphics chip of ARM...

Fun fun fun.  What was the original topic again?

Oh yeah, make sucks but its replacement must do SMP.

> Even after I look at what I wrote... on 2nd thought I think I'd still
> do this

Look at the busybox and linux attempts at building the entire tree in
one go.  (I can try googling for these if you can't find 'em easily,
they were back around... 2006 maybe?)

Oh, here's another random reason make breaks stuff:

  http://www.busybox.net/FAQ.html#touch_config
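
The general shape of that class of bug: make compares timestamps, not
contents.  Given any Makefile with a rule that depends on file.c, a
contrived demo (not the exact scenario in that FAQ entry; "touch -d"
is the GNU version):

  touch file.c                   # content unchanged, mtime newer...
  make                           # ...so make rebuilds anyway
  echo '/* tweak */' >> file.c
  touch -d '2 days ago' file.c   # content changed, mtime older...
  make                           # ...so make ships the stale binary

Make is answering "is this file newer?" when the question you actually
care about is "did this file change?"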

Rob
