[Aboriginal] i cannot believe how much you've done.
rob at landley.net
Sat Jun 25 15:55:18 PDT 2011
On 06/24/2011 11:48 PM, Paul Kramer wrote:
> I was reading your presentation.html... well i'm drinking some vino
> and just skimming over it and look forward to really diving in
> I was thinking about this step... what if the .o is left wherever it
> was compiled at... I'm wondering what the reads across the network
> would be during a link? Since we are copying and then linking... if
> we don't copy, and just link...
You mean SNAPSHOT_SYMLINK=1 from config at the top of aboriginal linux?
You fault in the dentry info when you do that (filesystem metadata), but
not the file contents. Those get paged in on demand as the build uses
them. Meanwhile, build/packages can be a symlink to a remote filesystem
(NFS even) without this introducing huge overhead/instability into the
build, because the .o files go to the local filesystem (whatever the
rest of "build" is on).
There's an explanatory comment in front of each config entry. I really
should link "config" from the website when I'm pulling out all the
README files and turning them into web pages. It explains all the knobs
and levers you can use to control the build.
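The symlink-snapshot trick is easy to replay by hand. Here's a minimal sketch of the idea (plain `cp -sR` standing in for what the build's snapshot stage does, with made-up paths; the source tree could just as well be an NFS mount):

```shell
# Pretend this is the (possibly NFS-mounted) read-only source tree.
mkdir -p /tmp/snapdemo/packages/hello
echo 'int main(void){return 0;}' > /tmp/snapdemo/packages/hello/hello.c

# Snapshot it into the local build area as a tree of symlinks:
# dentry metadata is read now, file contents only when the compiler opens them.
mkdir -p /tmp/snapdemo/build
cp -sR /tmp/snapdemo/packages/hello /tmp/snapdemo/build/hello

# Build output (.o files) lands in the local tree, not the source tree.
cd /tmp/snapdemo/build/hello
cc -c hello.c -o hello.o

ls -l hello.c hello.o   # hello.c is a symlink, hello.o is a real local file
```

The point is the asymmetry: reads follow the symlinks back to the (remote) sources on demand, while all writes stay on whatever local filesystem "build" lives on.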
> • When told to compile .c files to .o files
>   • calls the preprocessor on each .c file (gcc -E) to resolve the includes
>   • Sends the preprocessed .c file through the network to a server
>   • Server compiles preprocessed .c code into .o object code
>   • Sends the .o file back when done
>   • Copies .o file back through network to local filesystem
> • When told to produce an executable
>   • calls the local compiler to link the .o files together
That's how distcc works in its default mode, yes. I keep meaning to
write my own distcc tool to have more control over the process, it won't
always distribute builds in circumstances where it _can_...
> ok... i'm just talking here.... but WindRiver and MontaVista
> environments are nothing to write home about... speed is so valuable
Speed is valuable but so is engineering time, and Moore's Law only helps
with one of those.
A quarter-century ago on 80386 processors we needed every ounce of speed
we could get out of the build. Today on quad xeons we need every ounce
of speed we can get out of the build, except that our resources have
increased by A DOZEN ORDERS OF MAGNITUDE.
Look at the system Linux started on circa 1992. 16 MHz 386, 4 megs of
ram, 40 megabyte hard drive that didn't even do DMA. The 386 was a
single execution unit, multiple clocks per instruction, not even
pipelined, and no L1 or L2 cache. Now we have processors clocked 100
times as fast, with tens of kilobytes of L1 and megabytes of L2 cache,
pipelined with hardware prefetch and instruction reordering and register
renaming and branch prediction/speculative execution and so on, three
execution units per core, up to four cores per chip in _laptops_. A
system with only 1 gigabyte of ram is _dinky_, and a terabyte of disk is
cheap. Screaming about this being NOT FAST ENOUGH says to me "hang on,
something is wrong with your development process". Trust me, it's not
the hardware.
> I interviewed at XXX... their environment is brutal
> and has always been brutal... so brutal == no speed. seems like there
> is room for you to compete at some level against these companies
I'm doing it because it's the right thing engineering-wise, not because
I get paid to. I've had 3 or 4 jobs where I got to use Aboriginal at
work (it was always my suggestion to do so, bringing in the toolkit I
was familiar with). Nobody ever approached me to sponsor it, although
sometimes they appreciated it enough to give me some paid hours to
expand it to do Thing Du Jour that they found useful.
> just providing a service to bootstrap and provide a build environment
> could be interesting.. just rambling here...
That's pretty much what it does. Bootstrap a native build environment
for new hardware, either under an emulator or on the actual board.
> here is what i've always thought... one way to compete is to out
> manufacture the competitors, and in software what that means is get
> there 50% faster than competitors... i've worked in environments
> where a VP had to okay... $32.00 of disk space... and our office was
> right across the street from fry's...
At Qualcomm I bought a USB to serial adapter on my lunch break. I just
didn't TELL anybody.
My current gig is at polycom, and one of the managers there has a
company credit card which he's happy to use to order stuff from amazon
and such (and even pays for overnight shipping).
> i just finished a gig at XXX a few months back... IT would not approve or allocate more
> than 8GB RAM and 200 GB disk on their servers... no shit... after 5
> months of dicking around... VP of Eng finally okayed 3 * $1K dells...
> so I could spin up static analysis... on so many levels silicon
> valley has really lost its core technology talent and engineering
> discipline.... most of the hiring is to all these social networking,
> gaming companies....
As the presentation says: cheap commodity x86 hardware, plus QEMU to
turn it into a native development environment for your board. If you
spill coffee on it, replacement time is about half a day to buy new
hardware, set it up, install xubuntu LTS on it, and copy your files back
over. (I actually keep current xubuntu LTS on a USB key for easy
reinstalls.)