[Aboriginal] What's musl, anyway?

Rob Landley rob at landley.net
Mon Oct 17 05:02:34 PDT 2011


On 10/16/2011 12:16 PM, John Spencer wrote:
> On 10/13/2011 07:27 AM, Rob Landley wrote:
>>
>> Did anybody build Linux From Scratch against i686 musl?
> 
> the bootstrap-linux i linked in my previous mail should be about the same

Heh.  Not even close.

My basic Aboriginal Linux build creates the smallest and simplest Linux
system capable of rebuilding itself entirely from source code.  I.E. I
can boot into this sucker (on real hardware or under QEMU) and re-run
the build in there, and get a usable system out of it.  It sets up the minimal
amount of infrastructure required to self-host.  (The only optional bit
is distcc, and I could trim down the busybox .config to a smaller subset
instead of "defconfig" but that actually _adds_ complexity to the build
so I stopped doing it.)

The fact that the resulting root filesystem is capable of rebuilding itself
from source under itself _implies_ that it should be powerful enough to
build any other software and install it natively... but whether or
not that's actually _true_ involves corner cases of the C library,
compiler, and linker that that core set of packages doesn't
necessarily exercise.  (For example, dynamic linking isn't strictly
required for a self-bootstrapping system, but you haven't been able to
build X11 without it since sometime during the Clinton administration.
Dynamic linking isn't picky about link order and doesn't mind duplicate
symbols; static linking is and does.)
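
To make the link order point concrete, here's a minimal sketch (libfoo and
libbar are made-up names, nothing from the actual build):

  # Static archives are scanned left to right, so a library has to come
  # _after_ the things that use it:
  cc -static main.o -lbar -lfoo   # can fail: libfoo's references to libbar
                                  # show up after libbar was already scanned
  cc -static main.o -lfoo -lbar   # works: dependencies listed after their users
  # The dynamic linker resolves symbols across all loaded objects at
  # runtime, so the same mistake usually goes unnoticed.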

Linux From Scratch 6.7, chapter 6 (building the new system), is over 4
dozen GNU packages.  I automated the Linux From Scratch build using the
aboriginal linux "build control image" infrastructure, and built a new
chroot dir out of those 4 dozen packages using the toolchain and C
library in aboriginal linux.  See:

  http://landley.net/aboriginal/control-images

The lfs-bootstrap.hdc build control image natively builds all those
packages, in order, under the aboriginal linux root filesystem.  It
configures them, makes them, and installs them.  (By default it
gradually replaces the busybox commands a few at a time, but you can
also have it install its stuff in the $PATH after busybox, so the new
commands don't get used if there's already a busybox version.)
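
(The invocation is documented on that control-images page; from memory it's
roughly the following, although the exact script and file names may differ
from whatever release you grab:

  cd system-image-i686
  ./native-build.sh ~/lfs-bootstrap.hdc

which boots the system image under QEMU and runs the control image's build
script instead of dropping you at a shell prompt.)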

This is essentially driving sheep across a minefield: if something is
wrong with the build environment (compiler, C library, or command line
tools) then one or more of the packages will break.

I've actually built a little under 90 packages using this technique
(Linux From Scratch and then a chunk of Beyond Linux From Scratch,
getting X11 infrastructure and a bunch of clients up, remotely attaching
to an external X server to play xchess and run xeyes and have an xterm
and so on, part of a demo showing that the Qualcomm Hexagon port of
Linux worked reasonably well and was worth further funding.  Alas, that
was behind the scenes at Qualcomm and I had to reimplement it after I
left.  I've redone the LFS build, but not the BLFS stuff yet.)

The fact that musl can give me a shell prompt does not mean:

A) that you can natively rebuild that environment under itself to be
self-hosting.

B) that you can port something like Knoppix to musl.

>> I'd love to have a union filesystem merged upstream, but I'm doing
>> vanilla packages.  If it's not in vanilla, it's out of scope for this
>> project. :)
>
> indeed, aufs has to be built as a separate package.

Then I'm not blocking you. :)

>>> i thought about adding support for that to my
>>> build manager, as it turned out that decompressing the kernel each time
>>> i build it eats a lot of time (about 10 minutes in qemu).
>> Decompressing the kernel on my _host_ laptop takes over a minute.  It's
>> kind of enormous.  That's why I implemented the package cache:
>>
>>    http://landley.net/aboriginal/FAQ.html#debug_source
>>    http://landley.net/aboriginal/FAQ.html#debug_package_cache
>>    http://landley.net/aboriginal/FAQ.html#debug_working_copies
>>
>> Conceptually you can gloss over all that as "extract and patch tarball,
>> build it, then rm -rf when you're done", but using the package cache is
>> way the heck faster than that for repeated builds.  (Both of which are
>> why I put up with the complexity of doing it.)
> 
>>> doing the
>>> untaring in a separate step and overlay it with a build-only directory
>>> would allow simply trashing the built stuff and restart clean without
>>> having to untar again (since "make clean" won't remove patches, and
>>> so on).
>> Um, yes.  That's the package cache.  (cp -l is a marvelous thing.  The
>> config entry SNAPSHOT_SYMLINK can do symlinks though, and explains why
>> you might want to.  Yes, I tried it both ways and wound up making it
>> configurable.  The build control images use symlinks because they're
>> crossing filesystems, since /mnt is bind mounted from the squashfs but
>> /home is generally ext3.)
>>
> 
> good idea. however it doesn't solve the patch issue.
> i'd prefer to be able to go back to the vanilla source code ...

Um, which patch issue?

The sources/patches directory contains patches.  Those patches are
applied to the extracted tarballs.

Each time the build needs to create a snapshot of the package cache, it
checks the sha1sum of each component (the tarball and each patch) against
the sha1sums stored in the "sha1-for-source.txt" file in the extracted
master copy of the source.  If that set of sha1sums matches, it does
the "cp -l" trick to set up a new snapshot and life goes on.  If the set
of sha1sums does NOT match (a patch was added, removed, or changed, or the
tarball's different), it does an rm -rf on the old source directory,
re-extracts and re-patches it, and then does the snapshot.

(Note, the sha1sum is appended to the file _after_ the tarball
successfully extracts or the patch successfully applies, meaning if
there's a glitch at that level it'll automatically retry next time
because the set of sha1sums won't match up.)

(Note, you can edit the FILES in the package cache all you want for
debug purposes, it just checks the upstream ingredients it made the
directory from, not any hand hacking you've done since then.  Then rm
-rf the directory when you're done and it'll re-create it.  I cover that
in the above FAQ stuff.)
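
Pulling that together, a minimal sketch of the ingredient check plus the
"cp -l" snapshot (not the actual build scripts, just the shape of the trick;
$SRC, $BUILD, $PKG, and $TARBALL are stand-in names):

  # Checksums of the ingredients: the tarball plus each patch, in order.
  NEW="$(sha1sum "$TARBALL" sources/patches/$PKG-*.patch 2>/dev/null | awk '{print $1}')"
  OLD="$(cat "$SRC/$PKG/sha1-for-source.txt" 2>/dev/null)"

  if [ "$NEW" != "$OLD" ]; then
    # Ingredients changed (or a previous extract died partway through):
    # nuke the master copy and re-extract/re-patch, appending each sha1
    # only after its step succeeds.
    rm -rf "$SRC/$PKG" && mkdir -p "$SRC/$PKG" &&
    tar xf "$TARBALL" -C "$SRC/$PKG" --strip-components=1 &&
    sha1sum "$TARBALL" | awk '{print $1}' > "$SRC/$PKG/sha1-for-source.txt"
    for i in sources/patches/$PKG-*.patch; do
      [ -e "$i" ] && patch -p1 -d "$SRC/$PKG" < "$i" &&
      sha1sum "$i" | awk '{print $1}' >> "$SRC/$PKG/sha1-for-source.txt"
    done
  fi

  # Snapshot: hard-link the master copy into a throwaway build directory.
  cp -rl "$SRC/$PKG" "$BUILD/$PKG"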

>> Actually I think the current bottleneck is that the network's only doing
>> 300k/second even with a gigabit interface.  (10baseT is 1,100k/second,
>> 100baseT is 11,000k/second.  That's RIDICULOUSLY slow, and it didn't
>> _use_ to be; trying to figure out what regressed.
> indeed.
>> Possibly linux 3.0
>> issue fighting with the emulator?  Need to track it down...)
> from what i've read that's the issue with the kind of network support
> that aboriginal uses.

Um, you mean "-net user"?  I don't think so, I've gotten that to do over
a megabyte per second before...

> there's another method which involves some iptable rules which
> apparently should be much faster and even allows icmp.

And which requires root access on the host, which my project does not.
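
For reference, the two approaches look roughly like this (illustrative
flags, not the exact command lines the scripts emit):

  # User-mode networking (slirp): unprivileged, no host-side setup needed.
  qemu-system-i386 -kernel zImage -hda image.ext2 \
    -net nic,model=e1000 -net user

  # tap networking: faster and does ICMP, but needs root (or preconfigured
  # tap devices plus iptables/NAT rules) on the host.
  qemu-system-i386 -kernel zImage -hda image.ext2 \
    -net nic,model=e1000 -net tap,ifname=tap0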


>> Distributing the preprocessing reintroduces the possibility of version
>> skew.  It requires each distcc node to have compatible headers _and_ to
>> get the header search paths correct on a system that has more than one
>> set of headers installed.  I.E. it's inherently less reliable for what
>> we're doing.
>>
> 
> the "pump" stuff is supposed to send the headers from the host to the
> distcc server...

Given the utter horror of gcc's path logic, how the hell does it FIND them?

A glitch I need to fix right now is that CPUS is not being set right
when distcc is set up.  It's supposed to do CPUS=3 and it's doing
CPUS=1, and that doesn't overlap preprocessing and host network
access/compilation.  (You need at least CPUS=2 and in my testing CPUS=3
was a sweet spot, although it was a while ago and I need to retest...)
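
The shape of it is just generic distcc usage with an oversized job count
(illustrative command; CPUS is what feeds -j in the build scripts):

  # -j has to exceed the local CPU count so local preprocessing, network
  # traffic, and remote compilation overlap instead of serializing.
  CPUS=3
  make -j "$CPUS" CC="distcc cc"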


>> They implemented a horrible name mangling thing requiring them to break
>> the linker _and_ gdb, and they couldn't avoid conflicting with existing
>> symbol names?
>>
>> That's just sad.
>>
> i think it's just wchar_t being a keyword in C++ or somesuch. (at least
> with the clang libstdc++ thing).
> from what i've heard, the gnu impl. additionally makes assumptions about
> mangled names from types such as pthread_t, so if those are differently
> implemented, they get another mangled name inside C++ declarations which
> use the type.
> don't beat me to it, i'm no C++ expert at all and haven't looked into
> the issue myself.

I was a C++ expert around 1993, then they added templates to the language
and I went "that's just perverted" and went back to C.  I've had to debug
some deep weirdness since then, and every time I come away with fresh
scars and a deeper conviction that it's an utterly horrible language
that KEEPS MANAGING TO GET WORSE.  They have to invent new ways to suck,
and they _do_...

>>>>>>> i was told that it is faster and takes way less ram than gnu ld.
>>>>>>> (which is not very surprising, given the quality of average gnu
>>>>>>> code)
>>>>>> The tinycc linker is also buckets faster and less memory intensive
>>>>>> than
>>>>>> gnu ld.
>>>>> how about this one ? http://code.google.com/p/ucpp/
>>> actually that is a *preprocessor*... we were talking about ripping out
>>> the tcc preprocessor somewhere else...
>> tinycc is a self-contained all in one compiler.  It's got a
>> preprocessor, a compiler, an assembler, and a linker, all in one binary.
>>   (It still needs stuff like objdump.)
> 
> doesn't its one-pass compilation disallow any serious optimization ?

Yup.  But you can still do basic dead code elimination, constant
propagation, expression simplification, and so on.

And keep in mind tinycc has been used to build gcc:

  http://www.dwheeler.com/trusting-trust/

Once you've got a self-hosting native environment, you can install
additional packages to arbitrarily complicate it.  The important bit is
to solve the bootstrapping problem SEPARATELY, and then make adding
additional features natively a self-contained thing that doesn't have
magic cross compiling dependencies sticking to it.

> apart from that, i'd really like a tcc which would support most targets
> and a full C99 spec.

It's pretty close; it's just that grischka hijacked it back to come up with
an "official" version that's a Windows toy, and I didn't feel like
fighting the owner of the tinycc.org domain.

When that project dies, I may regain interest.  Until then, they can go
however many years between releases they like.

>>>> I expect gcc's days are numbered but displacing it is like displacing
>>>> windows: it's an entrenched standard zealously defended by egocentric
>>>> bastards out to impose Their Way upon the world, so there's work to do
>>>> getting rid of it:
>>>>
>>>> http://flyingbynight.wordpress.com/2011/02/28/phnglui-mglw-nafh-cthuhu-riv-wgah-nagl-fhtag/
>>>>
>>>>
>>> i was looking lately at the tendra compiler, which has just been
>>> uploaded to https://bitbucket.org/asmodai/tendra/
>>> that seems like a well designed compiler with good optimization
>>> potential.
>> I went through a dozen open-sourceish compilers once.  Most were
>> incomplete and kind of crap.  I vaguely recall that one, don't remember
>> what was wrong with it, this was a few years back...
> not much. it even has codegens for the most important platforms, i think
> ARM is the only one missing.

I consulted Wikipedia's opinion, which said:

> In August 2003 TenDRA split into two projects, TenDRA.org and
> Ten15.org. Both projects petered out around 2006–2007.

And they link to archive.org for information about the compiler.  That
puts it pretty much in the same boat as openwatcom.

> it even supports 68020+.
> but the build system is kinda broken. once that's repaired, it'd be a
> hot candidate.

Repaired by whom?  (Are you doing it?)

Does it have a repository, a mailing list, and releases?

> ah yes, it seems to lack some C99 features as well.
> from all the compilers (written in C) i looked at, imo this it the one
> with the biggest potential and least amount of work involved to get it
> to replace gcc.

Who is doing that work?

When I made that assessment for myself (circa 2006) I saw tinycc as
having the shortest path to gcc replacement, and I dug in, and all I
managed to do was revive the old zombie project and be overshadowed by
it.  I'd shake other developers out of the woodwork doing things like
upgrading Windows support and they ALWAYS did it against the version on
tinycc.org, not my version.  The last straw was x86-64 support, a big
batch of fresh development (which I was only about 1/3 of the way
through doing myself), which was a patch against the CVS version, not
against mine.

That was my "throw in the towel" moment.  All my effort was just making
people take up the OLD project, minus all my cleanups.  And years later
they're going "we should be able to specify the header search paths at
build time" and I go "I did that in 2008, see here", and it turns into
yet another flamewar:

http://lists.nongnu.org/archive/html/tinycc-devel/2011-10/msg00000.html

And the hilarious part is that the months where it DOESN'T have a
flamewar look like this:

http://lists.nongnu.org/archive/html/tinycc-devel/2011-06/threads.html

(And I see that since I gave grischka the last word he hasn't posted to
the list for 2 weeks.  He's the project's "maintainer".  You wonder why
I gave up on it?)

Anyway, after that experience I'm not particularly looking to get back
into compiler development just now.  I'll test a project that has
momentum but I'm not playing Sisyphus with somebody else's ball, when I
fully expect them to take that ball and go home if I get it rolling.

(Yeah, I know, mixing metaphors with an eggbeater...)

>>> it uses some internal intermediate format (which is even standardized),
>>> which then gets optimized according to the target.
>>> however the build system is pretty outdated and i was unable to
>>> bootstrap it in its current state.
>>> i hope some activity on that project will start soon...
>> I vaguely remember watcom got open sourced as abandonware a while ago.
>> Having code with no community isn't very useful.
> 
> for some reason i can't remember, i haven't considered watcom as a
> serious candidate.

Again, no development community.  It was abandonware, code with no
developers.  It was not a live project, just a snapshot in time of
something nobody worked on anymore.  It wasn't a small and simple thing
you could read through in an afternoon, and there's nobody to ask
questions about its design anymore, and if you do make your own changes
where do you submit them?  Will it ever have another release?  Or do you
just maintain your own forked version on top of everything else you have
to do?

I did a survey on open source compilers a few years ago.  Since then,
open64.net has shown more life than I expected (its most recent release
was April), and pcc has shown less.

>>>>>> Too bad the maintainer of that project is a windows developer,
>>>>>> if it would just DIE I'd restart my fork but I refuse to fight
>>>>>> zombies
>>>>>> without a cricket bat.
>>>>> hmm iirc the mob branch is the only active. it's not even that bad, i
>>>>> checked it out lately and it built stuff which tcc 0.9.25 refused
>>>>> to build.
>>>> My tree also built a lot of stuff 0.9.25 refused to build.  That was
>>>> sort of the point of my tree.
>>> really unfortunate that you stopped it. but i have read your statement
>>> and respect your reasons.
>>> in the meantime, an uncoordinated "mob" drives the project on
>>> repo.or.cz ;)
>> And you understand why that's a bad thing, right?
>>
> 
> it's not a bad thing per se, but a clear strategy and some management
> wouldn't hurt.

Open source is a publishing project, just like a magazine.

In a magazine, you have a slush pile which demonstrates Sturgeon's Law:
90% of it is crap, submitted by freelancers hoping to be published.  An
editor goes through the slush pile and cherry picks the best few
submissions, cleans them up and stitches them together, and puts out the
next issue.

This pattern repeats itself all over the place.  The internet is
primarily porn, cat pictures, and emo blogs: Google searches show you
the top ten hits out of the hundreds of thousands containing the words
you want.  Link aggregators like Slashdot and Fark and such fight off
Sturgeon's Law by taking in thousands of submissions of possibly
interesting links and publishing the top dozen or two each day.  Linux
distros like Knoppix put out a CD with around 600 packages out of the
tens of thousands on SourceForge.

With a real editor, a hand-written rejection letter is a GOOD thing
because it means your submission showed potential, and the editor is
encouraging you to retry, sometimes with guidance about what needs to be
fixed to make it acceptable to them.  If Linus Torvalds replies to your
patch and tells you to make changes, it's ENCOURAGEMENT even if he's
ripping it a new one, because you GOT HIS ATTENTION.

A mob branch publishes the raw slush pile, with no editorial control, no
architectural oversight, no direction, no design guidance, no saying
"that's not within the scope of this project" before Zawinski's Law
kicks in and the thing's suddenly a chat service with its own email like
World of Warcraft.  If nobody is in charge, the old parable about "too
many cooks" explains what happens.

Sure, maintainers only really have veto power over a volunteer developer
community, but that's a lot if used intelligently.  When Alan Cox says
"a maintainer's job is to say no", this is what he's talking about.

>>>>>> You have to add said flag to the preprocessor as well, it sets a
>>>>>> bunch
>>>>>> of symbols telling you about your current target.  Go:
>>>>>>
>>>>>>     gcc -E -dM - < /dev/null | less
>>>>>>
>>>>>> Now do that for each of the cross compilers and diff the result. :)
>>>>> ... thats quite a list.
>>>> Now ponder that list and go "hey, it tells me my current target and my
>>>> current endianness... what is half of ./configure for, anyway?"
>>> indeed.
>> Not to mention #include<endian.h>  and...
>>
>> Sigh.
> 
> autoconf is an archaic, broken idea, which is obsoleted with standards
> such as C99.

Chunks of it were obsoleted by standards such as LP64.  (I linked to
that from the end of http://landley.net/code/toybox/design.html which is
a general rant about software design from back when I felt like swimming
upstream against the busybox development community.)
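
You can see how much of that ground the toolchain already covers by just
asking it (illustrative compiler name and grep pattern; the exact macro
names vary a bit between gcc versions):

  i686-cc -E -dM - < /dev/null | grep -E 'ENDIAN|SIZEOF|LP64|__i.86'

Target, word size, byte order... most of what an autoconf probe laboriously
rediscovers is already sitting in the predefined macros and standard headers.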

> if your system doesn't comply to that, make your vendor support it or
> use another system.
> it's not the job of the software supplier to have a backup for every
> libc function/type.

Yup.  And yet I even have to argue the busybox guys out of this
sometimes.  Sigh.

The Linux Weekly News guys call this "the platform problem":

  http://lwn.net/Articles/443531/

> how many hours in my life have been wasted in the fight against this
> broken crap ?
> if it was at least readable and not thousands of line long, with broken
> indentation all over.
> 
> it should just die.

I take a more active "Kill it with fire" approach, but convincing
projects not to use it is hard.  (Dropbear didn't use to use it, and
then started.  That was a sad day.  Several people suggested busybox use
it during my tenure: my response was not polite.)

>>>> Add in
>>>> "what is libtool for", or the fact that pkg-config is so obviously
>>>> stupid you can sum it up with stick figures http://xkcd.com/927/
>>>> (because rpm and dpkg and portage and pacman and bitbake and so on
>>>> weren't _enough_), and PKG_CONFIG=/bin/true is a beautiful thing...
>>>>
>>> PKG_CONFIG=/bin/true<- that actually works ?
>> Sometimes.  (It returns success and says that whatever it is was NOT
>> installed.  If your build is ok with that, great.  If not, maybe
>> PKG_CONFIG="echo" or something might work, I haven't tried it.)
> 
> mhmm. another idiotic system one has to learn to find out how to
> circumvent it.

At least it's not HAL.
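
(Spelled out, the trick is just:

  # /bin/true exits 0 and prints nothing, so every pkg-config query gets a
  # "successful" but empty answer; whether the build copes with that
  # depends entirely on the configure script.
  PKG_CONFIG=/bin/true ./configure --prefix=/usr

and if an empty answer isn't good enough, that's when you start
experimenting with stubs like the "echo" idea above.)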

>> Half of cross compiling is working out exactly how to lie to ./configure
>> _this_ time around.
>>
> indeed. *sigh*

Hence the "We cross compile so you don't have to" motto in the title
bar.  Getting to the point where you can natively compile under
emulation means you don't have to cross compile any more after that
point, because cross compiling sucks.  (Using distcc does _not_ re-open
the cross compiling can of worms; I wouldn't do it if it did.)
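
(For reference, the usual way of lying is autoconf cache variables that
pre-answer tests configure can't run when cross compiling; the variable
names below are real ones that come up a lot, but which ones a given
package needs varies:

  ac_cv_func_malloc_0_nonnull=yes \
  ac_cv_func_mmap_fixed_mapped=yes \
  ./configure --host=i686-unknown-linux --prefix=/usr

A native build under emulation just runs the tests and skips the whole
guessing game.)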

>>>> Anyway: I need to make a NOP libtool that I can symlink the others to,
>>>> which just converts the command line arguments to ld format and does
>>>> NOTHING ELSE.  A small shell script, probably.  Then ln -sf as needed.
>>> i shall appreciate it as well.
>> You see how my todo list gets insane?
> 
> well, at least that one should be possible to finish at a free
> afternoon, if you know what it should do.

I happily accept patches. :)
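
(For anybody feeling ambitious, the rough shape of it is probably something
like this: untested, and real libtool's .lo/.la handling is exactly the
fiddly part that needs doing properly.

  #!/bin/sh
  # NOP libtool sketch: drop libtool's own options, map .lo/.la names back
  # to .o/.a, and exec whatever real compiler/linker command line is left.
  # (No attempt to handle spaces in paths.)
  args=""
  for i in "$@"; do
    case "$i" in
      --mode=*|--tag=*|--silent|--quiet) ;;   # libtool-only flags: drop them
      *.lo) args="$args ${i%.lo}.o" ;;
      *.la) args="$args ${i%.la}.a" ;;
      *) args="$args $i" ;;
    esac
  done
  exec $args

Symlink that over the generated libtool scripts and see what breaks.)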

>>>>>> Um, send me a link again?
>>>>> http://lists.busybox.net/pipermail/busybox/2011-July/076293.html
>>>> Ah yes.  I'll try to get to it this weekend.
>>> thanks for the patch. will test it as soon as the kernel compile is
>>> finished...

Did it work, by the way?

Rob
