[Aboriginal] What's musl, anyway?
maillist-aboriginal at barfooze.de
Sun Oct 16 10:16:46 PDT 2011
On 10/13/2011 07:27 AM, Rob Landley wrote:
> Did anybody build Linux From Scratch against i686 musl?
the bootstrap-linux i linked in my previous mail should be about the same
> I'd love to have a union filesystem merged upstream, but I'm doing
> vanilla packages. If it's not in vanilla, it's out of scope for this
> project. :)
indeed, aufs has to be built as a separate package.
>> i thought about adding support for that to my
>> build manager, as it turned out that decompressing the kernel each time
>> i build it eats a lot of time (about 10 minutes in qemu).
> Decompressing the kernel on my _host_ laptop takes over a minute. It's
> kind of enormous. That's why I implemented the package cache:
> Conceptually you can gloss over all that as "extract and patch tarball,
> build it, then rm -rf when you're done", but using the package cache is
> way the heck faster than that for repeated builds. (Both of which are
> why I put up with the complexity of doing it.)
>> doing the
>> untarring in a separate step and overlaying it with a build-only directory
>> would allow simply trashing the built stuff and restarting clean without
>> having to untar again (since "make clean" won't remove patches, and so on).
> Um, yes. That's the package cache. (cp -l is a marvelous thing. The
> config entry SNAPSHOT_SYMLINK can do symlinks though, and explains why
> you might want to. Yes, I tried it both ways and wound up making it
> configurable. The build control images use symlinks because they're
> crossing filesystems, since /mnt is bind mounted from the squashfs but
> /home is generally ext3.)
good idea. however it doesn't solve the patch issue.
i'd prefer to be able to go back to the vanilla source code ...
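for reference, the hardlink trick amounts to something like this (paths are illustrative, and a single file stands in for "extract and patch tarball"):

```shell
# extract-and-patch once into a pristine cache, then hardlink-copy it
# per build, and only ever trash the build tree
set -e
mkdir -p cache build
echo 'int main(void){return 0;}' > cache/hello.c  # pretend: extracted + patched

cp -rl cache/. build/   # hardlink copy: near-instant, no extra disk used

# same inode, so build/hello.c is the same file, not a second copy
[ "$(stat -c %i cache/hello.c)" = "$(stat -c %i build/hello.c)" ] && echo linked

rm -rf build            # throw the build away; the cache stays patched and pristine
```

the gotcha: anything that modifies a file in place (instead of replacing it) writes through the hardlink into the cache, which is presumably part of why the symlink variant exists.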
> Actually I think the current bottleneck is that the network's only doing
> 300k/second even with a gigabit interface. (10baseT is 1,100k/second,
> 100baseT is 11,000k/second. That's RIDICULOUSLY slow, and it didn't
> _used_ to be, trying to figure out what regressed.
> Possibly linux 3.0
> issue fighting with the emulator? Need to track it down...)
from what i've read that's the issue with the kind of network support
that aboriginal uses.
there's another method which involves some iptable rules which
apparently should be much faster and even allows icmp.
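roughly, that variant replaces qemu's user-mode stack with a tap device NATed by the host. a sketch from memory, needs root; interface names and addresses are made up:

```shell
# create a tap device the emulator can attach to (names illustrative)
ip tuntap add dev tap0 mode tap user "$USER"
ip addr add 10.0.2.1/24 dev tap0
ip link set tap0 up

# NAT the guest traffic out through the host uplink (eth0 here)
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i tap0 -j ACCEPT

# then attach qemu to the tap instead of the user-mode stack:
#   qemu-system-arm ... -net nic -net tap,ifname=tap0,script=no,downscript=no
```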
> Also, tinycc's preprocessor is faster than gcc's, and the one in my fork
> at least was fairly compatible...
>> well, maybe some day a sane person (which happens to know python) will
>> come and port the distcc pump stuff to C, or just rewrite it from scratch.
> One of the great things about distcc is that converting C99 .c into an
> elf .o is fairly resistant to version skew. All the stuff that has to
> match _up_ version-wise is in the headers and the libraries, all of which
> are handled on a single central node that has just one copy of everything.
> Distributing the preprocessing reintroduces the possibility of version
> skew. It requires each distcc node to have compatible headers _and_ to
> get the header search paths correct on a system that has more than one
> set of headers installed. I.E. it's inherently less reliable for what
> we're doing.
the "pump" stuff is supposed to send the headers from the host to the
compile nodes along with the source, which would avoid that skew.
> One problem that does need to be fixed is that distcc doesn't always
> distribute compiles (compile and link on the same line, for example, or
> turning more than one .c file into a combined .o file), and it doesn't
> tell you when it DOESN'T distribute so it's hard to figure out whether
> or not it's working.
> I.E. writing my own tiny distcc (extending ccwrap.c to do this,
> basically) is on my todo list. Probably requires my own daemon. I
> looked into it but it's not a half-hour job...
>>> You're aware of uClibc++, right? (My project builds it and all. :)
>>> It's a bit stalled right now, but I have Garret's phone number around
>>> here somewhere...
>>> What's needed _in_ the libc? Stack unwinding cruft?
>> from what i've heard there's special atexit/dynlink stuff needed and
>> partially some symbol names clash.
>> currently there's some discussion about that on the musl mailinglist.
> Ah right, global class instance constructors and destructors. I forgot
> about those.
> They implemented a horrible name mangling thing requiring them to break
> the linker _and_ gdb, and they couldn't avoid conflicting with existing
> symbol names?
> That's just sad.
i think it's just wchar_t being a keyword in C++ or somesuch. (at least
with the clang libstdc++ thing).
from what i've heard, the gnu impl. additionally makes assumptions about
mangled names from types such as pthread_t, so if those are implemented
differently, they get a different mangled name inside C++ declarations
which use the type.
don't hold me to it, i'm no C++ expert at all and haven't looked into
the issue myself.
>>>>>> i was told that it is faster and takes way less ram than gnu ld.
>>>>>> (which is not very surprising, given the quality of average gnu code)
>>>>> The tinycc linker is also buckets faster and less memory intensive than
>>>>> gnu ld.
>>>> how about this one ? http://code.google.com/p/ucpp/
>> actually that is a *preprocessor*... we were talking about ripping out
>> the tcc preprocessor somewhere else...
> tinycc is a self-contained all in one compiler. It's got a
> preprocessor, a compiler, an assembler, and a linker, all in one binary.
> (It still needs stuff like objdump.)
doesn't its one-pass compilation rule out any serious optimization?
apart from that, i'd really like a tcc which would support most targets
and a full C99 spec.
>>> I expect gcc's days are numbered but displacing it is like displacing
>>> windows: it's an entrenched standard zealously defended by egocentric
>>> bastards out to impose Their Way upon the world, so there's work to do
>>> getting rid of it:
>> i was looking lately at the tendra compiler, which has just been
>> uploaded to https://bitbucket.org/asmodai/tendra/
>> that seems like a well designed compiler with good optimization potential.
> I went through a dozen open-sourceish compilers once. Most were
> incomplete and kind of crap. I vaguely recall that one, don't remember
> what was wrong with it, this was a few years back...
not much. it has codegens for the most important platforms; i think ARM
is the only one missing. it even supports 68020+.
but the build system is kinda broken. once that's repaired, it'd be a
good candidate.
ah yes, it seems to lack some C99 features as well.
from all the compilers (written in C) i looked at, imo this is the one
with the biggest potential and least amount of work involved to get it
to replace gcc.
>> it uses some internal intermediate format (which is even standardized),
>> which then gets optimized according to the target.
>> however the build system is pretty outdated and i was unable to
>> bootstrap it in its current state.
>> i hope some activity on that project will start soon...
> I vaguely remember watcom got open sourced as abandonware a while ago.
> Having code with no community isn't very useful.
for some reason i can't remember, i haven't considered watcom as a
candidate.
>>>>> Too bad the maintainer of that project is a windows developer,
>>>>> if it would just DIE I'd restart my fork but I refuse to fight zombies
>>>>> without a cricket bat.
>>>> hmm iirc the mob branch is the only active one. it's not even that bad, i
>>>> checked it out lately and it built stuff which tcc 0.9.25 refused to build.
>>> My tree also built a lot of stuff 0.9.25 refused to build. That was
>>> sort of the point of my tree.
>> really unfortunate that you stopped it. but i have read your statement
>> and respect your reasons.
>> in the meantime, an uncoordinated "mob" drives the project on repo.or.cz ;)
> And you understand why that's a bad thing, right?
it's not a bad thing per se, but a clear strategy and some management
wouldn't hurt.
>>>> actually it is so broken that it defeats the purpose of the FSF, in that
>>>> it makes it too hard to change and recompile your free software, making
>>>> it unfree that way.
>>> The FSF has been self defeating for many years. If you want a RANT:
>>> (First part is background necessary to understand the second part. I
>>> keep meaning to write a book on computer history, but I have too many
>>> other things to do...)
>> the mail from rms about "making the code hard to use" to get evil
>> companies out is indeed shocking.
>> it seems GNU software sucks by design.
> Yup. He's been shouted down from that position more recently, but the
> damage is done and he doesn't really change his mind, he just changes
> his tactics sometimes.
>>>>> You have to add said flag to the preprocessor as well, it sets a bunch
>>>>> of symbols telling you about your current target. Go:
>>>>> gcc -E -dM - < /dev/null | less
>>>>> Now do that for each of the cross compilers and diff the result. :)
>>>> ... thats quite a list.
>>> Now ponder that list and go "hey, it tells me my current target and my
>>> current endianness... what is half of ./configure for, anyway?"
> Not to mention #include<endian.h> and...
autoconf is an archaic, broken idea, obsoleted by standards such as C99.
if your system doesn't comply with that, make your vendor support it or
use another system.
it's not the job of the software supplier to carry a fallback for every
broken platform out there.
how many hours of my life have been wasted fighting this broken crap?
if it was at least readable, and not thousands of lines long with broken
indentation all over.
it should just die.
>>> Add in
>>> "what is libtool for", or the fact that pkg-config is so obviously
>>> stupid you can sum it up with stick figures http://xkcd.com/927/
>>> (because rpm and dpkg and portage and pacman and bitbake and so on
>>> weren't _enough_), and PKG_CONFIG=/bin/true is a beautiful thing...
>> PKG_CONFIG=/bin/true <- that actually works?
> Sometimes. (It returns success and says that whatever it is was NOT
> installed. If your build is ok with that, great. If not, maybe
> PKG_CONFIG="echo" or something might work, I haven't tried it.)
mhmm. another idiotic system one has to learn to find out how to
work around it.
> Half of cross compiling is working out exactly how to lie to ./configure
> _this_ time around.
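a slightly less blunt variant than /bin/true is a stub that succeeds for every query and prints no flags (entirely made up, not a real tool):

```shell
# create a stub pkg-config that claims every module exists with no flags
cat > pkg-config-stub <<'EOF'
#!/bin/sh
case "$1" in
    --version) echo 0.0 ;;  # some configure scripts probe the version first
    *) : ;;                 # --exists succeeds; --cflags/--libs print nothing
esac
exit 0
EOF
chmod +x pkg-config-stub

# point configure at it:
#   PKG_CONFIG="$PWD/pkg-config-stub" ./configure ...
```

configure then sees every package as installed with empty flags, which a surprising number of builds are fine with.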
>>>>> You've learned never to install Libtool on Linux, right?
>>>> most stuff actually seems to come with its own libtool.
>>>> however i just found out that you have to delete *.la and magically
>>>> stuff builds, that failed earlier.
>>> Once upon a time I meant to do a libtool-stub that actually WOULD be a
>>> NOP successfully, like http://penma.de/code/gettext-stub/ and my little
>>> trick here:
>>> (lines 31-41).
>> "Autoconf is useless" ;)
> You've got to do the vogon voice, though.
let's just do it the vogon way and shoot it into space.
>>> Anyway: I need to make a NOP libtool that I can symlink the others to,
>>> which just converts the command line arguments to ld format and does
>>> NOTHING ELSE. A small shell script, probably. Then ln -sf as needed.
>> i shall appreciate it as well.
> You see how my todo list gets insane?
well, at least that one should be possible to finish in a free
afternoon, if you know what it should do.
>>>>> Um, send me a link again?
>>> Ah yes. I'll try to get to it this weekend.
>> thanks for the patch. will test it as soon as the kernel compile is done.
> That _really_ shouldn't take that long. I'm poking at it... (This is
> the armv6l target? I built the whole of linux from scratch on that
it was the ever-looping mktimeconst and me being too lazy to check if the
build was hanging or just slow.