[Aboriginal] What's musl, anyway?

John Spencer maillist-aboriginal at barfooze.de
Thu Oct 6 16:20:31 PDT 2011

On 10/07/2011 12:01 AM, Rob Landley wrote:
> On 10/05/2011 05:52 PM, maillist-aboriginal at barfooze.de wrote:
>> let me answer the question in the subject first:
>> musl is a new libc for linux, which imo is exceptionally well made.
>> in my own experience it's smaller, faster and more correct than
>> uclibc.
> Huh, are you dalias under yet another name,
no. but dalias is indeed the author of musl libc.
btw, he used aboriginal to make the arm port.

> or is this _another_ libc?
it is another libc, separate from all the ones mentioned below.
not sure what else you'd be referring to here.

> This is different from the attempts to port bionic to the vanilla
> kernel, has no relation to klibc, is not related to uClibc, dietlibc,
> glibc, libc5, or any of the BSD libc projects?
yep, different attempt. and so far it's looking very sweet. couldn't do 
it better myself.
the completeness status is reflected in the version number, which is 0.8.xx
there's some C99 math stuff missing and no C++ support yet, but apart 
from that nearly everything compiles now
(after fixing broken/idiotic build systems and gnu assumptions, of course)
> (What definitions of "correct" mean something other than "runs the
> software that's out there", by the way?)
one example of correctness is that musl doesn't implement a 32-bit off_t
api: if you allow both, a 32-bit off_t program can be invoked by a
64-bit off_t shell, and its stdin/out/err will be in 64-bit mode...

>> here is a comparison: http://www.etalabs.net/compare_libcs.html
>> (it even has stuff which uclibc lacks, such as posix_spawn, which is a
>> really handy function)
>> musl has ARM support since the last release and i'm currently porting
>> sabotage linux
>> to build on that arch, using aboriginal as my build platform.
>> there are a couple of issues (mostly the old binutils used by
>> aboriginal, which don't
>> go well together with musl's gcc wrapper, the lack of a debugger
>> and the speed and memory restrictions of qemu), however i'm making
>> progress...
> Most of those are on my todo list to fix.  the memory restrictions
> should be easier to lift once device tree support properly goes into
> both kernel and qemu (bits are there now, maybe enough already, I have
> to poke at it).  I've got a todo list item to add a -Bsymbolic-thingy
> flag but haven't gotten time to work on that yet.
actually it would be sweet to have ld.gold around, but that'd require a 
working C++ env
(at least to build it, statically preferred ;))
i was told that it is faster and takes way less ram than gnu ld.
(which is not very surprising, given the quality of average gnu code)
> Have you tried building gdb natively under uClibc (statically linked)
> and then using that binary under a musl chroot?
>>>> also in my tests, some code was up to 50% faster than with -O3 alone.
>>>> so it's actually a pretty neat feature.
>>> Sounds like you're compiling badly written code, but you work with what
>>> you've got...
>> not really, i did test a couple of different self-written arraylist
>> implementations.
>> using the -flto -fwhole-program flags, i saw big speed and size
>> differences when all involved c files were thrown at the compiler
>> at once, as opposed to just linking object files together.
> *shrug*  Ok.
>>>> i was talking about gcc *3*.
>>> Ah, I missed that.
>>> There are arguments for supporting older toolchain versions, and
>>> arguments for supporting newer toolchain versions.  Mostly I just want a
>>> working toolchain to bootstrap a target, and then you can natively build
>>> a new toolchain under that in things like lfs-bootstrap.hdc.
>> i actually think a handful of patches on the kernel source would suffice
>> to still build a complete linux system using gcc 3.
>> given the immense memory requirements when compiling gcc 4.5+, it could
>> even be possible that gcc 3 is the only viable option.
> Except there are lots of target output types that it doesn't support.
>>> The interesting thing that newer gcc versions give you is support for
>>> more targets.  For example, armv7 showed up in gcc 4.3, which is the big
> incentive to support the newer one.  Support for the Xilinx microblaze
>>> would also be nice, since qemu has that now.  Alpha and m68k compilers
>>> that don't die so often with internal compiler errors while cross
> compiling stuff would also be cool, although I think the native versions of
>>> those compilers might be more stable.
>> indeed, the more targets, the better. however i've seen in gcc's
>> changelog that some old architectures have been removed in the last
>> few releases...
> Did I mention I think the FSF is really really bad at software engineering?
full ack. what sucks most tho are the build systems used by gnu.
i guess to fully grok gcc's build system you have to earn a phd at the 
gnu university.

>>>> the build time on my 3ghz machine is 5 minutes compared to 45 minutes
>>>> for gcc 4.5 (without mpc/mpfr/gmp, which consume another 5 minutes alone)
>>>> only thing missing is the gnu99 inline stuff. apart from that it
>>>> compiles 99% of the code out there.
>>> It is indeed cool.  But if I recall it couldn't do simple dead code
>>> elimination on arm, meaning busybox had a build break trying to link
>>> code out of .c files it hadn't bothered to compile because it knew they
>>> wouldn't be used.
>> yep, that's the tradeoff you have to make... features vs bloat :/
> With FSF code, maybe.  In general, you can do efficient implementations
> and get a good bang for the byte.  (THEY can't, but see "bad at
> software", above.)
>>>> never heard about tcg. i'll read up on that one.
>>> There's a README in qemu's tcg subdirectory.  See also
>> thanks, that looks pretty interesting.
>> the level of optimization done is tiny, though (no idea how much it has
>> improved since then).
> But the time it takes to compile code is also tiny.  (This code
> generator is on par with if not faster than the one in tcc.)
sweet. i imagine a compiler that could cross-compile to any desired 
target just by adding a flag.
>>>> i can load the file into vi, both using the symlink and the link target.
>>>> musl's readdir is just a one liner around the kernel syscall.
>>> Use strace to see what arguments it's passing to the syscall.
>> yep, using strace i could hunt down the bug.
>> previously i was inserting printf's into musl code and recompiling
>> every time, for lack of a debugger...
>> not really used to that kind of debugging. getting a working
>> statically compiled gdb for ARM is at the top of my TODO list.
> Have you tried building it natively under either dev-environment.sh's
> /home or under the filesystem lfs-bootstrap.hdc produces?
not yet, will try later today. in fact i ported gdb to build on musl 
on intel targets and it was a fscking patch orgy. maybe that experience 
kept me from trying to build it with uclibc yet.
the number of glibc-specific assumptions gdb makes is huge, not to 
mention *gnulib*, which is broken by design, and of course built 
into gdb.
>> musl had fcntl.h with values from i386, which differ slightly from ARM's,
>> so the O_LARGEFILE of musl's open() was interpreted as O_NOFOLLOW by the
>> kernel.
>> was fixed yesterday by
>> http://git.etalabs.net/cgi-bin/gitweb.cgi?p=musl;a=commitdiff;h=e6d765a8b1278e9e5f507638ccdec9fe40e52364
> Yeah that'll do it.
>> (that's also a nice thing about musl, i report the bug and 10 minutes
>> later it is fixed, usually)
> Cool.  I try to fix issues the same week they're reported (clearing my
> backlog on weekends), but don't always succeed.  (I have a day job and
> working on this isn't it.)
(heh, you still didn't fix the busybox "patch program" bug with path 
levels i reported on the busybox mailing list.
that's why sabotage uses gnu patch now ;))
> Rob

