[Aboriginal] What's musl, anyway? (was: re: aboriginal)
John Spencer
maillist-aboriginal at barfooze.de
Wed Oct 5 16:04:02 PDT 2011
Let me answer the question in the subject first:
musl is a new libc for Linux, which imo is exceptionally well made.
In my experience it's smaller, faster, and more correct than uClibc.
Here is a comparison: http://www.etalabs.net/compare_libcs.html
(It even has stuff uClibc lacks, such as posix_spawn, which is a
really handy function.)
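For anyone who hasn't met it: posix_spawn replaces the usual
fork+exec dance with a single call. A minimal sketch (from memory,
so double-check the details):

  #include <spawn.h>
  #include <sys/wait.h>

  extern char **environ;

  int main(void)
  {
      pid_t pid;
      char *argv[] = { "echo", "hello", (char *)0 };
      /* NULL file actions and spawn attributes mean "defaults" */
      if (posix_spawn(&pid, "/bin/echo", 0, 0, argv, environ) == 0)
          waitpid(pid, 0, 0);
      return 0;
  }

It's also friendlier to MMU-less targets, where a real fork() isn't
available.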
musl has had ARM support since the last release, and I'm currently
porting sabotage linux to build on that arch, using Aboriginal as my
build platform.
There are a couple of issues (mostly the old binutils used by
Aboriginal, which doesn't play well with musl's gcc wrapper, plus the
lack of a debugger and the speed and memory restrictions of qemu),
but I'm making progress...
On 10/04/2011 10:32 PM, Rob Landley wrote:
>>>> i built both gcc-core-4.5.3 and 4.6.0 on sabotage linux, which only
>>>> has a C compiler (since musl doesn't support C++ yet).
>>>> the link-time optimization newer gccs (4.5+) support is quite nice,
>>>> as it allows stripping unneeded functions without putting each
>>>> function into its own object file.
>>> So you mean it's like the -ffunction-sections and --gc-sections
>>> options I've been using since 2005?
>>>
>> it's not the same; it seems LTO can also drop code inside functions
>> that is never invoked.
> Ok. Sounds like fun: better optimizer.
Yeah, it's pretty neat, once you're past the build stage...
>> fact is, the binaries are much smaller than with the dead code
>> elimination flags.
>> also, in my tests some things were up to 50% faster than with -O3
>> alone. so it's actually a pretty neat feature.
> Sounds like you're compiling badly written code, but you work with what
> you've got...
>
Not really; I tested a couple of different self-written arraylist
implementations.
Using the -flto -fwhole-program flags, I saw big speed and size
differences when all the C files involved were thrown at the compiler
at once, as opposed to just linking object files together.
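To illustrate the kind of case I mean (a contrived sketch, names made
up; the point is the cross-file constant propagation):

  /* trace.c */
  #include <stdio.h>
  int verbose;                /* never written anywhere in the program */
  void trace(const char *s)
  {
      if (verbose)            /* per-file -O3 has to keep this branch */
          fprintf(stderr, "trace: %s\n", s);
  }

  /* main.c */
  void trace(const char *);
  int main(void)
  {
      trace("starting up");
      return 0;
  }

  gcc -O3 -flto -fwhole-program -o prog trace.c main.c

Compiled per file, gcc must assume some other object might set
verbose; with -flto -fwhole-program it can see that nothing does,
fold the test to false, and throw away the fprintf and everything it
drags in.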
>>>> gcc 3.4.6 builds fine with 128 MB and no swap at all...
>>>> also it is relatively slim (i have a statically built one here
>>>> which fits into a 6MB tarball...)
>>>> maybe it would be best if someone forked it and added the new
>>>> inline stuff...
>>>> that way it could still be used to build recent kernels.
>>> I built linux 3.0 with gcc 4.2.1 and binutils 2.17 on a dozen
>>> architectures, worked for me. What are you referring to?
>>>
>> i was talking about gcc *3*.
> Ah, I missed that.
>
> There are arguments for supporting older toolchain versions, and
> arguments for supporting newer toolchain versions. Mostly I just want a
> working toolchain to bootstrap a target, and then you can natively build
> a new toolchain under that in things like lfs-bootstrap.hdc.
>
I actually think a handful of patches to the kernel source would
suffice to keep building a complete Linux system with gcc 3.
Given the immense memory requirements of compiling gcc 4.5+, it may
even be that gcc 3 is the only viable option on small machines.
>> gcc 3.4.6 is a relatively nice compiler: it builds with less than
>> 128MB RAM, a statically linked cross-compiler fits into a 6MB .xz
>> file, it's faster than gcc 4, and it has relatively good
>> optimization compared to pcc or tcc.
> Compared to tcc, Turbo C for DOS had relatively good optimization.
>
> The interesting thing that newer gcc versions give you is support for
> more targets. For example, armv7 showed up in gcc 4.3, which is the big
> incentive to support the newer one. Support for the Xilinx microblaze
> would also be nice, since qemu has that now. Alpha and m68k compilers
> that don't die so often with internal compiler errors while cross
> compiling stuff would also be cool, although I think the native
> versions of those compilers might be more stable.
>
Indeed, the more targets, the better. However, I've seen in gcc's
changelog that some old architectures have been removed in recent
releases...
>> the build time on my 3GHz machine is 5 minutes, compared to 45
>> minutes for gcc 4.5 (without mpc/mpfr/gmp, which consume another 5
>> minutes by themselves).
>> the only thing missing is the gnu99 inline stuff; apart from that it
>> compiles 99% of the code out there.
> It is indeed cool. But if I recall correctly it couldn't do simple
> dead code elimination on arm, meaning busybox had a build break trying
> to link code out of .c files it hadn't bothered to compile because it
> knew they wouldn't be used.
>
Yep, that's the tradeoff you have to make... features vs. bloat :/
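For comparison, the section-based trick (if I have the flags right)
looks like:

  gcc -Os -ffunction-sections -fdata-sections -c foo.c
  gcc -Wl,--gc-sections -o prog foo.o bar.o

That drops whole unreferenced functions at link time, but unlike LTO
it can't remove dead code inside a function.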
>>>> all other packages in sabotage linux build just fine with it.
>>>> since pcc is full of bugs and has nearly no optimization at all,
>>>> it's not gonna be a real option anytime soon...
>>>> and clang is written in C++ itself...
>>> Yup. There are some people gluing sparse to llvm, but again: llvm is
>>> C++. I want to glue sparse or tcc to qemu's tcg, but it's a fair way
>>> down my todo list...
>>>
>> never heard of tcg. i'll read up on that one.
> There's a README in qemu's tcg subdirectory. See also
> http://landley.net/qemu/2008-01-29.html#Feb_1,_2008_-_TCG
>
Thanks, that looks pretty interesting.
The level of optimization done is tiny, though (no idea how much it
has improved since then).
>>>>>> on a side note, i really missed having a gdb around... wonder if
>>>>>> it's possible to supply a binary in the future?
>>>>> That's been on my todo list for a while, just hasn't been near
>>>>> the top. 6.6 was the last GPLv2 release; I can look into adding
>>>>> that to the cross compiler, and the gdbserver binary to the
>>>>> static target binaries list...
>>>>>
>>>> i guess a separate download such as strace would be even better.
>>> I'm working on it, but it's also a bit down my todo list...
>>>
>>> Rob
>>>
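FWIW, even just a gdbserver binary on the target would cover my use
case. If I understand the setup right, it would look roughly like
this (the cross gdb's name and the port are guesses on my part):

  # on the target, inside qemu:
  gdbserver :2345 ./testprog

  # on the host, with a cross gdb:
  armv5l-gdb ./testprog
  (gdb) target remote localhost:2345

(qemu's user-mode networking would also need the port forwarded,
e.g. with -redir tcp:2345::2345.)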
>> i currently have an issue here with aboriginal:
>> a) fdisk -l says both (root and home) partitions don't have a valid
>> partition table. i wonder why?
> Because they don't. I created filesystem images and attached them to
> qemu virtual disks:
>
> /dev/hda - squashfs root filesystem (mounted on /)
> /dev/hdb - 2 gig writeable ext3 (mounted on /home by dev-environment.sh)
> /dev/hdc - build control image (mounted on /mnt by native-build.sh)
>
> I'm mounting /dev/hda, not /dev/hda1. The whole unpartitioned device
> has its own block device, which can have a filesystem on it. (You can
> do this with real hardware too. Floppies were never partitioned. I
> have no idea why removable USB drives tend to be partitioned; I think
> it's Windows brain damage.)
>
> Once upon a time I did create partitioned images:
>
> http://landley.net/code/mkhda.sh
>
> But it's extra work for no benefit, and it means you can't easily
> loopback mount them from the host.
>
Ah, good to know. Thanks for the clarification.
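(For anyone else following along: this is also why you can inspect
the images straight from the host, with no partition offset math,
something like

  mount -o loop -t ext3 hdb.img /mnt

where I'm guessing at the filename of the writeable /home image that
dev-environment.sh uses.)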
>> b) after unpacking and configuring gmp-5.0.2, i have a symlink
>> "gmp-5.0.2/mpn/add_n.asm -> ../mpn/arm/add_n.asm"
>> the symlink target is a regular file, but the readlink syscall
>> fails, setting errno to ELOOP.
> If the readlink syscall were broken then ls -l wouldn't be able to
> display symlinks. What code is calling the readlink() syscall and
> getting confused? Did you run it under strace? (The static-build.hdc
> control image builds that; I put binaries up at
> http://landley.net/aboriginal/downloads/binaries/extras that you can
> just wget, chmod +x, and use if it helps. I can't link you to a
> specific one because I don't remember which target you're building
> for.)
>
>> that prevents GMP (a prerequisite for gcc 4.5) from building.
> The lfs-bootstrap.hdc control image builds the gmp from Linux From
> Scratch 6.7 under 11 different targets. That's version 5.0.1, so
> possibly something changed between that and 5.0.2, but I don't
> understand how you're having a system call failure. (How do you know
> it's a system call failure? There's context you're not explaining...)
>
>> i can load the file into vi, both via the symlink and the link
>> target.
>> musl's readlink is just a one-liner around the kernel syscall.
> Use strace to see what arguments it's passing to the syscall.
>
Yep, using strace I could hunt down the bug.
Previously I was inserting printfs into musl code and recompiling
every time, for lack of a debugger... I'm not really used to that
kind of debugging. Getting a working statically compiled gdb for ARM
is at the top of my TODO list.
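The trace showed something like this (reconstructed, not a verbatim
log):

  open("mpn/add_n.asm", O_RDONLY|O_NOFOLLOW) = -1 ELOOP (Too many levels of symbolic links)

i.e. an O_NOFOLLOW that nothing in the build had asked for, which was
the clue: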
musl's fcntl.h had values from i386, which differ slightly from
ARM's, so the O_LARGEFILE flag musl's open() passed was interpreted
as O_NOFOLLOW by the kernel.
It was fixed yesterday by
http://git.etalabs.net/cgi-bin/gitweb.cgi?p=musl;a=commitdiff;h=e6d765a8b1278e9e5f507638ccdec9fe40e52364
(That's another nice thing about musl: I report a bug and usually
it's fixed 10 minutes later.)
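For reference, the clashing values, as far as I can tell from the
kernel headers (worth double-checking, I'm quoting from memory):

  /* i386 values */              /* ARM overrides */
  O_LARGEFILE  0100000           O_NOFOLLOW   0100000   /* same bits! */
  O_NOFOLLOW   0400000           O_LARGEFILE  0400000

So a musl built with the i386 table passes 0100000 meaning "large
file", the ARM kernel reads those bits as "don't follow symlinks",
and opening a path whose final component is a symlink then fails
with ELOOP.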
>> i couldn't reproduce that behaviour with a manually created symlink
>> following the above scheme.
>> but it is reproducible by untarring gmp again and restarting the
>> build. i suspect that's either a filesystem or kernel bug.
> So the symlink is created corrupted?
>
> What version are you using? (The 1.1 release is using the ext4 driver
> for both ext3 and ext2, and if you're untarring into /home under
> dev-environment.sh then it's using the /dev/hdb image which should be
> ext3.)
>
> The previous (1.0.3) release was using the separate ext2 and ext3
> drivers for the journaled and nonjournaled versions of the same
> filesystem, which was silly. I'm not using ext4 yet, but one unified
> driver for both of those is cool. Shame if it's buggy, but we can get
> it fixed if so...
>
>> any suggestions are welcome ;)
> More info, please.
>
> Rob
>