[Aboriginal] What's musl, anyway? (was: re: aboriginal)
James McMechan
james_mcmechan at hotmail.com
Wed Oct 5 09:57:33 PDT 2011
> Date: Tue, 4 Oct 2011 15:32:24 -0500
> From: rob at landley.net
> To: aboriginal at lists.landley.net; maillist-aboriginal at barfooze.de
> Subject: [Aboriginal] What's musl, anyway? (was: re: aboriginal)
>
> >>> i built both gcc-core-4.5.3 and 4.6.0 on sabotage linux which only has a
> >>> C compiler (since musl doesnt support C++ yet)
> >>> the link time optimization newer gcc's (4.5+) support is quite nice as
> >>> it allows to strip off unneeded functions without putting each function
> >>> into an own object file.
> >> So you mean it's like the --function-sections and --gc-sections options
> >> I've been using since 2005?
> >>
> > it's not the same, it seems lto can also abandon code inside functions
> > which is never invoked.
>
> Ok. Sounds like fun: better optimizer.
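(Roughly the distinction, for anyone following along: a hypothetical
two-file sketch, not code from either build. --gc-sections can only
drop whole unreferenced functions; LTO can also prove branches inside
a referenced function dead.)

    /* config.c */
    int debug_mode(void) { return 0; }
    void do_debug(void) { }

    /* foo.c */
    int debug_mode(void);
    void do_debug(void);

    void unused(void) { }   /* whole function: already droppable with
                               -ffunction-sections + -Wl,--gc-sections */

    int main(void)
    {
        if (debug_mode())   /* -flto can prove this is 0 across files... */
            do_debug();     /* ...and delete the dead call inside main() */
        return 0;
    }

Building with something like "gcc -flto -O2 config.c foo.c" keeps the
compiler's intermediate representation around until link time, which is
what makes the cross-file proof possible.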
gcc 4.6 is much more aggressive: it will, for example, automatically
inline functions and then silently eliminate them based on how the
addresses of local variables are used. Oddly, in one kernel case it
silently removed not just the inlined code but the containing function
as well, along with the structure reference and the call to spinlock:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48623
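(The pattern involved looks roughly like this hypothetical sketch, not
the actual kernel code: the lock is a local whose address never
escapes, so after inlining the store is dead.)

    static void lock_init(int *lock)
    {
        *lock = 0;          /* the only effect is on a caller's local */
    }

    void setup(void)
    {
        int lock;           /* address never escapes this function... */
        lock_init(&lock);   /* ...so after inlining, the store is dead
                               and gcc 4.6 can drop the call entirely */
    }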
> > fact is, the binaries are much smaller than with the dead code
> > elimination flags.
> > also in my tests stuff was partially 50% faster than with -O3 alone.
> > so it's actually a pretty neat feature.
>
> Sounds like you're compiling badly written code, but you work with what
> you've got...
>
> >>> gcc 3.4.6 builds fine with 128 mb and no swap at all...
> >>> also it is relatively slim (i have a statically built one here which
> >>> fits into a 6MB tarball...)
> >>> maybe it would be the best if someone would fork it and add the new
> >>> inline stuff...
> >>> that way it could still be used to build recent kernels.
> >> I built linux 3.0 with gcc 4.2.1 and binutils 2.17 on a dozen
> >> architectures, worked for me. What are you referring to?
> >>
> > i was talking about gcc *3*.
>
> Ah, I missed that.
>
> There are arguments for supporting older toolchain versions, and
> arguments for supporting newer toolchain versions. Mostly I just want a
> working toolchain to bootstrap a target, and then you can natively build
> a new toolchain under that in things like lfs-bootstrap.hdc.
>
> > gcc 3.4.6 is a relatively nice compiler, builds with less than 128MB
> > RAM, a statically linked crosscompiler fits into a 6MB .xz file,
> > it's faster than gcc4, and has relatively good optimization, compared
> > to pcc or tcc.
>
> Compared to tcc, Turbo C for DOS had relatively good optimization.
Which tcc: Turbo C from DOS, or the Tiny C Compiler
(http://bellard.org/tcc), which you appear to be thinking of below?
> The interesting thing that newer gcc versions give you is support for
> more targets. For example, armv7 showed up in gcc 4.3, which is the big
> incentive to support the newer one. Support for the Xilinx microblaze
> would also be nice, since qemu has that now. Alpha and m68k compilers
> that don't die so often with internal compiler errors while cross
> compiling stuff would also be cool, although I suspect the native
> versions of those compilers might be more stable.
>
> > the build time on my 3ghz machine is 5 minutes compared to 45 minutes
> > for gcc4.5 (without mpc/mpfr/gmp, which consume another 5 minutes alone)
> > only thing missing is the gnu99 inline stuff. apart from that it
> > compiles 99% of the code out there.
>
> It is indeed cool. But if I recall it couldn't do simple dead code
> elimination on arm, meaning busybox had a build break trying to link
> code out of .c files it hadn't bothered to compile because it knew they
> wouldn't be used.
>
> >>> all other packages in sabotage linux build just fine with it.
> >>> since pcc is full of bugs and has nearly no optimization at all its not
> >>> gonna be a real option anytime soon...
> >>> and clang is in C++ itself...
> >> Yup. There are some people gluing sparse to llvm, but again: llvm is
> >> c++. I want to glue sparse or tcc to qemu's tcg, but it's down my todo
> >> list a lot...
> >>
> > never heard about tcg. i'll read up on that one.
>
> There's a README in qemu's tcg subdirectory. See also
> http://127.0.0.1/qemu/2008-01-29.html#Feb_1,_2008_-_TCG
Err, Rob, this appears to be an archive on your machine...
http://landley.net/qemu/2008-01-29.html#Feb_1,_2008_-_TCG works.
Is tcg better? tcc (also by Bellard) has i386, x86_64, arm, and c67(?)
code generators, I think, so it already covers the three most common
cases. Or is it just the remaining arches like ppc32, ppc64, sparc32,
sparc64(?), mips32, mips64, and sh4 that are the concern? tcc also
lacks gcc's -M/-MD dependency-generation options for running the
compiler to find out what headers the code being compiled depends on;
that was one of the things the linux kernel and busybox need gcc
for :( That at least would not be too hard to add, but it seems sort
of a waste to run the compiler just to find out what files to
compile later...
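(For reference, the dependency-generation usage in question, as a
sketch with a hypothetical main.c that includes util.h and config.h:

    $ gcc -MM main.c
    main.o: main.c util.h config.h

-M does the same but also lists system headers; make then reads these
fragments so only files whose headers changed get recompiled.)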
> >>>>> on a sidenote, i really missed having a gdb around... wonder if its
> >>>>> possible to supply a binary in the future ?
> >>>> That's been on my todo list for a while, just hasn't been near the top.
> >>>> 6.6 was the last GPLv2 release, I can look into adding that to the
> >>>> cross compiler and the gdbserver binary to the static target binaries
> >>>> list...
> >>>>
> >>> i guess a separate download such as strace would be even better.
> >> I'm working on it, but it's also a bit down my todo list...
> >>
> >> Rob
> >>
> >
> > i currently have an issue here with aboriginal:
> > a) fdisk -l says both (root and home) partitions don't have a valid
> > partition table. i wonder why?
>
> Because they don't. I created filesystem images and attached them to
> qemu virtual disks:
>
> /dev/hda - squashfs root filesystem (mounted on /)
> /dev/hdb - 2 gig writeable ext3 (mounted on /home by dev-environment.sh)
> /dev/hdc - build control image (mounted on /mnt by native-build.sh)
>
> I'm mounting /dev/hda not /dev/hda1. The whole unpartitioned device has
> its own block device, which can have a filesystem on it. (You can do
> this with real hardware too. Floppies were never partitioned. I have
> no idea why removable USB drives tend to be partitioned, I think it's
> windows brain damage.)
>
> Once upon a time I did create partitioned images:
>
> http://landley.net/code/mkhda.sh
>
> But it's extra work for no benefit, and it means you can't easily
> loopback mount them from the host.
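(The unpartitioned layout is also easy to reproduce from the host; a
sketch, with an illustrative 2 gig image file:

    $ dd if=/dev/zero of=hdb.img bs=1M count=2048
    $ mke2fs -j -F hdb.img     # ext3 on the whole image, no partition table
    $ sudo mount -o loop hdb.img /mnt

With a partition table inside the image you would instead need losetup
with a byte offset for each partition, which is the extra work being
avoided.)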
>
> > b) after unpacking and configuring gmp-5.0.2, i have a symlink
> > "gmp-5.0.2/mpn/add_n.asm -> ../mpn/arm/add_n.asm"
> > the symlink target is a regular file, but the readlink syscall returns
> > ELOOP in errno.
This symlink is not present before configure, and the ../mpn/arm style
is slightly odd; I would expect ./arm/add_n.asm to be simpler. Even
the x86_64 version, mpn/add_n.asm -> ../mpn/x86_64/aors_n.asm, is
strange: why a symlink to an arch file with a different name?...
Going up and then back down the mpn directory may be hitting some
oddness, though relative symlinks like that usually work just fine.
ELOOP, indicating too many symlinks in the path, usually only occurs
when a symlink loop is found...
Was it being built as part of the Aboriginal scripts, or was it being
built by hand?
It seems to be part of an arm build, since that is where the symlink
points.
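(A quick way to check whether the syscall itself really fails: a
minimal sketch, assuming it is run from the directory containing the
unpacked gmp-5.0.2 tree.)

    /* readlink_test.c: call the syscall directly and report errno,
       the same way strace would show it */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t len = readlink("gmp-5.0.2/mpn/add_n.asm",
                               buf, sizeof(buf) - 1);

        if (len < 0) {
            printf("readlink: %s\n", strerror(errno));
            return 1;
        }
        buf[len] = '\0';
        printf("-> %s\n", buf);
        return 0;
    }

If that prints "-> ../mpn/arm/add_n.asm" the syscall is fine and the
ELOOP is coming from somewhere else in the build.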
> If the readlink syscall was broken then ls -l wouldn't be able to
> display symlinks. What code is calling the readlink() syscall and
> getting confused? Did you run it under strace? (The static-build.hdc
> control image builds that, I put binaries up at
> http://landley.net/aboriginal/downloads/binaries/extras you can just
> wget, chmod +x, and use if it helps. I can't link you to a specific one
> because I don't remember which target you're building for.)
>
> > that prevents GMP (prerequisite for gcc 4.5) from building.
>
> The lfs-bootstrap.hdc control image builds the gmp from Linux From
> Scratch 6.7 under 11 different targets. That's version 5.0.1 so
> possibly something changed between that and 5.0.2, but I don't
> understand how you're having a system call failure? (How do you know
> it's a system call failure? There's context you're not explaining...)
>
> > i can load the file into vi, both using the symlink and the link target.
> > musl's readdir is just a one liner around the kernel syscall.
>
> Use strace to see what arguments it's passing to the syscall.
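(Something like this would catch the failing call in the middle of the
build, assuming the strace binary mentioned above is on the path:

    $ strace -f -e trace=readlink ./configure 2>&1 | grep ELOOP

-f follows the child processes configure spawns, and -e trace=readlink
shows only the readlink calls, arguments included.)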
>
> > i couldnt reproduce that behaviour with a manually created symlink
> > according to the above scheme.
> > but it is reproducible by untaring gmp again and restarting the build.
> > i suspect that's either a filesystem or kernel bug.
>
> So the symlink is created corrupted?
>
> What version are you using? (The 1.1 release is using the ext4 driver
> for both ext3 and ext2, and if you're untarring into /home under
> dev-environment.sh then it's using the /dev/hdb image which should be ext3.)
>
> The previous (1.0.3) release was using the separate ext2 and ext3
> drivers for the journaled and nonjournaled versions of the same
> filesystem, which was silly. I'm not using ext4 yet, but one unified
> driver for both of those is cool. Shame if it's buggy, but we can get
> it fixed if so...
>
> > any suggestions are welcome ;)
>
> More info, please.
>
> Rob
> _______________________________________________
> Aboriginal mailing list
> Aboriginal at lists.landley.net
> http://lists.landley.net/listinfo.cgi/aboriginal-landley.net