<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
<div class="moz-text-flowed" style="font-family: -moz-fixed;
font-size: 12px;" lang="x-western">let me answer the question in
the subject first:
<br>
musl is a new libc for linux, which imo is exceptionally well
made.
<br>
according to my own experience it's smaller, faster and more
correct than uClibc.
<br>
here is a comparison: <a class="moz-txt-link-freetext"
href="http://www.etalabs.net/compare_libcs.html">http://www.etalabs.net/compare_libcs.html</a>
<br>
(it even has stuff which uClibc lacks, such as posix_spawn, which
is a really handy function)
<br>
<br>
musl has had ARM support since the last release, and i'm currently
porting sabotage linux
<br>
to build on that arch, using aboriginal as my build platform.
<br>
there are a couple of issues (mostly the old binutils used by
aboriginal, which don't
<br>
play well with musl's gcc wrapper, the lack of a debugger
<br>
and the speed and memory restrictions of qemu), however i'm making
progress...
<br>
<br>
On 10/04/2011 10:32 PM, Rob Landley wrote:
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">i built
both gcc-core-4.5.3 and 4.6.0 on sabotage linux which only
has a
<br>
C compiler (since musl doesn't support C++ yet)
<br>
the link-time optimization newer gccs (4.5+) support is
quite nice as
<br>
it allows stripping out unneeded functions without putting
each function
<br>
into its own object file.
<br>
</blockquote>
So you mean it's like the -ffunction-sections and
--gc-sections options
<br>
I've been using since 2005?
<br>
<br>
</blockquote>
it's not the same; it seems lto can also drop code inside
functions
<br>
which is never invoked.
<br>
</blockquote>
Ok. Sounds like fun: better optimizer.
<br>
</blockquote>
<br>
yeah, it's pretty neat, once you are past the build stage...
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">fact is,
the binaries are much smaller than with the dead code
elimination
<br>
flags.
<br>
also in my tests some code was up to 50% faster than with -O3
alone.
<br>
so it's actually a pretty neat feature.
<br>
</blockquote>
Sounds like you're compiling badly written code, but you work
with what
<br>
you've got...
<br>
<br>
</blockquote>
<br>
not really, i did test a couple of different self-written arraylist
implementations.
<br>
using the -flto -fwhole-program flags, i could see big speed and
size differences,
<br>
when all involved c-files were thrown at the compiler at once,
<br>
as opposed to just linking object files together.
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">gcc
3.4.6 builds fine with 128 MB and no swap at all...
<br>
also it is relatively slim (i have a statically built one
here which
<br>
fits into a 6MB tarball...)
<br>
maybe it would be best if someone forked it and
added the new
<br>
inline stuff...
<br>
that way it could still be used to build recent kernels.
<br>
</blockquote>
I built linux 3.0 with gcc 4.2.1 and binutils 2.17 on a
dozen
<br>
architectures, worked for me. What are you referring
to?
<br>
<br>
</blockquote>
i was talking about gcc *3*.
<br>
</blockquote>
Ah, I missed that.
<br>
<br>
There are arguments for supporting older toolchain versions, and
<br>
arguments for supporting newer toolchain versions. Mostly I
just want a
<br>
working toolchain to bootstrap a target, and then you can
natively build
<br>
a new toolchain under that in things like lfs-bootstrap.hdc.
<br>
<br>
</blockquote>
<br>
i actually think a handful of patches on the kernel source would
suffice
<br>
to still build a complete linux system using gcc 3.
<br>
given the immense memory requirements when compiling gcc 4.5+, it
could
<br>
even be possible that gcc 3 is the only viable option.
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">gcc 3.4.6
is a relatively nice compiler, builds with less than 128MB
<br>
RAM, a statically linked cross-compiler fits into a 6MB .xz file,
<br>
it's faster than gcc4, and has relatively good optimization,
compared
<br>
to pcc or tcc.
<br>
</blockquote>
Compared to tcc, Turbo C for DOS had relatively good
optimization.
<br>
<br>
The interesting thing that newer gcc versions give you is
support for
<br>
more targets. For example, armv7 showed up in gcc 4.3, which is
the big
<br>
incentive to support the newer one. Support for the Xilinx
microblaze
<br>
would also be nice, since qemu has that now. Alpha and m68k
compilers
<br>
that don't die so often with internal compiler errors while
cross
<br>
compiling stuff would also be cool, although the native
versions of
<br>
those compilers might be more stable.
<br>
<br>
</blockquote>
<br>
indeed, the more targets, the better. however i've seen in gcc's
changelog that
<br>
in recent releases some old architectures have been removed...
<br>
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">the build
time on my 3GHz machine is 5 minutes, compared to 45 minutes
<br>
for gcc 4.5 (without mpc/mpfr/gmp, which consume another 5
minutes alone)
<br>
the only thing missing is the gnu99 inline stuff. apart from that
it
<br>
compiles 99% of the code out there.
<br>
</blockquote>
It is indeed cool. But if I recall it couldn't do simple dead
code
<br>
elimination on arm, meaning busybox had a build break trying to
link
<br>
code out of .c files it hadn't bothered to compile because it
knew they
<br>
wouldn't be used.
<br>
<br>
</blockquote>
<br>
yep, that's the tradeoff you have to make... features vs bloat :/
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">all
other packages in sabotage linux build just fine with it.
<br>
since pcc is full of bugs and has nearly no optimization
at all, it's not
<br>
gonna be a real option anytime soon...
<br>
and clang is written in C++ itself...
<br>
</blockquote>
Yup. There are some people gluing sparse to llvm, but
again: llvm is
<br>
c++. I want to glue sparse or tcc to qemu's tcg, but it's
down my todo
<br>
list a lot...
<br>
<br>
</blockquote>
never heard of tcg. i'll read up on that one.
<br>
</blockquote>
There's a README in qemu's tcg subdirectory. See also
<br>
<a class="moz-txt-link-freetext"
href="http://127.0.0.1/qemu/2008-01-29.html#Feb_1,_2008_-_TCG">http://127.0.0.1/qemu/2008-01-29.html#Feb_1,_2008_-_TCG</a>
<br>
<br>
</blockquote>
<br>
thanks, that looks pretty interesting.
<br>
the level of optimization done is tiny, though (no idea how much
it has improved since then).
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">on
a sidenote, i really missed having a gdb around...
wonder if it's
<br>
possible to supply a binary in the future?
<br>
</blockquote>
That's been on my todo list for a while, just hasn't
been near the top.
<br>
6.6 was the last GPLv2 release, I can look into
adding that to the
<br>
cross compiler and the gdbserver binary to the static
target binaries
<br>
list...
<br>
<br>
</blockquote>
i guess a separate download, as with strace, would be even
better.
<br>
</blockquote>
I'm working on it, but it's also a bit down my todo list...
<br>
<br>
Rob
<br>
<br>
</blockquote>
i currently have an issue here with aboriginal:
<br>
a) fdisk -l says both (root and home) partitions don't have a
valid
<br>
partition table. i wonder why?
<br>
</blockquote>
Because they don't. I created filesystem images and attached
them to
<br>
qemu virtual disks:
<br>
<br>
/dev/hda - squashfs root filesystem (mounted on /)
<br>
/dev/hdb - 2 gig writeable ext3 (mounted on /home by
dev-environment.sh)
<br>
/dev/hdc - build control image (mounted on /mnt by
native-build.sh)
<br>
<br>
I'm mounting /dev/hda not /dev/hda1. The whole unpartitioned
device has
<br>
its own block device, which can have a filesystem on it. (You
can do
<br>
this with real hardware too. Floppies were never partitioned.
I have
<br>
no idea why removable USB drives tend to be partitioned, I think
it's
<br>
windows brain damage.)
<br>
<br>
Once upon a time I did create partitioned images:
<br>
<br>
<a class="moz-txt-link-freetext"
href="http://landley.net/code/mkhda.sh">http://landley.net/code/mkhda.sh</a>
<br>
<br>
But it's extra work for no benefit, and it means you can't
easily
<br>
loopback mount them from the host.
<br>
<br>
</blockquote>
<br>
ah, good to know. thanks for the clarification.
<br>
<br>
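for reference, the whole-device setup described above can be sketched like this (file name and size are made up; the mount step needs root):

```shell
# create a 64MB image and put an ext3 filesystem directly on it,
# no partition table anywhere
dd if=/dev/zero of=hdb.img bs=1M count=64

# skip mkfs quietly if e2fsprogs isn't installed
command -v mke2fs >/dev/null && mke2fs -j -F hdb.img

# loopback-mount it from the host (as root):
#   mount -o loop hdb.img /mnt
# or hand the same file to qemu as an unpartitioned disk:
#   qemu-system-arm ... -hdb hdb.img
```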
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">b) after
unpacking and configuring gmp-5.0.2, i have a symlink
<br>
"gmp-5.0.2/mpn/add_n.asm -> ../mpn/arm/add_n.asm"
<br>
the symlink target is a regular file, but the readlink syscall
returns
<br>
ELOOP in errno.
<br>
</blockquote>
If the readlink syscall was broken then ls -l wouldn't be able
to
<br>
display symlinks. What code is calling the readlink() syscall
and
<br>
getting confused? Did you run it under strace? (The
static-build.hdc
<br>
control image builds that, I put binaries up at
<br>
http://landley.net/aboriginal/downloads/binaries/extras you can
just
<br>
wget, chmod +x, and use if it helps. I can't link you to a
specific one
<br>
because I don't remember which target you're building for.)
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">that
prevents GMP (prerequisite for gcc 4.5) from building.
<br>
</blockquote>
The lfs-bootstrap.hdc control image builds the gmp from Linux
From
<br>
Scratch 6.7 under 11 different targets. That's version 5.0.1 so
<br>
possibly something changed between that and 5.0.2, but I don't
<br>
understand how you're having a system call failure? (How do you
know
<br>
it's a system call failure? There's context you're not
explaining...)
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">i can load
the file into vi, both using the symlink and the link target.
<br>
musl's readlink is just a one-liner around the kernel syscall.
<br>
</blockquote>
Use strace to see what arguments it's passing to the syscall.
<br>
<br>
</blockquote>
<br>
yep, using strace i could hunt down the bug.
<br>
i was previously inserting printf's into musl code and recompiling
every time, for lack of a debugger...
<br>
not really used to that kind of debugging. getting a working
statically compiled gdb for ARM is at the
<br>
top of my TODO list.
<br>
<br>
musl's fcntl.h had the values from i386, which slightly differ on
ARM, so
<br>
the O_LARGEFILE of musl's open() was interpreted as O_NOFOLLOW
by the kernel.
<br>
it was fixed yesterday by <a class="moz-txt-link-freetext"
href="http://git.etalabs.net/cgi-bin/gitweb.cgi?p=musl;a=commitdiff;h=e6d765a8b1278e9e5f507638ccdec9fe40e52364">http://git.etalabs.net/cgi-bin/gitweb.cgi?p=musl;a=commitdiff;h=e6d765a8b1278e9e5f507638ccdec9fe40e52364</a>
<br>
<br>
(that's also a nice thing about musl: i report a bug and 10
minutes later it's fixed, usually)
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">
<blockquote type="cite" style="color: rgb(0, 0, 0);">i couldn't
reproduce that behaviour with a manually created symlink
<br>
according to the above scheme.
<br>
but it is reproducible by untarring gmp again and restarting
the build.
<br>
i suspect that's either a filesystem or kernel bug.
<br>
</blockquote>
So the symlink is created corrupted?
<br>
<br>
What version are you using? (The 1.1 release is using the ext4
driver
<br>
for both ext3 and ext2, and if you're untarring into /home under
<br>
dev-environment.sh then it's using the /dev/hdb image which
should be ext3.)
<br>
<br>
The previous (1.0.3) release was using the separate ext2 and
ext3
<br>
drivers for the journaled and nonjournaled versions of the same
<br>
filesystem, which was silly. I'm not using ext4 yet, but one
unified
<br>
driver for both of those is cool. Shame if it's buggy, but we
can get
<br>
it fixed if so...
<br>
<br>
<blockquote type="cite" style="color: rgb(0, 0, 0);">any
suggestions are welcome <span class="moz-smiley-s3"
title=";)"><span>;)</span></span>
<br>
</blockquote>
More info, please.
<br>
<br>
Rob
<br>
<br>
</blockquote>
<br>
</div>
</body>
</html>