rob at landley.net
Tue Dec 6 06:25:11 PST 2011
On 12/02/2011 08:24 PM, James McMechan wrote:
>>> On a side note, I noticed busybox defconfig even has a webserver so
>>> you could add your own private package mirror to dev-environment.sh
>> I could. But I prefer to leave that sort of thing to other people: I'm
>> intentionally agnostic about what you _do_ with the resulting system.
>> The hdc infrastructure lets you make self contained build images, but my
>> examples include one that just runs the busybox test suite.
> Err, I was thinking of having a local package mirror that could be set up
> automatically when running dev-environment.sh.
> After all, when you have just built this version you know where a valid
> set of packages is. Also it is somewhat humorous that the minimal busybox
> includes a webserver...
The "packages" directory is just enough to re-run the aboriginal linux
build, which isn't actually a very interesting thing to do under the target.
Yeah, it's great that busybox has a webserver, but what do you want to
export? That's the problem: dev-environment.sh has no idea what you
want to DO on the target, it's intentionally agnostic. And if we're
going to set up a server, it might as well have upload _and_ download
capability, hence the ftpd with ftpsend and such that the existing hdc
images are using.
>>> mdev from busybox takes care of the rest of the device files later.
>>> The new DEVTMPFS in the kernel might make even that unneeded
>>> I have not tested without /dev/console yet though...
>> Doesn't devtmpfs give you /dev/console?
> I have not yet tried a devtmpfs-only /dev. It should work, but does not
> automount in initramfs, if I am reading the documentation correctly.
> Some early parts of userspace used to break without /dev/console.
The existing stuff has been using it for a while, and it's working for
me. I believe I mount it from the init script and provide a
/dev/console in the existing root filesystem, though.
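That mount-it-from-init approach can be sketched roughly like this. This is a hedged sketch, not the actual Aboriginal init script: the function name, the DEV path variable, and the DRYRUN knob are all illustrative additions so the commands can be previewed without root.

```shell
#!/bin/sh
# Sketch of setting up /dev from an init script (illustrative, not the
# real Aboriginal code). DEV and DRYRUN are hypothetical knobs added here
# so the commands can be previewed without root.

DEV=${DEV:-/dev}

setup_dev()
{
  # devtmpfs does not automount in initramfs, so mount it explicitly.
  $DRYRUN mount -t devtmpfs devtmpfs "$DEV"
  # On kernels without CONFIG_DEVTMPFS, guarantee /dev/console exists
  # (early userspace used to break without it), then let busybox mdev
  # populate the remaining device nodes.
  [ -e "$DEV/console" ] || $DRYRUN mknod "$DEV/console" c 5 1
  $DRYRUN mdev -s
}

# Preview the commands (a real init script would leave DRYRUN empty):
DRYRUN=echo DEV=/nonexistent setup_dev
```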
>>>>> #needed for od -t option in busybox build
>>>> I've argued with Denys about that. CONFIG_DESKTOP is not well defined,
>>>> there's no clear rule for what it does and doesn't do.
>>>> This config is enough to build aboriginal linux itself, with this in
>>> It was working earlier, last tested about 1.0.2, I need to check with the new
>>>> A) Does that include running the ./download.sh stage?
>>> download.sh does not use the host tools path and breaks at the moment
>> It will if you run host-tools.sh, zap packages, and then re-run
> You sure? It looked like it overrode the path back to OLDPATH.
Only for do_manifest, which is an optional step. (Which can call "git"
and "hg" to get version numbers.)
Let's see, by manual inspection do_manifest can call: hg, sed, cat,
echo, zcat, bzcat, git, svn.
But fundamentally it isn't interesting, it's a documentation step that
can happily fail if you haven't got the appropriate source control
plumbing installed to query version numbers of development packages.
(Why/how you'd be using development snapshots without the appropriate
stuff installed, I have no idea, but it shouldn't barf on it.)
>> Busybox always gets unhappy about those when perl isn't installed, but
>> it's not a fatal error.
>>> leaves a statically linked busybox binary in host-tools-i686
> Yep, more/test.sh will not accept an arch of host, so I was using i686
> and it puts stuff in $STAGE_NAME-$ARCH.
Ah. Ok. Fixed: say "host" and it'll skip the load_target.
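The fix amounts to something like the following sketch (illustrative names, not the actual more/test.sh code; the load_target here is a stand-in stub for the real per-target setup function):

```shell
#!/bin/sh
# Sketch of the "host skips load_target" fix (illustrative, not the
# actual more/test.sh code).

load_target()   # stand-in stub for the real per-target setup function
{
  echo "loading per-target config for $1"
}

setup_arch()
{
  # "host" isn't a real target: it builds with the host toolchain, kernel
  # headers, and C library, so there's no per-target config to load.
  [ "$1" = host ] || load_target "$1"
}
```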
>> Um, host-tools.sh is target agnostic, it imports sources/include.sh but
>> never calls load_target. I'm confused?
> I was trying out different mini-configs for busybox in a manual fashion.
> Um, I think of host as the first target: the one built with the host
> toolchain, kernel headers, and host C library.
Host isn't really a target. It doesn't work like the other targets, and
the design assumptions are different.
>>> It would be somewhat easier if host acted like a real target.
>> The point of the host stuff is sort of to be target agnostic. You
>> should be able to build all the targets under the same host environment.
>> (The fact that different releases of Ubuntu and Fedora and such don't
>> _provide_ the same host environment is why host-tools.sh exists, but the
>> point stands.)
> Err, building the aboriginal host tool set was the target I was referring to.
So you want me to build a toolchain for the host before building
packages for the host?
How? With what? (The cross-compiler.sh step has to cope with this
anyway to get a properly portable toolchain, which is why you have to
specify the CROSS_COMPILER_HOST for that. But that
simple-cross-compiler is built with the actual host tool chain. Can't
escape it at some level, and adding more layers of indirection makes
things _less_ flexible if you do it wrong.)
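For illustration, a hypothetical invocation using the variable named above might look like this (the exact script name and semantics are assumptions on my part):

```shell
# Hypothetical: build a sparc cross compiler whose binaries run on i686,
# bootstrapped from whatever toolchain the host actually provides.
CROSS_COMPILER_HOST=i686 ./cross-compiler.sh sparc
```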
>>> The current special case logic seems to get in my way as much as it
>>> helps when trying repeated host-tools setups.
>> Which special case logic?
> whether busybox is linked with the host libc or uClibc from aboriginal
> for example.
# Set this to a comma separated list of packages to build statically,
# or "none" to build all packages dynamically. Set to "all" to build
# all packages statically (and not install static libraries on the
# target.)
# By default, busybox and the native compiler are built statically.
# (Using a static busybox on the target provides a 20% performance
# boost to autoconf under qemu, and building the native compiler static
# makes it much more portable to other target root filesystems.)
# export BUILD_STATIC=busybox,binutils,gcc-core,gcc-g++,make
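For example, a hypothetical invocation overriding that default might be:

```shell
# Hypothetical: build only busybox statically, everything else dynamic.
BUILD_STATIC=busybox ./build.sh i686
```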
The fact that you CAN'T build it statically for the host is due to a bug
in glibc, as described at length here:
The reason the glibc bug never got fixed (last I checked) is that glibc
maintainer Ulrich Drepper went crazy and deprecated static linking entirely:
>> Its "usage:" example suggests setting STAGE_NAME so in this case you
>> probably want:
>> STAGE_NAME=simple-cross-compiler more/test.sh sparc build_section uClibc
>> I should probably put that example in the FAQ... Done.
>>> I quickly commented them out, but it would
>>> seem that having each package saved in the build directory not just
>>> the assembled meta-packages would be nice.
>> Um, do you mean like the config option BINARY_PACKAGE_TARBALLS?
> Looks like it. Does it remember the tarballs and not build them again when present?
> I will have to check this option.
No, that's handled at another level.
The "build.sh" script checks for the stage tarballs, and won't rebuild
them if they already exist. (Except that if it has to rebuild a
prerequisite stage, it deletes the tarballs that depend on it, so it
rebuilds those dependent stages when it gets to them.)
You can force it to rebuild even if the tarball is there by naming the
stage on your command line. So you can go:
./build.sh i686 native-compiler
And it should rebuild native-compiler, and go on to rebuild
root-filesystem.sh, root-image.sh, and system-image.sh. (But it
shouldn't have to rebuild simple-root-filesystem.sh because that doesn't
depend on native-compiler.sh.)
It'll redo the binary package tarballs whenever it calls build_section:
if it rebuilt the binaries, it rebuilds the tarball.
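That skip/invalidate behavior can be sketched as follows. The variable and path names are illustrative, not build.sh's actual code: a stage with an existing tarball is skipped unless named on the command line, and rebuilding a stage deletes the tarballs of the stages that depend on it.

```shell
#!/bin/sh
# Hedged sketch of the stage skip/invalidate logic (illustrative names,
# not build.sh's actual code).

ARCH=${ARCH:-i686}

stage_tarball() { echo "build/$1-$ARCH.tar.gz"; }

maybe_build()
{
  STAGE="$1" DEPENDENTS="$2"
  # Skip a stage whose tarball already exists, unless it was named on the
  # command line (FORCE_STAGES), which forces a rebuild.
  if [ -f "$(stage_tarball "$STAGE")" ] &&
     ! echo " $FORCE_STAGES " | grep -q " $STAGE "
  then
    echo "skip $STAGE"
    return
  fi
  echo "build $STAGE"
  # Rebuilding a prerequisite invalidates the stages built on top of it:
  # delete their tarballs so they rebuild when the script reaches them.
  for DEP in $DEPENDENTS; do rm -f "$(stage_tarball "$DEP")"; done
}
```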