[Aboriginal] Couple of bugs.
rlandley at parallels.com
Sat Apr 9 23:11:20 PDT 2011
On 04/09/2011 10:52 AM, David Seikel wrote:
> On Sat, 9 Apr 2011 10:35:56 -0500 Rob Landley <rlandley at parallels.com>
>> On 04/09/2011 10:06 AM, David Seikel wrote:
>>> On Sat, 9 Apr 2011 09:16:56 -0500 Rob Landley
>>> <rlandley at parallels.com> wrote:
>>>> I need to figure out what infrastructure goes into aboriginal linux
>>>> and what goes into the separate control-images. (Automated native
>>>> build control images. Distro bootstrapping. I suck at naming
>>>> things, anybody have a suggestion? They can share the mailing list
>>>> until there's enough traffic to justify splitting it off, but I
>>>> have to name the mercurial repository in order to put it online...)
>>> Rainbow serpent? "moves through water and rain, shaping landscapes,
>>> naming and singing of places, swallowing and sometimes drowning
>>> people; strengthening the knowledgeable with rainmaking and healing
>>> powers; blighting others with sores, weakness, illness, and death."
>> Never heard of it before.
> Australian aboriginal mythology that is reasonably common across
> Australia, and quite well known here.
The name "aboriginal linux" is like "aboriginal forests", and has
nothing to do with the native people of a country I've never been to,
who have apparently lost so much of their original culture that they now
identify themselves with a latin phrase ("ab origine") rather than
anything that predated settlement of their continent by europeans, and
have allowed themselves to become so strongly identified that any other
use of that latin phrase is assumed to refer to them, and they're
apparently ok with this in a way that native americans are not when it
comes to the label "indian".
Hell no. (And yes I'm getting a touch sick of the topic.)
>>> The control images could be both examples, and bootstrap steps to
>>> the popular distros you have already mentioned. Stopping short of
>>> being distros or distro builders, and letting the distro maintainers
>>> themselves deal with the next step once we get them bootstrapped.
>>> Though perhaps a full BLFS build is on the cards, since that has a
>>> limit on the packages, is a good shake down, and a good match.
>> Yup. There's good reasons to _do_ it, it's just the scope of the
>> control images heap could grow larger than the rest of aboriginal
>> linux combined. How many distros are there?
> That's why I said "popular distros you have already mentioned". That's
> where you draw the line and say no. LFS, Gentoo, Debian, Red Hat, and
> Ubuntu. Maybe ArchLinux as well. Certainly not Android.
My first exposure to Linux predates Ubuntu by more than a decade, and my
switch to Linux as my exclusive desktop OS predates its first release by
about 7 years. Arch is noticeably more recent. I believe Slackware has
more actual users than Linux From Scratch...
Deciding where to stop, and what to include, is policy. (When users
show up contributing things I wasn't originally interested in, I tend to
take that as a sign there is interest in them.)
>>> Could be both separate, and being able to use Aboriginal Linux as
>>> the build environment by having the two side by side, so people can
>>> run both with simple scripts, then use the results to build some
>>> sort of bootable image, or keep running it under qemu.
>>> Could also have the option of combining it into one image, and not
>>> two where one is mounted automatically, then have to manually
>>> chroot into it to actually do anything with the bootstrap under
>> The control images are target independent, the system images are
>> target specific. (And the control images can be pretty darn big, the
>> LFS one is 350 megs.) It makes sense to me to keep them separate.
> Eventually a lot of users of the system will want to build bootable
> images, plus for the likes of LFS, include the toolchain that has been
> used so far.
Yes, that's why it's important to allow them to be used together easily.
I don't see the reasoning for bundling together a target-dependent
automated build that only produces one thing. Why not just ship the
output instead? (Which is the uploaded tarball you pointed out was misnamed.)
I can see shipping the source, and I can see shipping the end result,
and shipping flexible intermediate stages (such as system images that
can build anything for a target, and control images that can be built
for many different targets). But why ship an inflexible intermediate
stage that only produces a single result, instead of shipping that result?
>> If you use the run-emulator.sh, it runs the build automatically, but
>> there's a 3 second delay where it prompts you to press a key to get a
>> shell prompt, and only starts the build if you don't.
>> I can easily change how that works, but it's really a user interface
>> question. (Aesthetic issue, not necessarily a one true answer there.)
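The timed-prompt behavior described above can be sketched in shell. This is a hypothetical standalone sketch, not the actual run-emulator.sh code; the "interactive"/"building" echoes stand in for exec'ing a shell or launching the build, and `read -t`/`read -n` assume bash (or a busybox ash built with those options):

```shell
#!/bin/bash
# Sketch of the "press a key for a shell" boot pattern: wait 3 seconds
# for a single keypress, drop to a shell if one arrives, otherwise
# kick off the automated build.
prompt_or_build()
{
  echo "Press any key for a shell prompt (build starts in 3 seconds)..."
  if read -t 3 -n 1 KEY 2>/dev/null
  then
    echo "interactive"   # real script would exec /bin/sh here
  else
    echo "building"      # real script would run the build stages here
  fi
}
```

With stdin closed (or at end of file) the read fails immediately and the build path is taken, which is also what happens in a fully automated, non-interactive boot.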
> Does it do the chroot step into the bootstrap image when you press
> that key?
Currently you have to go:
do_in_chroot /home/lfs /mnt/run-build-stages.sh
Next release (which is horribly overdue; I think I need to back out the
uClibc-NPTL upgrade and postpone it until the following release just to
get it unblocked), I'm breaking out do_in_chroot into its own script and
putting it in the system image, at which point you should be able to
do-in-chroot /home/lfs /mnt/run-build-stages.sh
Also, it should autodetect if /home/lfs is already there, and if so just
--bind mount the host's mounts (/proc /sys /dev and so on) into the
target and do the chroot without re-copying everything. That requires
some surgery on the script to implement, though...
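The reuse path described above (skip the copy, just bind mount the kernel filesystems and chroot) can be sketched roughly as follows. This is a hypothetical sketch of the planned behavior, not the shipped do-in-chroot; the directory and command are whatever the caller passes in, and running the real commands requires root. Set `DRYRUN=echo` to print the commands instead of executing them:

```shell
#!/bin/bash
# Sketch: reuse an already-populated chroot directory by bind mounting
# the host's kernel filesystems into it, then chrooting straight in,
# instead of re-copying the whole root filesystem.
do_in_chroot_reuse()
{
  local DIR="$1"
  shift

  # Mount points already exist if $DIR was populated on a previous run.
  for FS in proc sys dev
  do
    $DRYRUN mount --bind "/$FS" "$DIR/$FS"
  done

  $DRYRUN chroot "$DIR" "$@"

  # A real version would also unmount in reverse order on the way out.
}
```

Usage would look like `do_in_chroot_reuse /home/lfs /mnt/run-build-stages.sh`, mirroring the command shown earlier.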
>>> I'd probably dump an EFL on frame buffer, and EFL on X minimal
>>> bootstrap into the EFL repo, as an example to others to not start
>>> dumping random collections of bootstraps into your repos. They
>>> already have an OpenEmbedded area in their SVN.
>> I do want to set up a vnc desktop export. In theory x.org does this
>> built-in, in practice I've misplaced the instructions I got it to work
>> with last time. (Maybe it was in BLFS somewhere...)
> Or don't pass -nographic to qemu, which I already have a modified
> run-emulator.sh doing. Something I'll need to play with more for this
The target's settings file creates the run-emulator.sh script, you can
put anything you want in there. (Even use an emulator other than qemu.)
However, what I meant was having the image export VNC rather than having
qemu emulate a VNC-based video card. (Part of the reason is I'm
fiddling with LXC containers in my day job, and I want to be able to
export a VNC desktop from within a container and run it in a window on a
host. Kir Kolyshkin (OpenVZ maintainer and a co-worker of mine) made
this work at Scale, and I had it working on the Xilinx microblaze last
year, I think the instructions are buried down in BLFS somewhere. My
recent attempts to google for it have just pulled up external VNC
projects when x.org has built-in support for it these days. (It's on
the todo list. The fact my OLS paper proposal "Why containers are
awesome" has been accepted means I need to start seriously researching
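For reference, the built-in approach being described is the loadable VNC extension module some distros package for the X server (rather than an external project like x11vnc). Assuming that module is installed, the xorg.conf stanzas look roughly like this; the exact option names vary between module versions, so treat this as a sketch:

```
Section "Module"
	Load "vnc"
EndSection

Section "Screen"
	Identifier "Screen0"
	Option "SecurityTypes" "VncAuth"
	Option "PasswordFile" "/root/.vnc/passwd"
EndSection
```

With that loaded, the X server itself listens for VNC clients on its display, which is what lets a container export its desktop without a separate VNC server process.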
> Think I have come up with an acceptable way for me to create a
> multi-partition bootable image suitable for dd'ing onto a CF card, without
> having to use root on the host.
I used to have scripts that did this... (Rummage, rummage... Oh, duh.)
See also attached control-image script which I whipped up for somebody
on the old list a year or so back. (Alas, it involved lilo so was
x86-specific, which is why I never followed up on it or merged it. You
could almost make a universal bootloader out of u-boot, but since it
went GPLv3 it doesn't count anymore. Since QEMU _has_ a built-in
universal bootloader with the -kernel option, I declared victory and
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 2518 bytes
Desc: not available