[Aboriginal] Aboriginal Linux with a recent toolchain

Rob Landley rob at landley.net
Sat Dec 13 15:03:08 PST 2014


On 12/12/2014 04:57 AM, Alessio Igor Bogani wrote:
> Hi everyone,
> 
> Has anyone instructed Aboriginal Linux to download, build and use a
> more recent toolchain?

People keep asking me about this off-list. A quick search of my
"sent" folder finds:

On 12/05/14 12:00, Jazzoo Watchman wrote:
> I would like to add/extend aboriginal support for this device :
>
> cat /proc/cpuinfo
> Processor    : ARMv7 Processor rev 3 (v7l)
> processor    : 0
> BogoMIPS    : 38.40
>
> processor    : 1
> BogoMIPS    : 38.40
>
> Features    : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4
> idiva idivt
> CPU implementer    : 0x41
> CPU architecture: 7
> CPU variant    : 0x0
> CPU part    : 0xc07
> CPU revision    : 3
>
> Hardware    : Qualcomm MSM 8610 (Flattened Device Tree)
> Revision    : 0006
> Serial        : 0000000000000000
>
> Would you provide me some pointers in the direction I should take?

The problem is I'm using the last GPLv2 releases of gcc and binutils
(versions 4.2.1 and 2.17 respectively), and those were right before
armv7 support went into the repository.

There are three approaches:

1) Cherry-pick ARMv7 support out of the git repositories from before the
license changed to GPLv3. I've done this for binutils already (grabbed
the last GPLv2 commit), but haven't done so for gcc yet because the
support packages (mpfr and such) are in separate repositories and it's
kind of fiddly to re-integrate them into a buildable package. (A few
releases later they were split up into completely separate packages, but
in 4.2 they're still integrated.)

2) Build an external armv7l toolchain from GPLv3 source using a project
like buildroot, or download the CodeSourcery versions. The problem
is that we don't just need cross compilers, we need _native_ compilers,
and nobody seems to produce native compiler binaries.

3) Switch to a different toolchain like pcc or llvm. I'm following
http://ellcc.org reasonably closely, but it doesn't do what I need yet
and the build dependencies are rather unfortunate. (You can't get a
self-bootstrapping system with just 7 packages there, the toolchain
_alone_ is more than 7 packages.)

For armv8 and newer processors like Qualcomm's Hexagon, #1 isn't
necessarily an option. I'm leaning towards #3, but it's a very large
change. For your purposes, #2 might be easiest.

I'm happy to discuss it more, but that's the basic issue.

Rob

On 12/03/14 13:44, XXXX at student.agh.edu.pl wrote:
> Hi, Can you give me some information I need about the aboriginal linux?

Did you read http://landley.net/aboriginal/about.html and the FAQ.html page?

> I'm doing a research on my university and really need it. So the
> questions are:
> - How much RAM does the system need per thread/fifo/semaphore?

I'm not really sure what question you're asking here...

The QEMU images configure themselves with 256 megabytes of memory to
provide a development environment; that's about how much gcc needs at a
time to compile the largest chunks of the packages in Linux From Scratch.

I vaguely recall that if Linux forks a new thread and the new thread
immediately waits(), it dirties three pages. The task structure is at
the start of the process stack (well, task struct, environment space,
then stack), so it'll dirty a stack page, need a new task struct, and
the third might be a page table entry for forking a new VM. Everything
else is copy on write, and the executable pages are mmaped() out of the
executable (they get dirtied if you do dynamic linking without PIC to
group the relocations, but you can avoid that with static linking).

I haven't looked at the memory usage of individual FIFOs and semaphores.
(Do you mean semaphores in kernel, or do you mean futexes in userspace?)

> - How many timer does it have?

QEMU provides a programmable interval timer the emulated Linux kernel
uses to drive the scheduler. (Linux keeps defaulting to the emulated CPU
cycle counter and then falling back to the external timer because
emulated cycle counters aren't constant.)

If you boot the system on real hardware, presumably the kernel uses
whatever timers you provide it?

> - Does it support energy saving?

Aboriginal's default deployment is to QEMU, an emulator. It allows you
to use it on real hardware, but generally expects you to provide your
own Linux kernel .config in that case.

> - How does it deal with multitasking?

It's Linux. It just works.

If you mean how does it deal with SMP, a QEMU deployment only really
does single processor well because dealing with atomicity guarantees
between emulated instructions turns out to be really hard (the dynamic
translation technique it uses emulates at the page level rather than the
instruction level, and the atomicity guarantees are at the instruction
level). People have talked about making qemu multithreaded for years,
and they've used that to offload various I/O tasks, but it turns out to
be kind of hard to actually use multiple threads to implement multiple
processors that will interact correctly.

On real hardware, it uses the real hardware's SMP.

Aboriginal Linux _does_ have a trick to speed up emulated native builds:
it installs distcc and runs distccd on the host hooked up to the cross
compiler, and then sends compile jobs to the host through the virtual
network. The way distcc works is it runs each .c file through the
preprocessor locally, sends the resulting self-contained .c file through
the network to distccd, gets back a .o file, and then links the .o files
together when it's done. This means header and library search paths are
local to the emulator, so there's no host/target confusion (there's just
one context inside the emulator, so you can't get the contexts confused
and leaking between each other).

Using distcc can take advantage of the host's SMP support to execute
compile jobs in parallel. (In that case, the ./configure stage of
package builds becomes the main bottleneck.)

> - What licence it is on? (As linux should be GPL?)

The build scripts are BSD licensed, the individual packages are GPLv2.
We use the last GPLv2 releases of binutils and gcc (2.17 and 4.2.1
respectively, both a bit old now) to avoid needing to ship GPLv3
components. I hope to move to http://ellcc.org someday but it's not
ready yet.

> Really looking forward to hearing from you,
> Best regards,
> Michal Kidawa

I'm not sure that helped, but there it is.

Rob



On 11/06/14 03:59, Laurent Vivier wrote:
>
>> Le 6 novembre 2014 à 04:44, Rob Landley <rob at landley.net> a écrit :
>>
>>
>>
>>
>> On 11/04/14 02:57, Laurent Vivier wrote:
>>> Hi,
>>>
>>> this is strange as it is not in a part I modify.
>>>
>>> Did you start from a clean directory ? Did you run "make distclean" ?
>>> What is your configure parameters ?
>>
>> I did a hard reset on the repo (git clean -fdx && git checkout -f) and
>> it built! And it ran my aboriginal root filesystem! And it seems to be
>> doing fork and everything.
>  
> Really nice. Where can I download your disk image ?

It's my stock Aboriginal Linux image. (A contributor got it working
under aranym years ago, so the kernel's wrong, but userspace is the same.)

You can build it yourself by downloading the current aboriginal linux
version and doing "./build.sh m68k" or downloading the prebuilt binary
from http://landley.net/aboriginal/bin/system-image-m68k.tar.bz2
(The squashfs root filesystem image is hda.sqf in there.)

For the kernel I did a checkout of 3.16 and then:

make ARCH=m68k mac_defconfig
sed -i 's/\(.*\)=m/# \1 is not set/' .config
make ARCH=m68k oldconfig
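The sed invocation above does one mechanical thing: it switches every
module (=m) option off, so the following oldconfig produces a minimal
all-static config. A quick demonstration on a couple of invented sample
lines (CONFIG_A/CONFIG_B aren't real kernel symbols):

```shell
# "CONFIG_FOO=m" becomes "# CONFIG_FOO is not set"; =y lines pass through.
printf 'CONFIG_A=y\nCONFIG_B=m\n' | sed 's/\(.*\)=m/# \1 is not set/'
```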

Then fire up menuconfig and switch on:

CONFIG_SQUASHFS=y
CONFIG_SQUASHFS_FILE_DIRECT=y
CONFIG_SQUASHFS_DECOMP_SINGLE=y
CONFIG_SQUASHFS_ZLIB=y

Then built it using aboriginal's m68k cross compiler (also at the above
binary tarballs URL). The qemu command line is:

m68k-softmmu/qemu-system-m68k -m 256 -M q800 \
  -kernel ~/linux/linux/vmlinux \
  -drive file=~/aboriginal/aboriginal/build/system-image-m68k/hda.sqf,if=scsi \
  -append "console=ttyS0 root=/dev/sda init=/sbin/init.sh" -nographic

Which is sad because all the other targets use -hda, -hdb, and -hdc to
do automated builds. (-hda is the squashfs root filesystem, -hdb is 2
gigs of writeable ext2 scratch space mounted on /home, and -hdc is a
build control image mounted on /mnt. The init script checks for
/mnt/init and runs that instead of a shell if it's there, so I can do
"./native-build.sh ../lfs-bootstrap.hdc" using
http://landley.net/aboriginal/control-images and build Linux From
Scratch automatically under the emulator.)
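For contrast, here's a hedged sketch of what the IDE-style targets'
invocation looks like; the file names and flag values are illustrative
placeholders, not the exact command lines Aboriginal's wrapper scripts
generate:

```shell
# Hypothetical armv5l-style boot: three plain IDE disks, no SCSI involved.
#   -hda: squashfs root filesystem (read-only)
#   -hdb: writeable ext2 scratch space, mounted on /home
#   -hdc: build control image, mounted on /mnt
qemu-system-arm -M versatilepb -nographic -no-reboot -m 256 \
  -kernel zImage-armv5l \
  -hda hda.sqf -hdb hdb.img -hdc lfs-bootstrap.hdc \
  -append "root=/dev/sda rw init=/sbin/init.sh console=ttyAMA0"
```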

I also haven't tried using the emulated network card yet, but I'd need
that to hook up distcc and speed up the native builds by calling out to
the cross compiler.

I did a presentation about all this years ago:
https://speakerdeck.com/landley/developing-for-non-x86-targets-using-qemu

>> In fact, I compiled my thread-hello2.c with the native toolchain and the
>> result ran, so basic thread support and dynamic linking and everything
>> are working.
>  
> There is a problem in the MMU emulation that seems not to be triggered by
> your libc.

It's uClibc 0.9.33.2 (uClibc's last-ever release):

http://landley.net/hg/aboriginal/file/828d2e318e26/download.sh#l24

I patch it a bit locally:

http://landley.net/hg/aboriginal/file/828d2e318e26/sources/patches

The "fixm68k" one is probably relevant, but that just disables -Os to
avoid a compiler bug in gcc 4.2.1 and binutils 2.17 (the last GPLv2
releases; I don't distribute GPLv3 binaries unless paid to do so by an
employer).

> By default, in debian, the elf loader, ld.so, loads the binary but
> links symbols to libraries on demand. It works the first time (parent),
> but doesn't work if it is needed in the children.

I don't think uClibc has lazy binding. (And I don't think musl-libc
plans to implement it either? You'd have to ask dalias.)

> I think there is an
> issue with the management of minor page faults (but I didn't have the
> time to really look deeper in this). There is a workaround with glibc by
> setting LD_BIND_NOW in the environment to avoid this case.
>
>> I note that -hda is not providing the hard drive image (but the longer
>  
> In fact "-hda" provides a disk on an IDE/SATA controller. Q800 has a
> SCSI controller, which is why we can't use hda, but there is a parameter
> allowing us to provide the controller type (scsi).

Aboriginal Linux's arm, sparc, and sh4 targets all use /dev/sda as their
root= and all of those are connected to the -hda argument. Arm is using
-M versatilepb, sh4 is -M r2d, and sparc is using whatever -M defaults
to (but setting -cpu because gcc, linux, and qemu's default version
assumptions don't quite match up there).

In hw/arm/versatilepb.c it sets ".block-default-type = IF_SCSI", maybe
that has something to do with it?

It's useful for me to have "-hdc /path/to/build-control-image.sqf" so
the native-build.sh wrapper can be target independent and not have to
worry about whether this target is using IDE or SCSI or something else.
(The actual mount scripts use /dev/?da wildcards.)
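That wildcard is what keeps the scripts controller-agnostic: a shell
pattern like ?da matches sda (SCSI), hda (IDE), and vda (virtio) alike.
A standalone demonstration:

```shell
# ?da = any single character followed by "da", so sda/hda/vda all
# match, while a longer name like xvda doesn't.
for dev in sda hda vda xvda; do
  case "$dev" in
    ?da) echo "$dev: matches" ;;
    *)   echo "$dev: no match" ;;
  esac
done
```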

>> command line from your example code is). Also, the exit/reboot stuff
>> doesn't seem to be hooked up to anything, I have to kill it from another
>> window after the reboot attempt:
>>
>> (:1) /home # exit
>> sd 0:0:0:0: [sda] Synchronizing SCSI cache
>> reboot: Restarting system
>> Unable to handle kernel access at virtual address 4080000a
>> Oops: 00000000
>> Modules linked in:
>> PC: [<4080000a>] 0x4080000a
>> SR: 2700 SP: 0fd55dd0 a2: 0fd4edc0
>> d0: 40800000 d1: 4080000a d2: 00002000 d3: 01234567
>> d4: 8005973c d5: 800689c4 a0: 4080000a a1: 00357d26
>>
>> And so on down the panic...
>  
> Another bug in the MMU...
>
>> Still, excellent work! Thanks!
>  
> Thank you for your help.
>
>> How do I go about helping get it upstream?
>  
> I think it is not ready to be upstream. They don't like things that
> don't work perfectly and some of my patches break the m68k coldfire
> emulation already present in qemu.

A) Release early, release often.

B) I can try to set up a coldfire system and regression test that. I'm
doing nommu stuff for work these days anyway. (Well, coming up to speed
on it.)

> But I think you can pass the word that it is working with your disk
> image and share your experience. It should be really great.
>
>>> I've added some help here :
>>>
>>> https://gitorious.org/qemu-m68k/qemu-m68k
>>
>> Apparently the stuff about fork not working is obsolete?
>
> No, I think you are lucky.
>  
> Again, really thank you for the time you have used to test and play with
> this stuff.

If I can get this working, the musl-libc.org guys can use aboriginal as
a test platform to get m68k support working. (Dunno if they _will_, but
the fact I got sh4 to work under qemu is why they did a port...)

> Regards,
> Laurent

Rob

And so on, and so forth...

Rob
