[Aboriginal] Development tools in /usr/tools

Rob Landley rob at landley.net
Sat May 21 00:49:50 PDT 2011


On 05/20/2011 06:35 PM, David Seikel wrote:
> On Fri, 20 May 2011 06:22:08 -0500 Rob Landley <rob at landley.net> wrote:
> 
>> On 05/19/2011 08:23 AM, David Seikel wrote:
>>> I'm thinking it would be great for my project, and likely good for
>>> others, if there was something like /usr/tools that was a mount
>>> point for yet another disk image, like the one used for building
>>> control images.  This disk image would have all the stuff that's purely
>>> needed for development on it.  With suitable links if needed.  Then
>>> it can be unmounted to get just the image with the resulting
>>> embedded system on it.  At least that's the theory.  More or less.
>>
>> The native-compiler tarball sort of works like that now.  Note that
>> ccwrap doesn't care where it's installed, as long as all the other
>> directories are relative to it.  So you should be able to extract it
>> on any system in an arbitrary directory and run it from there.
>>
>> The down side is if you don't install the shared libraries on the host
>> you won't be able to run the resulting binaries (unless they're
>> statically linked), because the shared library loader location is
>> hardwired into each binary as an absolute path, that's a limitation of
>> the ELF spec.  You can specify a different location for it to hardwire
>> in (export CCWRAP_DYNAMIC_LINKER=/blah/ld-uClibc.so.0), but that's not
>> much of an improvement.
> 
> I was not talking about the shared libraries.  If using such shared
> libraries in the resulting system, then they must stay.

I believe if you delete the *.so files out of the toolchain, it'll
statically link everything by default.
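
A quick untested sketch of that, assuming the toolchain layout from the
example further down (the lib directory at the top of the extracted
tarball):

  # drop the shared objects, keep the static .a archives
  rm /home/native-compiler-$TARGET/lib/*.so*

  # a test build should then come out static; check it with file(1)
  echo 'int main(void){return 0;}' > hello.c
  gcc hello.c -o hello
  file hello    # should report "statically linked"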

>>> Are there plans to do such a thing?  I can likely do that myself, it
>>> fits within the scope of my contract.
>>
>> You need to define the problem a little more tightly.  What exactly
>> are you trying to accomplish, and remember the shared libraries you're
>> linking against need to be deployed on target for you to run the
>> result.
> 
> I want to easily separate the tools used to build the result from the
> result.  Shared libraries stay if shared libraries are part of the
> result.  I have yet to decide if shared libraries or static linking
> will be used in general in my project.  At this point, I like my
> options on that matter to be open.

It should work now then?

Hmmm...  Technically I could teach ccwrap to check for the presence of
the shared library loader at the /usr/bin location, and if it doesn't
find it to either hardwire in the absolute path to wherever the one in
the toolchain lives or to force static linking.

The problems with this are:

A) Which of those options is the correct behavior?

B) This is native compiling behavior, not cross compiling, and right now
the wrapper doesn't distinguish between them.

Which is why I haven't done it so far...
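
For reference, the check would boil down to something like this
(hypothetical, shown as shell rather than the C in ccwrap.c; the loader
location and $TOOLCHAIN_DIR are made up for illustration):

  LOADER="${CCWRAP_DYNAMIC_LINKER:-/usr/bin/ld-uClibc.so.0}"
  if [ ! -e "$LOADER" ]; then
    # option A: hardwire the copy that ships inside the toolchain
    EXTRA="-Wl,--dynamic-linker,$TOOLCHAIN_DIR/lib/ld-uClibc.so.0"
    # option B: give up on dynamic linking entirely
    # EXTRA="-static"
  fi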

> Things like include files, and the various development executables that
> end up in /usr/bin are not needed for the resulting system, only needed
> to build them.  Linker libraries should be included in this part.

The native toolchain tarball already handles that part.  That's why I
packaged that up separately, and why it's built around ccwrap so it's
just as relocatable as the cross compiler is.  (Except for the fiddly
"running the result" issue, which stubbornly cares about where the
shared library loader lives.  I think it'll automatically find the
shared libraries if they're in the same directory as the loader, but I'd
have to look up what other directories it checks and what order it falls
back in.  Building uClibc binaries on a glibc/newlib/bionic/klibc system
is cross compiling too, even if the result runs locally.  When in doubt:
statically link.)
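
To check which loader path actually got hardwired into a given binary,
readelf shows it (the path in the example output will vary by toolchain):

  readelf -l ./hello | grep interpreter
  # e.g. [Requesting program interpreter: /lib/ld-uClibc.so.0]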

> To help, I'll mention the exact use case I have for my current
> contract.  One of the government requirements for this device is that
> ONLY the data needed to perform the legally sanctioned functions of the
> device must ship on the resulting device.  "Data" in this case refers
> to every single bit in all persistent storage media in the device.

Ah, you probably want to do this then:

Build a simple-root-filesystem version of your target system, but also
build the native compiler tarball:

  NO_NATIVE_COMPILER=1 ./build.sh $TARGET
  ./native-compiler.sh $TARGET

Then fire up the target, and under the emulator:

  Copy your system to writeable space (setup-chroot /home/blah /bin/ash)

  Install the native compiler (extract the native-compiler tarball into
  /home, and add its bin directory to your $PATH.)

  Swap out the host's libraries for the toolchain libraries:

    rm -rf /usr/lib
    mv /home/native-compiler-$TARGET/lib /usr/lib
    ln -s /usr/lib /home/native-compiler-$TARGET/lib

  Make /bin/sh point to bash:

    ln -sf bash /bin/sh

That gives you the development environment but with the toolchain living
under /home, so you can conveniently delete it later.
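
Strung together, the above steps look something like this inside the
emulated system (the tarball name and compression suffix are guesses,
adjust to whatever native-compiler.sh actually produced):

  cd /home
  tar xvjf native-compiler-$TARGET.tar.bz2
  export PATH=/home/native-compiler-$TARGET/bin:$PATH
  rm -rf /usr/lib
  mv /home/native-compiler-$TARGET/lib /usr/lib
  ln -s /usr/lib /home/native-compiler-$TARGET/lib
  ln -sf bash /bin/sh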

Does that sound about right?  (Read root-filesystem.sh to see how it
splices the two together; that's basically what it does.)

> The audit lab needs all source code, and the ability to recreate the
> "data" from that source code, as well as a sample of the shipping
> device.  So, I want to hand the audit labs the device as it will ship,
> and a hard drive. The hard drive will include the source code, and the
> development environment we use to build it. This development
> environment is what I'm building now using Aboriginal Linux as the base.
> 
> The audit lab can plug the hard drive into the sample device, run my
> build script, and come back hours later to see that it has replicated
> itself, and created a disk image that matches the one in the device.
> This plus copious documentation should make the lab very happy.  A very
> happy audit lab will charge less, making my client very happy.
> 
> Sooo, I need to be able to separate the build environment from the
> finished system, then mount it back in later.  To help shake out bugs
> from this process, that's EXACTLY what I will use for the development
> itself.  Being able to strip the finished system down to the bare
> minimum is a goal as stated above.

I think the above may do what you want then?  (Lemme know; if not, I'll
come up with something.)
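
If you still want the "mount the build environment back in later" part
as a literal mount, under the emulator that's just a second drive image;
a minimal sketch (device name and mount point are made up):

  # attach the development tools image for a build
  mount -t ext2 /dev/hdb /mnt/tools
  export PATH=/mnt/tools/native-compiler-$TARGET/bin:$PATH
  # ... run the build ...
  umount /mnt/tools    # detach it so the root image ships clean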

> Yes, I'm well aware that I have no source for the BIOS

http://www.coreboot.org/Welcome_to_coreboot
http://www.coreboot.org/Supported_Motherboards
http://blogs.coreboot.org/blog/2011/05/06/amd-commits-to-coreboot/

The project formerly known as "LinuxBIOS" grew up. :)

Rob
