[Toybox] Toybox Installer/setup routine?

Rob Landley rob at landley.net
Wed Oct 30 15:57:34 PDT 2019


I _still_ have open windows left over from the pile that accumulated on my trip
to Canada, and am answering them from Japan.

On 9/13/19 5:01 PM, enh wrote:
> On Fri, Sep 13, 2019 at 1:48 PM Rob Landley <rob at landley.net> wrote:
>>
>> On 9/11/19 4:42 PM, enh wrote:
>>> On Wed, Sep 11, 2019 at 2:28 AM Rob Landley <rob at landley.net> wrote:
>>>> That said, once such a thing exists and isn't GPLv3, using it isn't _that_ bad.
>>>> (I'm all for replacing make, but "with what" still isn't answered at the design
>>>> level. If it was gonna be ninja, why would ninja have been replaced with
>>>> whatever's generating ninja now? And somebody other than Google would be using
>>>> it. *shrug* Maybe that'll resolve eventually, but it hasn't yet.)
>>>
>>> ninja's a lot more widely used than you seem to think, but the mistake
>>> is that you're considering it to be a replacement for make. it's not:
>>> it's a replacement for make's back-end. so cmake, for example, can use
>>> ninja now, and every cmake-using project may well be using ninja. the
>>> point of ninja is that folks can keep dicking about with what the
>>> "build system" looks like without having to keep reimplementing the
>>> back-end stuff.
>>
>> Why does make need a back-end? It's repeatedly calling the compiler and linker.
> 
> not only is it not directly a replacement for make, it's also not a
> back-end for specifically _make_ (though Android's kati was a hack
> that let it be that in the early stages of our move away from make),
> it's a back end for build systems. your home-grown "configure" could
> output a ninja file, for example. (but since you're unlikely to ever
> grow to a size that would really win anything back, .)

I'm actually trying to figure out how to get _rid_ of my home-grown configure.
Literally all it does (by design) is set default values for environment
variables, and the problem is the variable "GENERATED" pointing to the generated
directory (so that can be relocatable) needs to be accessible from both shell
and makefile context. And Single Point of Truth: I don't want it defined in
_two_ places if I can at all avoid it...

And I haven't got a context that does both, and "run it through sed" has a bit
of a chicken-and-egg problem, because you can't source a generated file unless
you write it somewhere, and "generated" is kind of where you'd...

Ok, I've come up with a disgusting solution. As I do. Have configure contain the
?= syntax we can source from Makefile and write a shell function to parse that
and do the exports. If so I should rename it from "configure" to "config" or
something. People are used to running "configure" (which was never right for
this)...
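Something like this, maybe (a rough sketch, not actual toybox code: parse_config
and the demo file name are made up for illustration):

```shell
#!/bin/sh
# Sketch of the idea above: a file full of "VAR ?= value" lines that make
# could include directly, plus a shell function that parses the same lines
# and exports each variable only if the environment doesn't already set it
# (i.e. make's "?=" semantics).

parse_config()
{
  while read line
  do
    # Skip blanks and comments
    case "$line" in ''|'#'*) continue ;; esac

    var="${line%%\?=*}"
    [ "$var" = "$line" ] && continue      # no "?=" on this line
    var="$(echo "$var" | tr -d ' ')"
    val="${line#*\?=}"
    val="${val# }"

    # Assign only when not already set, matching make's ?= behavior
    if eval "[ -z \"\${$var}\" ]"
    then
      eval "export $var=\"$val\""
    fi
  done < "$1"
}

# Demo: CC is pre-set in the environment, so only PREFIX gets the default
printf 'CC ?= gcc\nPREFIX ?= /usr/local\n' > /tmp/demo.cfg
CC=clang
parse_config /tmp/demo.cfg
echo "CC=$CC PREFIX=$PREFIX"    # CC=clang PREFIX=/usr/local
```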

No, darn it, the problem is that variables build on other variables. The first
assignment is CROSS_COMPILE?="$CROSS" and that would need the parentheses for
makefile syntax. Sigh, I can sed those away I suppose...
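The sed part could look something like this (a hypothetical one-liner;
CROSS_COMPILE and CROSS are just the variables from the example above):

```shell
#!/bin/sh
# Convert make-style $(VAR) references into shell-style ${VAR} so the same
# "VAR ?= value" line can be evaluated in both contexts.

line='CROSS_COMPILE ?= $(CROSS)'
echo "$line" | sed 's/\$(\([A-Za-z_][A-Za-z0-9_]*\))/${\1}/g'
# prints: CROSS_COMPILE ?= ${CROSS}
```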

And then there's the ASAN block, which isn't an if (!set) test. Hmmm... Then
again this doesn't necessarily belong _here_, it belongs in scripts/make.sh, so
I can just move it.

Sigh, this one's tricky too:

# We accept LDFLAGS, but by default don't have anything in it
if [ "$(uname)" != "Darwin" ]
then
  [ -z "$LDOPTIMIZE" ] && LDOPTIMIZE="-Wl,--gc-sections"
  LDASNEEDED="-Wl,--as-needed"
fi

It's a pity the actual open source Darwin isn't a thing. (Many moons ago I
bookmarked https://lists.gnu.org/archive/html/qemu-devel/2008-02/msg00444.html
in case I ever wanted to set up a macos build environment under qemu, but when I
_had_ a mac I never got xcode working, and trying to use Linux under its
virtualbox was <a href="http://www.landley.net/notes-2015.html#25-06-2015">not a
pleasant experience</a>...)

Ahem. You can tell I've been editing a lot of back blog entries when I do that
without thinking, can't you? (See also "jetlag".)

> when you have millions of files (on the one hand) and tens of cores
> (on the other), making good use of that gets interesting. when you can
> also have a completely different execution model that builds remotely
> (on thousands of cores) and minimizes the amount of stuff pulled back
> to your local machine (because you don't actually need any of those
> .o files, say), well, maybe it's not a great software engineering idea
> for everyone to have to rewrite that from scratch too, just because
> they're wedded to make or cmake or xcbuild or msbuild or gradle or
> meson or whatever.
> 
> https://ninja-build.org/manual.html talks about this more.
> 
> but as long as your build only takes a few seconds on a single core,
> you're probably not going to get anything from ninja. (you don't
> really need to invent sewers until you're living in cities either.)

I remember why I had this window open, it's a "research this" todo item. Ok, cut
and paste that to the relevant todo.txt file...

Many moons ago (2008? Yeah, https://landley.net/notes-2008.html#18-12-2008
around then) Mark and I designed a cluster build system that would take a distro
repository with dependencies between packages and install it into a shared
network filesystem. You'd have a master node in the cluster with write access to
the shared filesystem, and it would tell the other nodes "here's the next
package to build" and they'd do so relative to their read-only copy of that
filesystem (which had all the prerequisites for that package already installed).
It would build the binary (.rpm or .deb or...) for that package and send it back
to the master node, which would install it into the network filesystem and send
out the next thing to build.
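The master node's loop was roughly this (a toy single-process sketch;
build_on_node and install_into_tree are made-up stand-ins for the real
remote-build and shared-filesystem install steps):

```shell
#!/bin/sh
# Toy sketch of the master loop: walk a dependency-ordered package list,
# "build" each package on a node, then install the resulting binary package
# into the shared tree so later builds see their prerequisites.

QUEUE="zlib openssl curl"     # already sorted so dependencies come first
TREE=""                       # stand-in for the shared network filesystem

build_on_node()
{
  # A real version would ship $1 to an idle node with a read-only view of
  # the shared tree and wait for the binary package to come back.
  echo "$1.pkg"
}

install_into_tree()
{
  TREE="$TREE $1"
}

for pkg in $QUEUE
do
  binary="$(build_on_node "$pkg")"
  install_into_tree "$binary"
done
echo "tree:$TREE"             # tree: zlib.pkg openssl.pkg curl.pkg
```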

The goal was to populate a repository from scratch using the existing distro
metadata. That way you could for example queue up debian and do an m68k build,
sh4 build, mipsel build, etc, and wind up with all the repo trees of all the
packages built from scratch within a human lifetime.

This was the goal of the "create a base root filesystem for an arbitrary distro"
stuff (https://landley.net/aboriginal/about.html#hairball) and the "build in
qemu with distcc calling out to the cross compiler" stuff. It's also one of the
reasons I did an automated Linux From Scratch build
(https://landley.net/notes-2010.html#01-12-2010) because bootstrapping a new
distro under Linux From Scratch generally _didn't_ have the "this piece of
infrastructure is missing/broken" problem that the minimal native development
environment did. (The theory was get it reproducibly working first, _then_ pare
it down.)

Alas the first "base OS layer" we tried to create was for gentoo, most of the
documentation for which is lost to history because Mark designed a Wordpress
site that archive.org couldn't archive properly (because gratuitous database),
and when the server went down the data went away. (I tried to salvage what I
could: https://landley.net/notes-2009.html#17-11-2009) but at least the git
repo and README I did for it are still there:

https://github.com/landley/control-images/blob/master/images/gentoo-bootstrap/README

Anyway, gentoo turned out to be a HORRIBLE decision because gentoo's portage
tree has a fundamental design flaw: every single package in the entire portage
tree lists every single hardware platform it supports as one of the metadata
fields in the ebuild file, meaning attempting to do a distro-agnostic build
(just build for whatever host you're currently on) is IMPOSSIBLE in gentoo
without a complete redesign, and adding a _new_ architecture that they don't
currently support requires touching EVERY SINGLE PORTAGE FILE IN THE TREE.

And THAT's why I wound up working with Daniel Robbins to try to fix it:

https://landley.net/notes-2011.html#26-12-2011
https://landley.net/notes-2011.html#30-12-2011

Which unfortunately didn't pan out and these days I keep meaning to tackle
debian instead... But in the meantime I switched horses from busybox to toybox,
from uClibc to musl-libc, and from aboriginal linux to mkroot+musl-cross-make.
And I vaguely want to try to make the Android NDK work, except the _only_ host
it supports is x86-64, so you can't even use it on an ARM system (which seems
short-sighted, and means "download the NDK onto an android system" ain't a
useful step in making a self-hosting system)...

Let alone "can I get Android to run on j64 someday"... :)

Given the above context, whenever I bump into something like Ninja my response
is the same as Miss Sweetie Poo at the Ig-Nobels. This is not generic. This does
not separate the problem into layers. There is no way in which this _reduces_
complexity. As our ancestors used to say, "I cannot even."

>> At JCI last year I hung out with the guy rewriting their build system in cmake.
>> He could talk for hours about weird bugs and version skew in cmake, and he was
>> trying to figure out how to _not_ be the single point of failure for the entire
>> department when it came to build systems, because the cmake files everybody else
>> were trying to write kept doing things subtly wrong and introducing all sorts of
>> bugs he got called in to fix. (Many of which I heard about because I'm strangely
>> good at debugging things I know nothing about, so he'd come over to bounce the
>> latest bug off me whenever he got blocked so I could ask stupid questions or
>> suggest debugging approaches.)
> 
> as far as i can tell, if you're only targeting one OS, cmake just
> causes you problems. folks who're targeting multiple OSes, especially
> if one of those is Windows (and you're not just cross-compiling like
> we do), seem to like it though.

Windows has "Windows Subsystem for Linux", cygwin, mingw, and midipix.

I remember when Apache decided that its "scoreboard" shared memory approach was
impossible to implement well on Windows, so they rewrote the entire thing to be
threaded instead, and that was the "Apache 1.x -> 2.0" flag day change that
users everywhere EXCEPT windows resisted to the death because it made everything
worse. And when the Apache 1.x line was finally killed off (about the same way
Python 2.x->3.x took forever because only the language developers actually
wanted it and the users didn't), the result was basically that apache stopped
being relevant and got replaced by other webservers.

Terrible design decisions made because of Windows tend to have a negative
impact on the longevity of a project. Most people don't notice because "a thing
that has existed for 10 years" is _venerable_ on Windows (such as Chrome from
2008 and...
I googled "when was firefox launched" and google gave me the answer for
_mozilla_, which was a different codebase that started with the netscape engine,
replaced it with gecko, replaced _that_ with galleon, and replaced _that_ with
firefox. Sigh...) Whereas "venerable" on unix is Linux (Linus was there in 1991,
still there today) or the X11 project (Jim Gettys was there in 1984, still there
today).

(Ok, Windows has a lot of stuff bundled with the OS that's Unix levels of
"venerable", like Word and Outlook. But nobody stayed with Outlook because it's
actually good at email or with Word because they like the Ribbon, they stayed
because of monopoly leverage using exclusive distribution contracts tied to
hardware sales to muscle out competitors in the enterprise software space.)

>>> i think the corner has been turned here actually. (obviously we've
>>> been enjoying clang-built kernels for a few years now, but i'm talking
>>> about upstream too.)
>>
>> Not surprised, gcc is end of life. I need to try to get
>> https://github.com/thegameg/j2-llvm merged upstream...

I'm told the guy who was working on that got hired by apple, which is why he
stopped. Well of course...

Rob


