<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, May 12, 2022 at 10:55 AM Rob Landley <<a href="mailto:rob@landley.net">rob@landley.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 5/11/22 15:00, enh wrote:<br>
> On Wed, May 11, 2022 at 7:16 AM Rob Landley <<a href="mailto:rob@landley.net" target="_blank">rob@landley.net</a>> wrote:<br>
>><br>
>> On 5/10/22 12:04, enh wrote:<br>
>> > right now i think the "can't bootstrap without an existing toybox<br>
>> > binary" is the worst mac problem. (i think there's already a thread<br>
>> > about how your sed skills are too much for BSD sed...)<br>
>><br>
>> It has a SED= environment variable so you can point it at gsed on mac, but<br>
>> GETTING gsed on the mac is outside my expertise...<br>
> <br>
> yeah, that's the "homebrew" i was talking about. (for all i know, it<br>
> might actually be easier to just download and build gnu sed alone, but<br>
> "if you're planning on using a mac for development, you'll want<br>
> homebrew sooner or later" has meant i've always ended up giving in and<br>
> installing the whole thing.)<br>
<br>
You know, if we get enough of toybox running on mac and AOSP already has<br>
toolchain binaries...<br>
<br>
Meh, I'm not volunteering my time to make Tim Cook richer. The FSF guys can<br>
"properly" support the mac the same way they did cygwin.<br>
<br>
<a href="https://www.youtube.com/watch?v=g3j9muCo4o0" rel="noreferrer" target="_blank">https://www.youtube.com/watch?v=g3j9muCo4o0</a><br>
<br>
>> > (this morning i had them ask "does toybox tar support $TAR_OPTIONS?"<br>
>><br>
>> $ man tar | grep TAR_OPTIONS<br>
>> $<br>
>><br>
>> I don't know what that is?<br>
> <br>
> i was about to celebrate (because i'd already said to them that i<br>
> personally _hate_ `GREP_OPTIONS` _because_ it messes with hermetic<br>
> builds unless you know about it and explicitly clobber it,<br>
<br>
I squashed those with env -i :<br>
<br>
<a href="https://github.com/landley/toybox/blob/master/scripts/mkroot.sh#L5" rel="noreferrer" target="_blank">https://github.com/landley/toybox/blob/master/scripts/mkroot.sh#L5</a><br>
<br>
That said, it means mkroot is not supporting distcc and ccache, despite<br>
<a href="https://github.com/landley/toybox/blob/master/scripts/install.sh#L120" rel="noreferrer" target="_blank">https://github.com/landley/toybox/blob/master/scripts/install.sh#L120</a> supporting<br>
them...<br>
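<br>
(The scrubbing trick is a guarded re-exec whitelisting the few variables worth<br>
keeping, something like this from memory rather than the exact line:<br>
<br>
[ -z "$NOCLEAR" ] && exec env -i NOCLEAR=1 HOME="$HOME" PATH="$PATH" "$0" "$@"<br>
<br>
so passing the ccache/distcc variables through would just mean whitelisting more<br>
names there.)<br>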
<br>
> and the<br>
> idea of having random other commands grow similar warts doesn't<br>
> exactly fill me with joy) ... but then i noticed you only said "man",<br>
> and this is a gnu thing, so _of course_ the man page won't mention it.<br>
> how else could they make you use their stupid "info" crap?<br>
<br>
It wasn't in tar --help either. :P<br>
<br>
> anyway, checking whether this is a real thing the One True Way:<br>
> <br>
> $ strings `which tar` | grep OPTION<br>
> TAR_OPTIONS<br>
> cannot split TAR_OPTIONS: %s<br>
> [OPTION...]<br>
> <br>
> it's also described on the web:<br>
> <a href="https://www.gnu.org/software/tar/manual/html_section/using-tar-options.html" rel="noreferrer" target="_blank">https://www.gnu.org/software/tar/manual/html_section/using-tar-options.html</a><br>
> <br>
> (but i still think it's a bad idea, personally.)<br>
<br>
alias tar='tar $TAR_OPTIONS'<br>
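<br>
i.e. presumably their wrapper boils down to something like this (untested, and<br>
the exact flag spellings here are from memory):<br>
<br>
$ export TAR_OPTIONS='--owner=root:0 --group=root:0 --mtime=@1000000000'<br>
$ alias tar='tar $TAR_OPTIONS'<br>
$ tar czf blah.tgz dir<br>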
<br>
>> > wrt to <a href="https://android-review.googlesource.com/c/kernel/build/+/2090303" rel="noreferrer" target="_blank">https://android-review.googlesource.com/c/kernel/build/+/2090303</a><br>
>> > where they'd like to be able to factor out the various "reproducible<br>
>> > tarball please" options [like in the toybox tar tests].)<br>
>><br>
>> It supports --owner and --group and I made it so you can specify the numeric IDs<br>
>> for both with the :123 syntax so you can specify a user that isn't in<br>
>> /etc/passwd. (Commit 690526a84ffc.)<br>
> <br>
> yeah, that's what they want to not have to keep repeating.<br>
<br>
Is the alias solution sufficient? (In theory that lets you add this support to<br>
any command without the command having to know...)<br></blockquote><div><br></div><div>(yeah, sounds like they're happy with the alias.)</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Checking the corner cases:<br>
<br>
$ alias freep='echo $POTATO'<br>
$ freep walrus<br>
walrus<br>
$ POTATO=42 freep walrus<br>
walrus<br>
$ POTATO=42<br>
$ freep walrus<br>
42 walrus<br>
<br>
It's not QUITE a full replacement because the prefixed environment variables are<br>
set after command line options are evaluated. (Well, technically what's<br>
happening is they're only exported into the new process's space and the command<br>
line is evaluated against the parent's environment variable space.)<br>
<br>
And yes, I need to get this right in toysh, where "right" matches bash...<br>
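<br>
(A function wrapper would dodge that corner case, since prefix assignments ARE<br>
visible inside the function body; untested sketch:<br>
<br>
$ tar() { command tar $TAR_OPTIONS "$@"; }<br>
$ TAR_OPTIONS=--numeric-owner tar cf blah.tar dir<br>
<br>
but aliases are what people actually set up.)<br>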
<br>
In theory I could add a global "$COMMAND_OPTIONS" that automatically picks them<br>
up for each command name, which would get grep and tar and ls and rm and<br>
everything. In practice, that sounds horrific and is GOING to have security<br>
implications somehow...<br></blockquote><div><br></div><div>exactly. on the one hand "if you're going to do any $<FOO>_OPTIONS you really should do all of them" but on the other "omg, i don't want to have to deal with all the fallout".</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
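(Though you can already fake the general version from the shell side with no<br>
toybox support at all; untested bash 4 sketch, ${i^^} uppercases:<br>
<br>
$ for i in tar grep ls rm; do alias $i="$i \$${i^^}_OPTIONS"; done<br>
<br>
which is exactly the kind of thing I'd rather not encourage.)<br>
<br>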
>> Meanwhile I was hitting<br>
>> <a href="https://lkml.iu.edu/hypermail/linux/kernel/1002.2/02231.html" rel="noreferrer" target="_blank">https://lkml.iu.edu/hypermail/linux/kernel/1002.2/02231.html</a> regularly. Right<br>
>> now I'm trying to add a coldfire toolchain to mkroot and it's all<br>
>> <a href="https://www.denx.de/wiki/U-Boot/ColdFireNotes" rel="noreferrer" target="_blank">https://www.denx.de/wiki/U-Boot/ColdFireNotes</a><br>
>><br>
>> > Since gcc team seems to keep m68k issues in a very low priority, these<br>
>> > toolchains have the libgcc.a, libgcov.a and multilibs copied from an old<br>
>> > toolchain.<br>
>><br>
>> Thank you Wolfgang. Thanks EVER SO MUCH. Embedded guys just stop engaging with<br>
>> "upstream" and keep using 10 year old kernels and toolchains because they got it<br>
>> to work once and don't care what the crazy people are off doing. I'm nuts for<br>
>> trying to get current stuff to work on the full range of theoretically supported<br>
>> thingies, including NATIVE COMPILING on them.<br>
>><br>
>> Sigh.<br>
> <br>
> could be worse ... could be a _proprietary_ toolchain from a decade<br>
> ago. not that _that_ ever happens...<br>
<br>
Don't get me started on ARM jtag software. Either add support for your board and<br>
dongle to Open Obsessive Compulsive Disorder or admit you haven't got jtag<br>
support. (But no, that's not how they see it...)<br>
<br>
(And yes, however it goes one of the hardware guys sets it up for me and leaves<br>
me with dangly ribbon cables over my desk and a software package I didn't<br>
install/configure except maybe via rote wiki instructions, but when you're using<br>
stupidly expensive proprietary jtags there are always fewer licenses than<br>
engineers who need them, and I never get one and wind up standing at<br>
another engineer's desk debugging the problem over their shoulder. Of COURSE<br>
when you have 15 boards and 3 jtags nobody learns to use a jtag and nobody<br>
thinks to apply a jtag to the problem at hand, bit of a chicken and egg<br>
situation there isn't it?)<br>
<br>
>> >> (See, with aboriginal linux I was making my automated Linux From Scratch build<br>
>> >> work for whatever host architecture you ran it on, x86, arm, mips, powerpc, sh4,<br>
>> >> sparc, and so on. 95% of what autoconf does boils down to 1) I was unaware of<br>
>> >> all the symbols "cc -E -dM - < /dev/null" shows you, 2) #if<br>
>> >> __has_include(<file>) hadn't been invented yet. But unfortunately, if you<br>
>> >> snapshot the output it tries to use the arm answers on sparc, and you have to<br>
>> >> preprepare versions for each target architecture in which case you might as well<br>
>> >> just ship binaries? So I put in the work to make it actually perform its stupid<br>
>> >> dance and get the right answers, so that when I added m68k or s390x it would<br>
>> >> mostly Just Work. Not having autoconf at all is, of course, the much better<br>
>> >> option...)<br>
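<br>
(For anyone playing along at home, on a typical x86-64 box:<br>
<br>
$ cc -E -dM - < /dev/null | grep BYTE_ORDER<br>
#define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__<br>
<br>
$ printf '#if !__has_include(<zlib.h>)\n#error no\n#endif' | cc -xc -E - >/dev/null && echo have zlib<br>
have zlib<br>
<br>
which between them cover most of what a configure run rediscovers the slow way.)<br>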
>> ><br>
>> > aka "the only winning move is not to play" :-)<br>
>> ><br>
>> > +1 to that!<br>
>><br>
>> I had a rant years ago about how configure/make/install needed to be replaced<br>
>> the way git replaced CVS. Here's a 2-part version; I'm sure I did better<br>
>> writeups but can't find them...<br>
>><br>
>> <a href="http://lists.landley.net/pipermail/aboriginal-landley.net/2011-June/000859.html" rel="noreferrer" target="_blank">http://lists.landley.net/pipermail/aboriginal-landley.net/2011-June/000859.html</a><br>
>> <a href="http://lists.landley.net/pipermail/aboriginal-landley.net/2011-June/000860.html" rel="noreferrer" target="_blank">http://lists.landley.net/pipermail/aboriginal-landley.net/2011-June/000860.html</a><br>
>><br>
>> Unfortunately, all I'd seen when I wrote that was a lot of svn and perforce, and<br>
>> not a real proper "everybody moves to the new thing and universally agrees it's<br>
>> better" complete rethink the way git finally rendered cvs properly irrelevant.<br>
>> And sadly, that's STILL the case. (Otherwise we wouldn't have this<br>
>> cmake/ninja/kaiju cycling every 5 years with the kernel still using gmake.)<br>
> <br>
> i think the trouble is that no-one's found the "big thing" here that<br>
> git was able to offer. i don't think we're in the git/bk/bzr/hg/...<br>
> phase, i think we're still in the cvs/svn phase.<br>
<br>
+1 to that!<br>
<br>
> version control also had the advantage that you could use the same one<br>
> for all languages; every individual language community seems to have a<br>
> strong preference for "their" build system, even if/though it's<br>
> useless for everyone else.<br>
> <br>
> i wouldn't hold my breath for this getting any better before we're all retired.<br>
<br>
A big advantage of scripting languages is you don't build. You run the source<br>
code, there's no "make" step.<br>
<br>
Sigh, back before Eric Raymond succumbed to Nobel Disease (he didn't even need<br>
to win the award, but neither did Bill Joy, Richard Stallman, Richard<br>
Dawkins...) we were working on a paper about the two local peaks in language<br>
design space, and how C was kind of the "static everything, implementation<br>
completely transparent" hill and scripting languages covered the "dynamic<br>
everything, implementation completely hidden" hill, and in between you had a<br>
no-man's-land of languages that tried to half-ass it and leaked implementation<br>
details up through thick layers of abstraction.<br>
<br>
C exposes all the implementation details and gives the programmer complete<br>
manual control of everything (including resource allocation), which is a tedious<br>
but viable way of working. Even stuff like alignment and endianness are ok as<br>
long as you avoid using libraries that make assumptions: the programmer gets to<br>
make all their own assumptions, and when it breaks you get to keep the pieces<br>
and weld them back together in a new shape.<br>
<br>
In something like Python, everything is reference counted and you can call a<br>
method on an object that isn't there, catch the exception, ADD THE METHOD<br>
(modifying the existing object), and then retry. Your container type is based on<br>
a dictionary, which might be a hash table under the covers, or might be a tree,<br>
or even a sorted resizeable array it's binary searching to look stuff up in...<br>
and it doesn't MATTER because it's completely opaque and just works. They could<br>
change HOW it works under the covers every third release and it's not your<br>
problem, the implementation details never leak into the programmer's awareness<br>
except as performance issues, and that you can just throw hardware at.<br>
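<br>
(A toy demonstration of that method dance, from memory, cheating slightly by<br>
patching the class rather than the instance (the instance version is just a<br>
types.MethodType away):<br>
<br>
$ python3 << 'EOF'<br>
class Obj: pass<br>
o = Obj()<br>
try:<br>
    o.method()<br>
except AttributeError:<br>
    Obj.method = lambda self: print("added after the fact")<br>
o.method()<br>
EOF<br>
added after the fact<br>
<br>
Try doing THAT in C.)<br>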
<br>
It was a long paper, we wrote at least 2/3 of it before our working relationship<br>
broke down circa 2008. I'm still kind of sad we didn't get to finish it...<br>
<br>
Anyway, the point is people working in python/ruby/php/javascript/lua/perl don't<br>
need a make replacement, except for any native code they're doing.<br>
<br>
> (i'll let the reader decide for themselves whether rpi pico<br>
> introducing embedded folks to cmake is a positive step or not :-) )<br>
> <br>
>> Rob<br>
> <br>
> P.S. since i had a few minutes before my next meeting, i gave in and<br>
> built gnu sed from source ... it took literally _minutes_ to run<br>
> configure on this M1 mac, and then a couple of _seconds_ to actually<br>
> build. so so wrong...<br>
<br>
I know!<br>
<br>
<a href="https://landley.net/notes-2009.html#14-10-2009" rel="noreferrer" target="_blank">https://landley.net/notes-2009.html#14-10-2009</a><br>
<br>
Back under Aboriginal Linux, I was running a build system under qemu that used<br>
distcc to call out to the cross compiler running on the host (through the<br>
virtual 10.0.2.2->host 127.0.0.1 thing), which moved the heavy lifting of<br>
compilation outside the emulator and let me do about a -j 3 build. (The QEMU<br>
system would preprocess the file, stream out the resulting expanded .c, read back<br>
in the .o file, and then link it all at the end. I was looking at using tinycc's<br>
preprocessor instead of gcc's because that might let me do more like -j 5<br>
builds. QEMU used a single host processor so you didn't usefully get SMP within<br>
the VM.)<br>
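<br>
(Guest-side the plumbing was conceptually just distcc's standard masquerade<br>
directory aimed through the virtual network at the host, along the lines of:<br>
<br>
$ export DISTCC_HOSTS=10.0.2.2<br>
$ export PATH=/usr/distcc:$PATH<br>
$ make -j 3<br>
<br>
with /usr/distcc full of cc/gcc symlinks to the distcc binary, though the real<br>
scripts were fiddlier than that.)<br>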
<br>
This meant the actual COMPILE part was reasonably snappy, but the configure<br>
stage could literally take 99% of the build time. So what I did was statically<br>
link the busybox instance that was providing most of the command line utilities,<br>
which sped up ./configure by 20%.<br>
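<br>
(Easy to see why: ./configure is basically nothing but thousands of tiny<br>
short-lived processes, and dynamic linking fixups get paid on every single one.<br>
Hypothetical binary names, but:<br>
<br>
$ time for i in $(seq 1 1000); do ./busybox-dynamic true; done<br>
$ time for i in $(seq 1 1000); do ./busybox-static true; done<br>
<br>
makes the gap obvious on any box.)<br>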
<br>
(Part of this was an artifact of how QEMU works: it translated a page at a time<br>
to native code, with a cache of translated code pages. Every time an executable<br>
page was modified, the cached translated copy got deleted and would be<br>
re-translated when it tried to execute it. Doing the dynamic linking fixups not<br>
only deleted the translated codepages, but it reduced the amount of sharing<br>
between instances because the shared pages got copy-on-write when they were<br>
modified. These days they collate more stuff into PLT/GOT tables but that just<br>
partially mitigates the damage...)<br>
<br>
But yes, autoconf is terrible, it doesn't parallelize like the rest of the build<br>
does, 90% of the questions it asks can be answered by compiler #defines or are<br>
just TOO STUPID TO ASK IN THE FIRST PLACE:<br>
<br>
<a href="https://landley.net/notes-2009.html#02-05-2009" rel="noreferrer" target="_blank">https://landley.net/notes-2009.html#02-05-2009</a><br>
<br>
And then of course, half of cross compiling is best described as "lying to<br>
autoconf". (It asks questions about the HOST and uses them for the TARGET.)<br>
<br>
Rob<br>
</blockquote></div></div>