[Toybox] [PATCH] sh: pass "\" to the later app
Rob Landley
rob at landley.net
Sat Jul 8 04:41:10 PDT 2023
On 7/6/23 20:09, Chet Ramey wrote:
> On 7/5/23 3:29 AM, Rob Landley wrote:
>>>>> It's really a useless concept, by the way.
>>>>
>>>> It's not that simple: kill has to be built-in or it can't interface with job
>>>> control...
>>>
>>> That's not what a special builtin is. `kill' is a `regular builtin' anyway.
>>
>> I started down the "rereading that mess" path and it's turning into "reading all
>> the posix shell stuff" which is not getting bugs fixed. And once again, this is
>> a BAD STANDARD. Or at least badly organized. There's three groups here:
>
> OK. This is a decision that was made, what, 45 years ago? These are the
> Bourne shell special builtins -- at least as of SVR4. Korn added a couple,
> but since the Bourne shell didn't have them, they were not added to the
> list.
>
> Special builtins will exit a non-interactive shell on an error, assignments
> preceding them persist, and they're found before shell functions in the
> command search order. That's pretty much it. It's not that the builtins
> have to be implemented internally, but that these have other properties.
>
> They're a POSIX concept, so bash conforms when in posix mode. In default
> mode, every builtin is treated the same.
Blah. bash -p is privileged, not posix. And most of bash's command line options
aren't in the option list at the start of the man page, they're in the set
builtin. (And you don't search for ^builtin, you search for "^shell builtin",
which I never remember; it's one of those "I can't look up how to spell
something I don't know how to spell" things...)
Right. Thanks for the explanation: exit, assignments persist, higher priority
than shell functions.
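For my own notes, the "assignments persist" part only shows up under --posix,
since as you say default mode treats every builtin the same:

$ bash --posix -c 'var=1 : ; echo "[$var]"'
[1]
$ bash --posix -c 'var=1 true ; echo "[$var]"'
[]

The : is a special builtin so the preceding assignment sticks; true is a
regular builtin so the assignment only lands in its environment.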
>> Why group 1 doesn't include "wait" I dunno.
>
> It's not a Bourne shell special builtin: errors in it don't exit the shell.
...
>> Distinguishing : from true seems deeply silly
>
> true wasn't a special builtin in the Bourne shell.
It isn't because it wasn't. Historical reasons, no other pattern or logic.
>> (especially when [ and
>> test aren't)
>
> Not part of the Bourne shell, only came in in System III, never a special
> builtin.
>
>> and "times" is job control
>
> It's not. It's a straightforward interface to the `times' library function
> (originally system call in 7th edition).
Ah, I thought it would list times for children individually. (I've never
used it nor seen anything use it.) So "child processes" here is not restricted
to jobs then...
$ disown sleep 5; times
bash: disown: sleep: no such job
bash: disown: 5: no such job
0m0.029s 0m0.019s
0m1.050s 0m0.159s
Sigh. Just... sigh. (TODO: test if this means immediate children or
grandchildren too. TODO: implement disown -c "command". TODO: figure out what
label each of those 4 times would have... ah, the man page of the system call
it's wrapping has the 4 labeled, I'm guessing it's outputting them in that
order. I note that "help times" does not actually explain that if you don't
already know... And it presumably includes waited-for grandchildren but not
the ignore-SIGCHLD or reparent-to-init children.)
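(If I'm reading times(2) right, struct tms goes tms_utime, tms_stime,
tms_cutime, tms_cstime, so "in order" would make the output above:

0m0.029s 0m0.019s   <- the shell's own user and system time
0m1.050s 0m0.159s   <- waited-for children's user and system time

which at least matches "help times" talking about the shell and processes run
from the shell.)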
>>>> I
>>>> remember I did make "continue&" work, but don't remember why...)
>>>
>>> Why would that not work? It's just a no-op; no semantic meaning.
>>
>> Not _quite_ a NOP:
>
> I mean, it creates a child process which immediately exits, but it has
> no effect on the shell other than to note that we created an asynchronous
> child process (which sets $!) that exited. It certainly doesn't affect
> flow control.
The & terminates the statement as usual, but the command does not run in a
child process. Especially with continue [n] being able to take an argument,
that took a little special casing in my code.
>> $ for i in one two three; do echo a=$i; continue& b=$i; done
>> a=one
>> [1] 30698
>> a=two
>> [2] 30699
>> a=three
>> [3] 30700
>> [1] Done continue
>> [2]- Done continue
>>
>> Notice the child processes and lack of b= lines.
>
> Why would you expect a b= line?
If it were actually a NOP because the continue ran in a subshell.
> Even if the `continue&' were not there,
> the `;' after the first echo command makes the b= line a separate simple
> command. Who's going to echo `b=$i' and why would they? Maybe if you had
> an `echo' in there instead.
Blah, that's what I meant. :P
>> As far as I can tell, it's NOT more than \$ \\ and \<newline> that get special
>> treatment in this context?
>
> Plus double quote (in double quotes, but not here-documents) and
> backquote.
Aha! I forgot backquote.
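So in this context, something like:

$ echo "\$ \" \x"
$ " \x
$ cat <<EOF
> \$ \" \x
> EOF
$ \" \x

The \$ loses its backslash in both, but \" only does inside double quotes, and
\x keeps its backslash in both contexts.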
>> And it's the short-circuit logic again:
>>
>> $ echo $((1?2:(1/0)))
>> 2
>> $ echo $((1&&(1/0)))
>> bash: 1&&(1/0): division by 0 (error token is "0)")
>> $ echo $((1||(1/0)))
>> 1
>
> That's not the same thing; arithmetic expression evaluation follows the
> C rules for suppressing evaluation.
Which is the short-circuit logic.
>> I hadn't put an "echo" in there, but I'd noticed that \" is already not removed
>> in HERE context. I'd _forgotten_ that it is in "abc" context.
>
> Right.
>
>> I have a vague todo item for that, but the problem is my data structures don't
>> recurse like that so I don't have a good place to stick the parsed blockstack
>> for $() and <() and so on, but it just seems wasteful to re-parse it multiple
>> times and discard it?
>
> It kind of is, but you need to keep the text around for something like
>
> cat <<$(a)
> x
> $(a)
>
> which POSIX says has to work.
Sigh. Test added...
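For the record, the test is something like:

$ cat <<$(a)
> x
> $(a)
x

The unquoted $(a) never runs, it's just matched literally as the delimiter
(and since no part of it was quoted, the body still gets expansions applied).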
I need to go hole up in a hotel room for 3 months and get toysh to pass all its
existing tests in the test suite! Bash does, but half of mine are notes-to-self
about something I still don't handle.
>> Yeah yeah, premature optimization. I'm fiddling with this stuff a bit anyway for
>> function definitions, but when you define a function inside a function my code
>> has a pass that goes back and chops the inner functions out and puts them in a
>> reference counted list and replaces them with a reference:
>>
>> $ x() { y() { echo why; }; echo ecks; unset -f x; y; }; x; y; x
>> ecks
>> why
>> why
>> bash: x: command not found
>>
>> I don't THINK I can do a local function, it's a global function namespace, they
>> outlive the text block that defined them, and you can still be running a
>> function that is no longer defined, so... reference counting. :P
>
> Reference counting is ok. Bash just copies the parsed function body (x in
> this case) and executes that, then frees it. That way you can let the
> function get unset and not worry about it.
Each time you call the function? Including all the strings?
>> But still, the pipeline list itself isn't what's nesting there. I think. And
>> given that arguments can be "abc$(potato)xyz" with the nested thingy in the
>> middle of arbitrary other nonsense, deferring dealing with that until variable
>> resolution time and then just feeding the string between the two () to
>> do_source() made sense at the time...
>
> You have to parse it to find the end of the command substitution, bottom
> line. You can't get it right otherwise.
I acknowledge that it's not right. I expect to hit something that breaks it deep
in some package build, but I'm holding off that bout of re-engineering until
then because I've got so much else to do and in hopes of coming up with a less
hideous way to represent the result by then...
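The usual stress test for "just count parentheses" is an unbalanced ) in a
case pattern:

$ echo $(case x in x) echo hi;; esac)
hi

which only works if the substitution scanner actually understands shell syntax
rather than counting nesting depth.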
>>>>>>> The current edition is from 2018.
>>>>>>
>>>>>> Except they said 2008 was the last feature release and everying since is
>>>>>> bugfix-only, and nothing is supposed to introduce, deprecate, or significantly
>>>>>> change anything's semantics.
>>>
>>> When, by the way?
>>
>> When did they say this? Sometime after the 2013 update went up, before the 2018
>> update went up. It was on the mailing list, but...
>
> I don't remember seeing that is all.
Hmmm... git annotate on roadmap.html to find where I added the 2008 URL says
October 2019, and my corresponding blog entry (October 20) says I found that URL
googling. I'm not finding a relevant email in my inbox from the time period, and
posix gets mentioned a LOT in my inbox; I'm not currently coming up with useful
search keywords to find anything there...
I remember _somebody_ explaining this to me but it's enough years ago (and
three bouts of covid) that I dunno who and I could be mis-remembering/interpreting.
*shrug* Sorry...
>>>> The project isn't dead, but those are defined as bugfix releases. Adding new
>>>> libc functions or command line options, or "can you please put cpio and tar back
>>>> in the command list", are out of scope for them.
>
> cpio and tar were two of those incompatible-never-the-twain-shall-meet
> things, so we have pax (and peace too, I guess).
I don't expect tar to be able to read a zip file, and vice versa. Nor do I
expect them to handle mksquashfs and mkisofs or mtools or...
For a hint how thoroughly posix has been ignored on this one, this page does not
include the string "pax":
https://www.explainxkcd.com/wiki/index.php/1168:_tar
>> It was nice when posix noticed that glibc'd had dprintf() for years, it was nice
>> when they noticed linux had openat() and friends, but it was never a leading
>> indicator.
>
> They don't go out and look for this stuff. Someone has to write a proposal
> in the proper format and shepherd it through. Look at how long it took
> for $'string'.
>
>> When they removed "tar" and "cpio", Linux didn't. (Initramfs is cpio.
>> RPM is cpio.) Nobody installs "pax" by default.
>
> $ type -a pax
> pax is /bin/pax
>
> If you want to pass a certification test, you do.
Back in the 90's that certification gated federal procurement contracts, but
FIPS 151-2 was withdrawn as obsolete in 2000:
https://www.federalregister.gov/documents/2000/02/25/00-4512/announcing-approval-of-withdrawal-of-thirty-three-federal-information-processing-standards-fips
Apple only got posix certification because the Open Group were suing them for
$200 million over use of the unix trademark in advertising:
https://www.quora.com/What-goes-into-making-an-OS-to-be-Unix-compliant-certified
Linux was offered a discount on a posix certification crucible back in the 90's,
but Linus declined because parts of posix were "obviously dumb":
https://www.computerworld.com/article/2798532/linux-in-government.html?page=2
I'm aware you _can_ install pax. You can also install sccs. (I note that "ed" is
still installed by default on linux, but pax is not.)
>> Document, not legislate...
>
> Except back in 1990 where the tar folks and the cpio folks both politely
> told each other to pound sand, and that they'd never approve the rival
> format and utility, and POSIX had to do something.
"Something must be done, this is something, therefore we must do it." - Humfrey
Appleby, Yes Prime Minister.
I'm aware they did it. It failed. They will not acknowledge that.
>> The Apple vs Franklin decision extended copyright to cover binaries in 1983,
>> clearing the way for AT&T to try to commercialize the hell out of System III/V
>
> I think the 1982 decision that allowed at&t to get into the computer and
> software business after giving up its telephony monopoly had more to do
> with it, but that certainly helped at&t.
Somewhere around here I have Lawrence Graham's book "Legal Battles that Shaped
the Computer Industry", which has a section on this. A lot about the upcoming
change was known in 1982, and there was a whole trend line since 1976 with a
lot of people lobbying to change the law.
Here's a recording of 20-something Bill Gates whining about how he testified
before congress that a book explaining the TRS-80 rom (with the listing on one
side and explanation on the other) was in his opinion violating his copyrights,
and how UNFAIR it was that congress had refused to change the law to make his
opinion true when William H. Gates III told them what reality should be:
audio: https://landley.net/history/mirror/ms/gates.mp3
transcript:
https://features.slashdot.org/story/00/01/20/1316236/b-gates-rants-about-software-copyrights---in-1980
context: https://maltedmedia.com/books/papers/sf-gates.html
> After that, at&t and its "consider it standard!" campaign eventually did
> the job.
Robert Young's book "Under the Radar" has multiple chapters on this, and then I
had to research even MORE of it for
http://www.catb.org/~esr/halloween/halloween9.html and detail exactly how AT&T
destroyed every part of the community willing to listen to it, scorched the
earth, went around selling "Amendment X" to everybody for one last bite of that
royalty apple, and then unloaded the corpse on Novell.
(I came in towards the end of that, but Eric started life as a Vax admin and
lived through a decade or so more of it than I did, and pointed me at where the
bodies were buried.)
Oh, and Peter Salus' book "A quarter century of Unix" had a lot of good stuff,
including the role of Dave Cutler, "Chief Unix Hater" at DEC who left to be lead
architect of Windows NT at Microsoft when he finally couldn't keep Unix out of
DEC anymore:
https://landley.net/history/mirror/ms/kanoarchitect.asp
I'm very interested in "how we got here", but a lot of that is Chesterton's
Fence stuff: once I know why it was there, I can remove it.
>> But I still think the main stake to the heart was the Bell Labs guys getting put
>> back in their bottle by AT&T legal, meaning nobody ever saw the Labs' Unix
>> Release 8-10, or got to look at Plan 9 before Y2K.
>
> They weren't really interested in writing software for commercial use, and
> at&t was very interested in commercializing Unix.
They seemed interested in supporting the community of people who used their
software. That's unrelated to commercializing, and in the end was significantly
hampered by it. The geologists got shoved aside by the gold rush prospectors.
I've only communicated with Ken briefly by email, and only spoke to Dennis
Ritchie on the phone once. I got to have dinner with Doug McIlroy when he
attended Ohio Linuxfest though, he was quite nice. I saw a FASCINATING history
of unix talk their manager gave at Atlanta Linux Showcase in 1999, in a side
room with an overhead projector, but it wasn't recorded, he's since died, and
of course it's not in the proceedings because that wasn't one of the
"important" talks. :(
>> The result of $(blah) and $BLAH are handled the same there? Quotes _inside_ the
>> subshell are in their own context.
>
> Yes, that's the point I was trying to make.
>
>> Hmmm... Smells a bit like indexed arrays are just associative arrays with an
>> integer key type, but I guess common usage leans towards a span-based
>> representation?
>
> It depends on whether or not you want to support very large arrays. The
> bash implementation has no trouble with
>
> a=( [0x1000000]=$'\371\200\200\200\200' [0x1000001]=$'\371\200\200\200\201'
> [0x1000002]=$'\371\200\200\200\202' [0x1000003]=$'\371\200\200\200\203'
> [0x1000004]=$'\371\200\200\200\204' )
>
> Which will eat huge amounts of memory if you use a C-type array. Bash uses
> a doubly-linked list with some saved indices to make sequential access
> very fast.
Depends on your definition of "large array": 5 entries large, or large address
space large.

(And in _theory_ Linux does demand faulting, only populating virtual memory's
redundant mappings of the zero page with actual physical backing pages on
write, but once you've washed an allocation through malloc's heap management
it's a lot less reliable about that, and zeroed physical pages don't go BACK to
being virtual, and the 4k granularity isn't always ideal, and...)
But yeah, point taken.
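The sparse part is at least easy to poke at from the command line: the element
count stays at the number of assigned entries, not the highest index plus one:

$ a=( [0x1000000]=big ); a[3]=small
$ echo ${#a[@]} : ${!a[@]}
2 : 3 16777216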
>>> You just have to be
>>> really disciplined about how you treat this `exists but unset' state.
>>
>> $ export WALRUS=42; x() { local WALRUS=potato; unset WALRUS; WALRUS=abc;
>> > echo $WALRUS; env | grep WALRUS;}; x
>> abc
>> WALRUS=42
>>
>> Ok, now I'm even more confused. It's exporting inaccessible values? (I know that
>> you can export a local, which goes away when the function returns...)
>
> Creating a local variable, which does not inherit the attributes from any
> global variable, does not cause the environment to be recreated.
I had a test that said the environment is recreated at child process launch
time, but I'd have to go back through 3 years of blog entries to see what test
that _was_ at this point.
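The quick version of a test like that, which at least looks consistent with
building the environment at launch time:

$ export WALRUS=42; WALRUS=43; env | grep WALRUS
WALRUS=43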
>>>> Anyway, that leaves VAR_ARRAY, and VAR_DICT (for associative arrays). I take it
>>>> a sparse array is NOT a dict? (Are all VAR_ARRAY sparse...?)
>>>
>>> The implementation doesn't matter. You have indexed arrays, where the
>>> subscript is an arithmetic expression, and associative arrays, where the
>>> subscript is an arbitrary string. You can make them all hash tables, if
>>> you want, or linked lists, or whatever. You can even make them C arrays,
>>> but that will really kill your associative array lookup time.
>>
>> Eh, sorted with binary search, but that has its own costs...
>
> Resorting the array (or rebalancing a tree, or whatever) every time you add
> a value? That's more work than is worth it.
>
>> Again, sounds like an indexed array is just an associative array with an integer
>> lookup key...
>
> Sure, if you want to look at it that way.
>
>>
>>>> Glancing at my notes for any obvious array todo bits, it's just things like "WHY
>>>> does unsetting elements of BASH_ALIASES not remove the corresponding alias, does
>>>> this require two representations of the same information?
>>>
>>> There's no good reason, I just haven't ever made that work.
>
> There's no unset hook for dynamic variables.
>
>>>>>> An "initial operand", not an argument.
>>>>>
>>>>> That's the same thing. There are no options to POSIX echo. Everything is
>>>>> an operand. If a command has options, POSIX specifies them as options, and
>>>>> it doesn't do that for echo.
>>>>
>>>> Hence the side-eye. In general use, echo has arguments. But posix insists it
>>>> does not have arguments.
>
> I was never sure what this is supposed to mean. What POSIX calls operands
> are arguments, are they not?
Semantic argument. Them words on the command line after the command name have
properties, and there's groups/families of them with similar properties.
>>> What did you think would happen to the unquoted backslash?
>>
>> I meant asking newbies to learn to use printf from the command line before echo
>> means they have to quote the argument and add \n on the end as part of "simple"
>> usage, which seems a fairly heavy lift.
>
> The sole advantage echo has for a newbie is that it adds the newline.
Depends whether you're trying to get them to learn C at the same time?
Explaining that you _could_ say printf abc:$X or you can say printf abc:%s $X
and there's multiple ways to do it but you're not expected to understand the
difference between them until much later and as long as you never try to print
anything with a % in it you're fine except yes the $X context expanding to %s
can mean something and no quoting it won't help because...
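Which only bites when the data contains a %:

$ X=100%s
$ printf "abc:$X\n" oops
abc:100oops
$ printf 'abc:%s\n' "$X"
abc:100%s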
>>>>>> Maybe posix should eventually break down and admit this is a thing? "ls . -l"
>>>>>> has to work,
>
> Why does `ls . -l' have to work?
Because existing scripts use it.
> ls . -l
> ls: -l: No such file or directory
> .:
> [directory contents]
>
> If the Linux folks want to reorder arguments so that things that look like
> options come first, then they can do it as an extension.
Linux could change the way it's worked for the past 30 years for no other reason
than to conform more closely to posix, sure.
This gets us back to "document vs legislate"...
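(For what it's worth, the GNU reordering already has an off switch: getopt
stops permuting arguments when POSIXLY_CORRECT is set:

$ ls . -l                    # permuted: -l parsed as an option
$ POSIXLY_CORRECT=1 ls . -l  # not permuted: -l is an operand

so the "extension" already degrades to the behavior you showed.)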
>> You asked why do I think posix doesn't acknowledge $THING today. My experience
>> with raising issues where posix and common usage seemed to have significant
>> daylight between them involved abrasive gatekeeping, resulting in me wandering
>> away again and leaving the historical memorial to HP-UX and A/UX and so on to
>> its own devices.
>>
>> It's possible my experience was unusual?
>
> Not necessarily; Jorg treated a lot of people that way. But the mistake is
> treating him as a representative of anything but himself or a member of the
> working group.
It's not that he was loud, it's that he was never publicly contradicted by the
other members. They _let_ him speak for the group.
When he wandered into other areas (which happened a lot; for a guy who hated
Linux that much, he spent a lot of time expressing that hatred up close and
personal) he got shot down:
https://lkml.org/lkml/2004/8/8/16
https://slashdot.org/story/06/09/04/1335226/debian-kicks-jrg-schilling
https://lwn.net/Articles/346540/
But in the austin group he was a warmly welcomed senior member whose word held
much weight.
>>> The GNU utilities do all sorts of argument reordering, but that doesn't
>>> mean you're going to get that in POSIX.
>>
>> See "daylight between what posix says and what reality's been doing for
>> decades", above.
>
> POSIX isn't a "let's rubberstamp what Linux is doing despite what other
> implementations do" kind of group.
Document not legislate, chesterton's fence, frame of reference to diverge from...
I'm looking for a Linux standard. I mentioned freebsd's linux emulation layer,
windows WSL, AIX 5L, and people writing their own tiny linux-compatible kernels
from scratch (ala https://github.com/vvaltchev/tilck), and how back in 1997
representatives from a half-dozen companies got together to come up with a
common x86 unix binary format and decided "there already is one, it's Linux ELF":
https://web.archive.org/web/20000816002148/http://www.telly.org/86open/
which was 26 years ago. And which led to
https://phys.org/news/2004-08-solaris-os-feature-sun-linux.html and so on.
Saying we must respect the historical diversity of non-linux unix is a bit like
saying DOS needed to respect the diversity of non-PC CP/M on the S-100 systems.
It's certainly a point of view, but that's not really what happened. Yes humans
are primates, but "primate is the only interesting category, not human" isn't
useful to me.
>> When I see reality does not match posix, I do not automatically conclude that
>> reality is wrong.
>
> Your day-to-day computing reality, sure. My day-to-day computing
> environment is different, for example, and in this case, it seems to
> match POSIX.
You've already mentioned you personally took your project through posix
certification. Just possibly a little bit of selection bias there.
If you're using a mac, one of their formative experiences was spending a lot of
money to shield themselves from an enormous amount of legal liability via posix,
and osx is also a descendant of the NeXT system from a founder who has since
died. (The worship of dead radicals, etc.) I understand the historical reasons
why they scrupulously maintain posix compatibility, including the fiduciary
exposure of writing down the dollar value attached to their historical
investment to achieve it.
I also understand why homebrew exists...
>> My point was those are basically the only cases where that requirement exists.
>> The rest of them can "rm dir -r" and what posix says about it doesn't matter.
>
> Sure, on Linux.
My pining has been for a linux standard.
>> (And yes I have a TODO item to have wildcards expand to "./-file" as necessary...)
>
> Contortions like that are why argument reordering is a bad idea.
Sure, but that's been how Linux works since about 1993.
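The usual demonstration of why that TODO item exists:

$ touch ./-l file
$ ls *      # expands to "ls -l file": the -l parses as an option
$ ls ./*    # expands to "ls ./-l ./file": both are clearly operands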
Linux was written under minix, using solaris manuals out of a library, with the
advice of BSD developers taking a break in the aftermath of the BSDi lawsuit,
and sucking in random gnu crap because it was there.
Linus asked for a copy of posix two months before announcing his kernel:
https://www.cs.cmu.edu/~awb/linux.history.html
But it was paywalled, and remained paywalled for another decade. Making posix
freely available in the 2000s was a bit like relicensing minix under BSD in
April 2000: that ship had sailed.

If Linux _were_ posix compatible then FIPS 151-2 probably wouldn't have been
revoked. A big pressure for revoking it was that procurements wanted to use
Linux instead.
>> There are instances where they've been good, yes. Removing tar was "legislate,
>> not document" and they explicitly refused to acknowledge that it was a mistake
>> over a decade later.
>
> Refer to my previous comment about pounding sand. The standard would not
> have been approved in 1992 with tar and cpio. There were a lot more
> companies with a stake in it back then.
Chesterton's fence again.
>> The "cathedral" in "The Cathedral And the Bazaar" was the
>> GNU project, as mentioned in the paper's abstract on the 1998 Usenix website
>> https://www.usenix.org/conference/1998-usenix-annual-technical-conference/software-development-models-cathedral-and-bazaar
>
> Kind of. It was mentioned, and used as an example, but Kirk giving the talk
> with esr kind of biased the Cathedral model towards BSD.
I didn't meet Kirk until 2008 but hung around with Eric a lot in the early
2000s. (I crashed on the couch in his basement for 3 months doing an "editing
pass" on The Art of Unix Programming that doubled the size of the book, and he
mentioned in http://www.catb.org/esr/writings/taoup/html/pr01s06.html almost
calling me a co-author.) So I admit my sources here are highly biased. :)
>> It's kinda bureaucracy-ish.
>
> As the stakes rise, and the scope grows, processes grow to meet them. The
> culture changes.
>
>
>> I have a whole bunch of blue sky todo items, but my _focus_ is getting A)
>> Android self-hosting,
>
> Yeah, there's a ways to go.
>
> https://lists.gnu.org/archive/html/help-bash/2023-06/msg00117.html
>
> They mess up the simple stuff.
I want them to create what I've been calling a "posix container", although this
conversation is convincing me more and more of the irrelevance of posix. But the
point is, their security model predates containers by several years, so they did
a lot of things like giving every app its own UID and spraying the system down
with extensive SELinux rules, and they have backwards compatibility concerns
with their old decisions.
But they ship a billion unadministered systems with 24/7 broadband access,
geolocation, a microphone capable of hearing the entire room (you think
speakerphone mode doesn't work just because the screen's off and it's in your
pocket?) and people do banking through these things. They do NOT want a warhol
worm or "evil maid" attack in that environment, so it's all VERY LOCKED DOWN.
I did a contract for Parallels a decade ago (when was that...
https://landley.net/lxc/ says 2011) where I was on the team porting the early
containers plumbing from openvz to vanilla Linux. (It started life at a russian
bank in 1999 and was developed for a decade before they seriously tried to
mainline it, but Linus threw up all over their "let's add a zillion new
syscalls" design and instead said it should use synthetic filesystems as the
control mechanism, so there was a lot of rewriting.)
Android shipped to the public in 2008, but containers in vanilla linux only
really became a viable thing around 2012 and took a few more years to be
properly load-bearing. I was making puppy eyes at them to look at it for a
couple years, and then they did minijail (https://lwn.net/Articles/700557/), and
a little of it is integrated into the bionic bootloader, and there's the whole
"zygote" thing and their PID 1 daemon that I've never properly wrapped my head
around... but anyway, they now use container plumbing in another layer on top of
what was already there.
But what I want is a container within which I can run a group of build programs
in some variant of a chroot with reasonably conventional Linux semantics. And
when I raised this to Elliott he pointed out things like the android package
format doesn't have a way to request a uid/gid _range_ (it just gets assigned
_one_ at install time), so my desire for "container-local root and container
local guest" would require host-side work...
I'm trying to get the AOSP build working in mkroot before going back and making
fresh puppy eyes at them...
>> Eventually the Alpine Linux guys came along and built a distro around the work
>> I'd done (after I'd already left it behind, but hey).
>
> Isn't that the default Linux image for Docker?
...mostly yes? (There's politics. Define "default". People argue. But it's a
very common one, yes.)
>> Plus make and bash, which can't be external gpl packages _and_ ship in the
>> android base image.
>
> Thorsten would be happy for android to keep using mksh, I'm sure.
>
>> Devuan is a thin skin over Debian, when I ask about this sort of thing on the
>> #devuan libra.chat channel they point me at
>> https://packages.debian.org/search?keywords=bash and similar.
>
> Debian still has bug reports on their bash page from 2005; how am I
> supposed to take that seriously?
>
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=335642
Ah yes, the famous "debian stale".
*shrug* Gotta run something. I gave up on xubuntu when the last version without
systemd went out of LTS.
It's better than ~2003 when I was installing knoppix on my hard drive because
the "linux on the desktop" bubble had burst and nobody else was really trying to
make it work. Red Hat exited the desktop market for "enterprise", SuSE filed
bankruptcy, the maintainer of Slackware had a chronic illness and still shipped
a 2.4 kernel years after 2.6 came out, debian was paralyzed by flamewars and had
something like a 5 year gap between releases, and a horde of debian developers
fleeing the flamewars landed on gentoo, outnumbered the original community
something like 3 to 1, and did the White Man's Dance ("I heard this was a civil
technical-focused community but since we arrived it's been nothing but
flamewars, we were lied to!")
Ubuntu circa 2005 finally kicked linux on the desktop back into gear, but there
were a lean few years there. They shipped a debian derivative (like knoppix) and
hired a couple full-time developers to de-constipate debian, because allowing
their base to collapse was bad PR. (Which unfortunately meant that stupidity
like 2006's https://wiki.ubuntu.com/DashAsBinSh got pushed upstream into debian,
because they were just so happy to have some full-time developers saving them
from themselves...)
The drought was partly because of the dot-com bust, but mostly because Red Hat
abandoned the shrinkwrap Linux market when the new management they brought in to
handle their IPO in y2k explained Sun Microsystems' business model to the old
management: nobody WANTS to use Slowaris when they could use Linux, it's
obviously technically inferior
(https://www.landley.net/history/mirror/linux/kissedagirl.html), but federal and
fortune 500 procurement contracts commonly cap a vendor's profit at a percentage
of the cost of materials, so if they specified a $5k solaris seat they could get
$500 profit, but a $29.95 box of Red Hat was $2.98 profit. The Red Hat guys went
"you mean if we significantly raise the price our sales will go UP?" and tried
it, and thus was born Red Hat Enterprise. They went from something like $15
million annual revenue to over $100m VERY FAST, their engineers got YANKED out
of the desktop market to serve the enterprise customers because money, and they
left the community in the lurch but did not care because MONEY.
(The slackware maintainer's health problems turned out to be his electric
toothbrush vaporizing plaque bacteria, giving him lung problems when they were
inhaled.)
> Chet
Rob