[Toybox] patch: add built-in versions of sha-2 family hash functions
rob at landley.net
Wed Jun 9 00:11:24 PDT 2021
On 6/7/21 11:14 AM, enh wrote:
> On Fri, Jun 4, 2021 at 10:54 PM Rob Landley <rob at landley.net
> <mailto:rob at landley.net>> wrote:
> I've always been slightly unclear on what getty.c _does_ and why it's separate
> from login.c. (Is it related to stty?)
> in case i never sent the response to the list, TL;DR "yeah"...
> [i'm told that] if you're using real serial ports, you still need the baud rate
> setting features. if you're using real serial cables in an electrically noisy
> environment, you have another local getty patch that i honestly haven't
> understood well enough to even try to work out whether it makes any sense to
> upstream :-(
I fairly regularly use stty with actual serial ports, so I'm familiar with that
part. That's why I brought up "maybe this still-useful bit is related"...
The local getty patch is probably reading and ignoring data that comes in
immediately after bringing the serial port up and changing the speed? (Because
you can get a little static when the hardware powers on, and especially when you
plug/unplug actual serial cables. A common hack back in the day was to wait for
DTR to assert and just drop data that comes in when it isn't.)
What I don't understand is why it isn't stty+login. Or if you're going to build
this functionality _into_ something, why it isn't directly in login? A glue tool
like getty sending out /etc/issue is just weird.
Long long ago modems dialed in at various rates and actually set the serial port
to the rate of each call rather than keeping it fixed at the highest supported
one and buffering the data. (Which is odd when you have cts/rts and such? It has
to buffer anyway? But they wanted cheap...)
But I had a 1200 baud modem on the Commodore 64 that kept the serial connection
at 1200bps when connecting at 300. (In part because it had no way to signal the
C64 what speed the other end had connected at.)
By the time v.42bis was adopted in 1990
(https://ieeexplore.ieee.org/document/403565) you HAD to set the serial port to
a fixed rate faster than the hardware could actually GO (data compression!) and
keep it there, and then the modem would buffer and handle the flow control for
the actual data going through. (And before the CCITT standards the US Robotics
HST modems were doing something similar with a different wire protocol.) And this
was backwards compatible all the way back to 300 baud (the new modems spoke the
old protocols, but in the buffered way), so you haven't NEEDed what getty does
even for "modem-over-actual-land-phone-line" for 31 years now?
> aiui they were sometimes seeing XOFF sent to init, causing boot to hang.
I always disable xon/xoff flow control. In-band signaling for flow control is
just sad. (The Joe editor on SunOS had to handle the fact that "Ctrl-K Ctrl-S"
to save a cut and paste buffer was a common WordStar key binding, but Ctrl-S
stopped terminal output until it got a Ctrl-Q. WordStar originated on CP/M and
took over the DOS world until the Windows monopoly leveraged it out, and Unix
wanted to keep the ASR-33 teletype happy because it was durable and simple
enough to take apart and clean/oil that it stayed available cheap secondhand...)
The Apple II and VIC-20 had builtin keyboards and video display output, and IBM
did an ASCII keyboard on a mass produced machine capable of running unix (hence
Xenix) in 1981, and had finalized the layout by the PC/AT in 1984. The Stanford
University Network boxes that Sun commercialized had keyboards and displays. All
the AIX hardware did too. X11 standardized GUI plumbing in 1987 (which made the
NeWS guys with their competing PostScript-based implementation sad; they went on
to create Java at Sun, http://www.blinkenlights.com/classiccmp/javaorigin.html).
I look askance at the continued default deployment of any technology that last
had a good reason to exist before the birth of a sitting US senator.
> (although i understand that XOFF/XON is useful in theory, i've been disabling it
> since the early 1990s because i haven't deliberately used it since the early
> 1980s when computers were still slow enough for human reaction times to be
> somewhat meaningful there.)
In theory this is what the "scroll lock" key was for on the 1981 IBM PC
keyboard. In practice text output by programs running locally scrolled by too
fast for it to matter, and third party modem control programs didn't reliably
wire it up to rts/cts. (Because the original keyboard didn't have an LED for it
you couldn't tell when it was pressed, and leaving scroll lock on and then
calling tech support because you got no output was most cost effectively fixed
by disabling scroll lock entirely.)
> It'd be great if somebody could tackle stdbuf,...
> (i thought we'd already argued that this means you'll have the glibc-provided
> binary if you're using glibc, which means this doesn't make sense for toybox?)
Good point. It was added to the roadmap at the request of the tizen people ages
ago, but I'm happy to yank it out again now that I know more about it.
I have no idea what the status of Tizen is; I hear conflicting things.
> Rich Felker said he had a simple way to do it, but we've never sat down to have
> him explain it to me.
> (i'd be curious to hear that, because every implementation i'm aware of -- and
> the Unicode standard -- literally end up with a huge table *and* hard-coded
> special cases in the code. the closest i've come to "clever" with this was to
> hoist the hard-coded special cases out and have a separate "easy case" copy of
> the loop. but that's only a run-time "simplification", and makes the
> implementation strictly larger.)
> but more than that, i'd still like to hear an argument that trying to be clever
> here makes any _sense_ :-)
I did cc: Rich on the email. :)
> i'd ask for a single real-world example where someone's actually using this, but
> since BSD and GNU and Plan 9 trs don't, that doesn't exist.
> (and this ignores the question of "sure, but aren't we going to harm more
> ASCII-only 'kernel build' users by accidentally taking their locale into account
> than we are going to help imaginary Turkish AT&T lawyers still using the Unix
> command line for writing their patent applications in 2021", to which i'm pretty
> sure the answer is "yes, the only net result of implementing this would be that
> we'd need to tell a bunch of people to set their locale to "C" for their
> fwiw, getting back to something you said earlier, i think *this* is where one
> true awk "doesn't support utf-8" --- "convert Turkish input to upper/lowercase"
> _ought_ to be something that awk can do that tr can't (because tr is all about
> characters/bytes, but awk is all about strings), but one true awk can't do it
> either. perl and python can. realistically i think anyone who falls into the
> "no, i really do want to deal with all the weirdness of human scripts [in the
> Cyrillic/Hangeul sense of the word]" category (a) should use and (b) is already
> using python anyway. even the kernel and toybox use perl or C where awk would
> do. "it's POSIX", sure, but "no-one who wasn't doing this kind of thing in the
> 1990s has ever used it, and those of us who were don't want to write things that
> only we can maintain".
I basically want to hear Rich's idea for utf8 support in tr before doing the
> your non-POSIX cut(1) extension covers 80% of the in-the-wild use of awk anyway
> :-) if you still talk to any of the busybox folks, we should suggest they copy
I hath poked. If they dowanna but would be interested in merging an external
contribution, I can probably whip up a patch...
> --- it would be nice for it to be a de facto standard so we can get it into
> POSIX sometime around the 2040s... (and have made lives better for the folks who
> don't care about standards and just want to "get things done" in the intervening