[Toybox] FYI musl's support horizon.
enh
enh at google.com
Thu Aug 26 15:56:47 PDT 2021
On Thu, Aug 26, 2021 at 2:34 PM Rob Landley <rob at landley.net> wrote:
> On 8/24/21 6:27 PM, enh wrote:
> > yeah, i think he (and i, when i have my bionic hat on) have an easier
> > problem than you ... this kind of thing isn't too hard in the c library
> > because it's generally considered okay to just shrug and return -1 with
> > errno == ENOSYS or whatever.
> >
> > whereas as an "app", your users expect you to do the thing. (which at
> > best tends to mean you have an untested/barely tested "success" case.
> > no-one worries that their return -1 has bitrotted, but the non-inotify
> > path or the non-O_TMPFILE or whatever ... that's a lot more likely!)
>
> I keep telling people I could spend a focused year on JUST the test suite
> and they don't believe me. When people talk about function testing vs
> regression testing vs coverage testing I get confused because it's all
> the same thing?
i'll include the main failure modes of each, to preempt any "yes, but"s by
admitting that _of course_ you can write fortran in any language, but the
idea is something like:
integration testing - answers "does my product work for real use cases?".
you definitely want this, for obvious reasons, and since your existing
testing is integration tests, i'll say no more. other than that the failure
mode here is relying only on integration tests and spending a lot more
time/effort debugging failures than you would if you could have caught the
same issue with a unit test.
unit testing - reduces the amount of digging you have to do _when_ your
integration tests fail. (also makes it easier to asan/tsan or whatever,
though this is much more of a problem on large systems than it is for
something like toybox, where everything's small and fast anyway, versus
"30mins into transcoding this video, we crash" kinds of problem.) for
something like toybox you'd probably be more interested in the ability to
mock stuff out --- your "one day i'll have qemu with a known set of
processes" idea, but done by swapping function pointers. one nice thing
about unit tests is that they're very easily parallelized. on a Xeon
desktop i can run all several thousand bionic unit tests in less than 2s...
whereas obviously "boot a device" (more on the integration test side) takes
a lot longer. the main failure mode here (after "writing good tests is at
least as hard as writing good code", which i'm pretty sure you already
agree with, and might even be one of your _objections_ to unit tests), is
writing over-specific unit tests. rather than writing tests to cover "what
_must_ this do to be correct?" people cover "what does this specific
implementation happen to do right now, including accidental implementation
details?". (i've personally removed thousands of lines of misguided tests
that checked things like "if i pass _two_ invalid parameters to this
function, which one does it report the error about?", where the correct
answer is either "both" or "who cares?", but never "one specific one".)
coverage - tells you where your arse is hanging out the window _before_
your users notice. (i've personally had tests that i'd written and two
other googlers had code reviewed turn out -- when i finally got the
coverage data -- to be missing important stuff that [i thought] i'd
explicitly written tests for. Android's still working on "real time"
coverage data showing up in code reviews, but "real Google" has been there
for years, and you'd be surprised how many times your tests don't test what
you thought they did.) the main failure mode i've seen here is that you
have to coach people that "90% is great", and that very often chasing the
last few percent is not a good use of time, and in the extreme can make
code worse. ("design for testability" is good, but -- like all things --
you can take it too far.)
> You have to test every decision point (including the error paths), you
> have to exercise every codepath (or why have that codepath?) and you
> have to KEEP doing it because every distro upgrade is going to break
> something.
>
yeah, which is why you want all this stuff running in CI, on all the
platforms you care about.
> In my private emails somebody is trying to make the last aboriginal linux
> release work and the old busybox isn't building anymore because makedev()
> used to be in #include <sys/types.h> and now it's moved to
> <sys/sysmacros.h>. (Why? I dunno. Third base.)
the pain of dealing with that pointless deckchair crap with every glibc
update is one reason why (a) i've vowed never to do that kind of thing
again in bionic [we were guilty of the same crime in the past, even me
personally; the most common example being transitive includes] and (b) i'm
hoping musl will care a bit more about not breaking source compatibility
... but i realize he's a bit screwed because code expecting glibc might come
to rely on the assumption that <sys/types.h> *doesn't* contain makedev(),
say --- i've had to deal with that kind of mess myself too. sometimes you
can't win.
> And yes I confirmed that version skew using the Centos release that other
> guy poked me about last week. It has the old one, my laptop (and man7.org)
> has the new one, it changed somewhere in between, no idea why.
>
> Linus is much better about avoiding this in the kernel, but you still get
> it with the /sys directory because Greg KH is SUCH an asshole. And Peter
> Anvin is on a mission to add gratuitous build dependencies to every
> project he touches for reasons I've never understood despite repeatedly
> asking him to explain...
>
> Rob
> _______________________________________________
> Toybox mailing list
> Toybox at lists.landley.net
> http://lists.landley.net/listinfo.cgi/toybox-landley.net
>