[Toybox] tests

Rob Landley rob at landley.net
Wed Jun 17 10:13:25 PDT 2020


> i'm not sure i understand the intent behind this change:
> 
> commit 7f062f2dcfa5511139476e1aef8db74f49566432
> Author: Rob Landley <rob at landley.net>
> Date:   Tue Nov 20 17:50:05 2018 -0600
> 
>     Skip tests that don't have the executable bit set unless $TEST_ALL set.

I blogged about it when it went in:

  https://landley.net/notes-2018.html#20-11-2018

It's a half-assed attempt to create tests/pending.
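
The mechanism is just an executable bit check in the test runner, roughly this
shape (a sketch of the idea, not a verbatim quote of scripts/test.sh):

  # Skip tests/*.test files that aren't chmod +x unless TEST_ALL is set.
  for TEST in tests/*.test
  do
    [ -z "$TEST_ALL" ] && [ ! -x "$TEST" ] && continue
    # ... run this test file ...
  done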

> i assumed this was to avoid running tests for stuff in pending, but it
> seems like there's quite a random mix of tests not being run
> currently.

Commands promoted out of pending don't necessarily have _tests_ in an equivalent
usable state. There are tests for promoted commands where the tests themselves
haven't been properly cleaned up, and there were a lot of "known failures" in
the test suite at the time.

> even basics like date, test, touch, and xargs aren't having their
> tests run. oversight?

It's a dropped ball. It was intentional at the time, but it's been a while...

> either way, i wonder whether it would make more sense to have `make
> tests` run _all_ the tests and add something like `make quicktests` if
> that was your intention? or have a tests/pending/ directory if _that_
> was your intention?

Eventually all the tests should be cleaned up and promoted. Some glorious day...

> and it wouldn't hurt to modify the `make help` output too, if not
> running all the tests was actually intentional:
> 
>   tests           - Run test suite against all compiled commands.
>                     export TEST_HOST=1 to test host command, VERBOSE=1
>                     to show diff, VERBOSE=fail to stop after first failure.

It doesn't mention VERBOSE=xpect or nopass either, and soon there should be a
way to run it under qemu... I need a FAQ entry explaining the test suite.
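
For the record, the knobs it understands right now are more or less these
(scripts/test.sh is the authority here, not my memory):

  make tests                 # run the suite against all compiled commands
  TEST_HOST=1 make tests     # test the host's commands instead of toybox's
  VERBOSE=1 make tests       # show the diff when a test fails
  VERBOSE=fail make tests    # stop after the first failure
  VERBOSE=xpect make tests   # (undocumented, see scripts/test.sh)
  VERBOSE=nopass make tests  # (undocumented, see scripts/test.sh)
  TEST_ALL=1 make tests      # don't skip the non-executable .test files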

My problem is, writing toybox documentation eats 100% of the available time.
Review and cleanup of submissions/pull requests/bug reports eats 100% of the
available time. Expanding test coverage eats 100% of the available time.
Designing a new test suite with root/container stuff for modules and "test ps"
and so on eats 100% of the available time. Writing a proper new shell eats 100%
of available time. Getting up to speed on dhcp and dd corner cases and so on
eats 100% of the available time. Writing new rsync/screen/smbd commands eats
100% of the available time. Designing a proper systemd replacement init
(https://landley.net/notes-2015.html#03-06-2015 and
https://landley.net/systemd-notes.txt and such) eats 100% of the available time.

Doing build systems (turning mkroot into a proper aboriginal replacement) eats
100% of available time. Doing toolchains and build infrastructure (qcc and
pushing mcm-buildall.sh upstream into mcm and puppy eyes at Zach to update the
binaries dir and llvm-cbe bootstrapping and poking the llvm guys to package a
prefix-cc symlink) eats 100% of the available time.

And of course at $DAYJOB J-core community interfacing eats 100% of the available
time (mailing list, two twitter accounts, and a blog I should be updating before
you even get into youtube videos or conference presentations). J-core chip
design eats 100% of the available time. J-core BSP work eats 100% of the
available time. J-core website stuff (documentation, news.html, doing a FAQ
there, updated VHDL and C toolchain builds) eats 100% of the available
time. (And we published our GPS implementation at
https://github.com/j-core/gnss-baseband which eats 200% of the available time.)

Reverse engineering and simplifying AOSP would eat 100% of the available time.
Doing a posix container build environment for android (at this rate
https://www.theverge.com/2020/4/24/21233661/macos-arm-processor-transition-apps-developers-catalyst-wwdc
will unify ios and macos and thus self-host ios development first) would eat
100% of the available time.

> (this doesn't affect Android directly because i just run all the tests
> in my runner. but it may explain how i've managed to send you a few
> patches that didn't pass their tests... i only learned this today
> when, out of curiosity, i did a clang coverage run to see what the
> test coverage looks like and was surprised to see apparently missing
> coverage for tests i'd helped write myself!)

I'm all for improving the tests, test coverage, test reliability, and so on. A
PROPER test doesn't just test success, it tests EVERY ERROR PATH, which is a can
of worms I have not yet opened but is on my todo list. But there's missing
infrastructure: I need regex matches on output, and multiple test environments to
check TEST_HOST against: two fedora based (fedora and suse), two debian based
(ubuntu and devuan), two busybox based (alpine and buildroot), two bionic based
(AOSP and static ndk build)... but that's all queued up behind checking in
scripts/root/tests so I can "scripts/mkroot CROSS=all tests".
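
By "regex matches on output" I mean a helper along these lines (hypothetical:
the name and argument order here are made up, nothing like this is in the tree
yet):

  # Hypothetical testmatch(): like a normal test, except the expected
  # output field is an egrep pattern instead of a literal string.
  testmatch()
  {
    NAME="$1" CMD="$2" PATTERN="$3" INPUT="$4"

    OUT="$(printf '%s' "$INPUT" | eval "$CMD" 2>&1)"
    if printf '%s\n' "$OUT" | grep -Eq "$PATTERN"
    then
      echo "PASS: $NAME"
    else
      echo "FAIL: $NAME"
      [ -n "$VERBOSE" ] && printf 'got: %s\n' "$OUT"
    fi
  }

  # Example: the load average changes every run, so match a pattern.
  # testmatch "uptime" "uptime" 'load average: [0-9.]*' ""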

But I need to upgrade toys/other/timeout.c to add "-o 20" so if the child
process doesn't produce any output for 20 consecutive seconds it gets killed
(the child process in this case being the emulator). If you were wondering why
that output reblocker command was still on my todo list, it's because there's a
related whatsis in the todo list, and maybe it would make more sense there
instead? Dunno, I dash off notes recording the problem and worry about designing
a proper solution later...
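
The behavior I want is basically an inactivity watchdog on the child's output,
i.e. something like this shell sketch (the real thing goes in timeout.c;
"emulator-command" is a placeholder):

  # Kill the child if it produces no output for 20 consecutive seconds.
  mkfifo out.pipe
  emulator-command >out.pipe 2>&1 &
  CHILD=$!
  while read -t 20 LINE      # read -t needs bash or a shell that has it
  do
    printf '%s\n' "$LINE"
  done <out.pipe
  kill "$CHILD" 2>/dev/null  # loop ended on timeout or child exit
  rm -f out.pipe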

Rob

