[Toybox] xargs
Rob Landley
rob at landley.net
Thu Nov 14 20:32:07 PST 2019
On 11/14/19 3:15 PM, enh via Toybox wrote:
> okay, finally found time to work out why the new xargs tests fail on
> Android. (as you can probably guess from the patch to differentiate
> the error messages again!)
So the build is working and it was just the test failing?
> it turns out that the problem is to do with the old argument about
> "what's the kernel limit, really?". remember that thread where Linus
> said "I suspect a 128kB sysconf(_SC_ARG_MAX) is the sanest bet"?
> (https://lkml.org/lkml/2017/11/1/946)
Only if we care about running on kernels before
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b6a2fea39318
which was 12 years ago now, so I don't think we do? Everything since has had a
larger limit.
> bionic made that change, which means it can't pass your two new tests
> (because they require a larger limit than that).
>
> glibc still uses max(rlimit(stack)/4, ARG_MAX) like we used to, so they pass.
>
> musl seems to just use ARG_MAX like bionic, so i assume the two new
> tests are broken on musl too?
>
> i can easily flip bionic back, but since you were the person most in
> favor of this change in the first place, i'm assuming you'd rather
> change the tests? (though i'm not sure what _to_. arithmetic on
> `$(getconf ARG_MAX)`?)
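The arithmetic hinted at above might look something like this (a sketch only; the 2048-byte headroom is an arbitrary assumed margin, not anything toybox actually uses):

```shell
#!/bin/sh
# Hypothetical sketch: derive a usable payload size from getconf ARG_MAX.
# The exec'd environment counts against the limit, so subtract its size
# plus an assumed safety margin for pointers/padding.
limit=$(getconf ARG_MAX)      # e.g. 131072 on bionic after the change
env_bytes=$(env | wc -c)      # rough size of the inherited environment
usable=$((limit - env_bytes - 2048))
echo "$usable"
```

The catch the rest of this mail gets into: whatever number this produces, it still can't describe a limit smaller than the largest single argument the kernel will accept.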
I hate to say it, but I think Linus' suggestion is wrong.
The kernel can run a command with a single argument that's 128k-1 bytes long, but
when _SC_ARG_MAX returns 128k xargs can't, and there's no "split the line"
fallback: it just can't do it. That does kind of bother me.
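That boundary can be demonstrated from a shell (a sketch, assuming Linux, where each individual exec string is capped at MAX_ARG_STRLEN = 32 pages counting the trailing NUL, so the longest single argument is 32*PAGESIZE-1 bytes, i.e. 131071 on 4 KiB page systems):

```shell
#!/bin/sh
# Largest single argument Linux will exec: 32 pages minus the NUL terminator.
max=$(( $(getconf PAGESIZE) * 32 ))
ok=$(printf "%0$(( max - 1 ))d" 0)   # largest argument the kernel accepts
/bin/echo "$ok" >/dev/null 2>&1 && echo "fits"
bad=$(printf "%0${max}d" 0)          # one byte over: exec fails with E2BIG
/bin/echo "$bad" >/dev/null 2>&1 || echo "too long"
```

So an xargs that believes a 128k _SC_ARG_MAX can never emit that 131071-byte argument, even though a plain exec of it succeeds.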
Right now, rlimit(stack)/4 with a ceiling of 6 megs is what the kernel is
actually doing. The 6 megs comes from kernel commit da029c11e6b1 capping the
limit at 3/4 of _STK_LIM, and _STK_LIM is 8*1024*1024 in
include/uapi/linux/resource.h.
The 1/4 stack limit was in the original 2007 commit; Kees Cook added a
gratuitous "How about 6 megs! There are no embedded systems with less than 6
megs! I define what 'sane' is!" extra limit that made NO sense and triggered the
above thread with Linus (it's not a fraction of the memory the system has, not a
fraction of available memory, merely a number that Felt Right To Some Guy). But
it hasn't moved for 2 years now, and older kernels wouldn't really be negatively
impacted by the 6 megabyte ceiling. And that covers our 7 year time horizon (12>7).
I think the fix here is to change what Bionic returns? If they break it again,
we can change it again. You can't future proof against gratuitous nonsensical
changes coming out of the blue. (A process calling exec() can trigger the OOM
killer, sure. A process filling up its stack can trigger the OOM killer. Why the
special case here?)
That said, I can yank the test if that's easier. I don't want to adjust the test
because the _point_ of the test was to see if we can handle The Largest Possible
Argument, and when we can't it did its job and found an issue. If we want to
accept not handling the largest possible single argument the kernel can take,
the answer is to yank the test, not test something smaller that isn't a real
boundary condition.
Rob