[Toybox] Android API 29 and custom Toybox upgrades
Rob Landley
rob at landley.net
Thu Sep 12 14:08:17 PDT 2019
On 9/10/19 7:52 PM, enh wrote:
>> Because that's a pretty common pattern even in dedicated build machines (any
>> build that has $HOSTCC, like the linux kernel), and running the binaries you
>> build is pretty much what a development workstation is _for_. (If nothing else,
>> the test part of the "edit/compile/test" cycle.)
>
> not from an untrusted app context, no. from the shell context, yes.
Is there a way to designate an app "trusted"? (Presumably from the secret
developer mode you tap the special stone 10 times to enter?)
>> That said, I haven't got the pieces in place yet, and assumed the permissions
>> would involve a container anyway. (We talked about that in 2016.)
>>
>> On 9/10/19 2:12 PM, enh via Toybox wrote:
>>> the official security answer was given on
>>> https://b.corp.google.com/issues/128554619#comment4
>>
>> Which is a login screen wanting an @google.com account.
>
> hasn't been my day for giving out useful links. try this:
> https://issuetracker.google.com/128554619#comment4
That one also won't show anything without a login, but accepted an external one.
>> The above link mentions ways to get around it that could be hooked up to
>> binfmt_misc as a loader (heck, people could ship qemu and do it that way with
>> non-native binaries), but I'm still wondering about development containers. Is
>> there a way for an app to say "I'm going to put executable crap here, I know
>> it's unsafe" other than sticking to API 28 until you abandon it and then
>> sticking with old android versions? (Which seems like something to avoid
>> encouraging. So does "you need to root your phone to do this"...)
>
> https://android-developers.googleblog.com/2017/12/improving-app-security-and-performance.html
Apps that don't come from the Apple Store can't use the operating system? (How
is this _not_ a walled garden single point of failure? Did Google _mean_ to
paint an antitrust target on itself and alienate the EU and China?)
*shrug* That's politics, not development, so not my area. I just want a
development container within which you _can_ do this stuff. If the security guys
want to wall it off from being able to access the camera and microphone and
touchscreen, fine. (Maybe you can build an APK in there and sideload it onto
your OS to test that stuff out? I don't see how that's less secure than doing
the same thing from a remote machine...)
>>>> If I adb shell and wget new builds
>>>> of Toybox and Dropbear, a) will they run in adb,
>>>
>>> yes. (at least to the extent they have in previous releases.) the
>>> shell uid and selinux domain are unrelated to app uids and untrusted
>>> app selinux domains, which is why a lot of toybox commands work via
>>> `adb shell` but don't if you're doing fork/exec from an app.
>>
>> wget them to where? It sounds like they won't run out of most writeable space?
>> (Where is the home directory of adb shell?)
>
> /.
Which we still can't ls. (You've explained the reasons for it, which is not the
same as me understanding why it's there.)
> but $TMPDIR is /data/local/tmp. (though since that's not accessible
> to apps, i'm not sure there's any explicit CTS guarantee that that
> actually exists.)
It sounds like the 90+ people who've now favorited that tweet (no longer
swamping my notifications but it was an interesting couple days) could benefit
from a posix-ish development container providing a known documented environment
where you can depend on things to actually exist. (At least within a given version.)
>> P.S. Every time somebody says "it's ok, just cross-compile from a big machine
>> and dial in from that to test it", I think of vax administrators circa 1986 and
>> their ironclad surety that the PC wasn't a _real_ computer. A billion units/year
>> means _all_ the R&D money has gone to phone chipsets instead of PC chipsets for
>> the past decade, and then raspberry pi gets built out of phone hardware the way
>> "blade servers" were made out of laptop hardware when that got all the R&D money
>> after 2004, but the _problem_ is even pi sales are measured in low millions, it
>> took them 5 years to reach 14 million units sold, that's 1/1000th of phone
>> volume. The open source studies generally say about 2% of your users are
>> developers (mostly of the drive-by-patch variety, but we all start somewhere),
>> but that's when all of them have the _opportunity_ to be, without barriers to
>> entry like "buy dedicated hardware to even try it out"...
>
> app development is relatively practical. os development, though...
Um, this is kind of my area. Yes, it can be done.
Rebuilding the entire OS has _always_ taken forever, that's why developers don't
usually do that. Building the whole of Fedora's repository on a high-end modern
laptop via koji would take a week. Building SuSE through OBS is at least as
long. Setting up a new server with gentoo is _expected_ to take all day (and
goddess help you if you select Gnome). ~15 years ago I remember a KDE
developer's blog about driving from New York to Florida for vacation: he set a
full KDE rebuild going on his laptop in the passenger seat and made it to
Florida before it had finished.
Yet somehow, open source developers continue to use laptops for development even
today.
Linux is modular, it has packages you can replace individually. You can also
build stuff and run it out of your home directory while testing, even linking
against some locally built shared libraries with -L and LD_LIBRARY_PATH and
such, or just static linking. You can build and test stuff, and THEN stick it in
the OS.
If AOSP isn't designed to allow that, even in "developer mode", this is a design
problem with AOSP. Throwing hardware at the problem rather than changing the
design seems kind of odd to me. Windows users may wipe and reinstall the OS
every hiccup (at least their tech support people do, I used to have a button
that said "Your mouse has moved, you must reboot windows for this change to take
effect"), but that's not the only way operating systems work. Between major
Android releases that's not even how _android_ works. But that's how android's
_build_ seems to work for the base OS, which is why I keep saying I need to
rewrite it.
(Of course a big reason Knoppix took so much longer to compile than everything
else back in the day is it was mostly written in C++, which takes 10-1000 times
as long to compile as C does. No really:
https://www.quora.com/Why-does-C++-take-so-long-to-compile . I remember back in
2007 the uClibc++ maintainer showed me a 2 line C++ program doing recursive
template instantiation that when compiled would continue to write to the .o file
until the disk filled up. On a 64 bit system anyway, the 32 bit system I tried
it on hit ELF segment limits and errored out around 2 gigabytes. He was also
angry that the C++ linker standard had recently changed to do string matches
in a gratuitously less efficient order, I.E. previously long
"parent.class.child.function" strings had resulted in mangled symbol names in
"function.child.class.parent" order so lookups would fail early when searching
for a symbol rather than going down long common initial strings over and over
before finding a difference at the end. But the Itanium guys hadn't known about
that optimization and wrote a big official-looking Linker ABI standards document
PDF doing it the naive way, and in the absence of any other standard for it the
C++ committee went "oh what a professional looking document" and everybody had an
incompatible flag day change breaking binary compatibility yet again to match
what Itanium said, and thus a previously common optimization was lost slowing
down all C++ compilers by that much more forevermore. He knew a _lot_ about C++.
Garrett Kajmowicz, we had desks next to each other for a year at Timesys. Last I
heard he was working for Google now, at the Pittsburgh campus...)
> much as i'd like to think that at some point oses will stop growing
> and moore's law can catch up, all these decades and i'm not seeing it
> happening.
Because they haven't prioritized it. Build speed, runtime speed, build size, and
runtime size are all things you can separately optimize for. And if you don't
care about them, you don't get them.
This is a "never get involved in a land war in asia" level of classic blunder:
thinking that what you don't value and thus don't put any effort into is
A) hard, B) something nobody else is doing.
Knoppix circa 2003 was smaller than contemporary Linux installs because it had
to fit on a CD, and the result performed better than its contemporaries (CD had
slow I/O bandwidth so they added some clever caching in squashfs that
eventually made it upstream). I don't remember any packages I cared about
missing for the 2-3 years I used it as my main OS. I started down the path of
serious busybox development to free up ~100 megs of space on Knoppix' 700 meg CD
by removing everything from the base OS the FSF had ever touched.
Another big thing Knoppix did is optimize _install_ speed, as in it booted and
ran on arbitrary hardware without asking a zillion questions before you got to a
desktop. It Just Worked. Anybody else _could_ have done that, but nobody else
_had_ before then: they didn't value it, so they didn't put in the work to do it.
When somebody tried, it turned out not to be that hard.
When you care about size, you reduce size. When you don't care about size, bloat
accumulates. Build time is just a different axis of "size" (temporal size). This
is bog standard entropy: dishes pile up in the sink unless you clean them. It's
not a law of nature, it's a thing you do or don't do.
Intel valued price/performance and ignored power consumption, so the P4 had a
giant heat sink that had to be bolted to the case to avoid its weight breaking
the motherboard. ARM valued power consumption/performance and ate the battery
powered device market out from under Intel, which then reversed course to do the
Atom which used 1 watt... in an Intel motherboard using 40 watts. Heck, 20 years
ago Intel wound up with an Arm variant in a legal settlement with DEC
(https://www.intel.com/pressroom/archive/releases/1999/em033199.htm), and
immediately increased the design's performance at the expense of power
consumption to wind up with a power-inefficient arm chip nobody wanted, and
eventually renamed it xscale and sold it to Marvell
(https://www.theregister.co.uk/2006/06/28/intel_mobile_failure/) who of course
immediately started improving the power efficiency...
When I installed devuan with xfce on this laptop a few months back, installing
chromium doubled the size of the operating system's disk footprint. Which is odd
because konqueror was 100k lines of code and browsed the web fine at the time;
and yes mozilla was millions of lines when konqueror was created, because it
_tried_ to be smaller and simpler. But Google doesn't seem to prioritize that,
they culturally throw hardware at the problem instead. (They care about _speed_,
there were the chrome rendering pages faster than a lightning bolt television ads.
And security sandboxing, I still have Scott McCloud's comic about chrome's
initial design. But size or simplicity? Fabrice Bellard, the guy who wrote
tinycc and stopped when he had it building the Linux kernel, recently wrote a
tiny fast javascript engine, see https://news.ycombinator.com/item?id=20411154 .
Has anybody working on Chrome contacted him about it? I would guess no.)
Gentoo takes less time to build than Koji because gentoo devs care how long it
takes to build, and _updating_ gentoo (from source) is a couple hours a week
because it's modular. The packages gentoo guys _really_ dread rebuilding (or at
least publicly complain about all the time) aren't open source projects, they're
corporate developed "source available" packages like chrome and Sun's OpenOffice:
https://www.reddit.com/r/linux/comments/2du5tm/after_19_hours_of_compiling_i_understand_why/
You'll note those two packages are the ones Gentoo's periodically violated its
own policies and provided binary versions of. (And then stopped because doing so
defeats the purpose of gentoo, then started again because building them takes
insanely long, rinse repeat...)
https://forums.gentoo.org/viewtopic-t-1076620-start-0.html
Corporations often throw hardware at a problem rather than considering "slow
builds" to be an issue worth dev cycles to address. (Such projects eventually
collapse under their own weight after a decade or two, and consultants like
myself get called in to help replace them, then they bloat them again when we
leave and the cycle repeats. Depends how much rocket fuel they're willing to
pour into the turtle to keep it airborne before they call it and start over.
I've seen that at _so_ many companies over the years...)
This even applies to toybox's build: when I started toybox development I did the
"simple" thing of only implementing "make all". (I think it just did "main.c
lib/*.c toys/*.c" and then dead code elimination dropped the disabled stuff.)
Then I implemented incremental builds and taking advantage of SMP, and
implemented the single command targets, because it got big enough to be slow and
I did enough builds that I cared how long it took to build. I.E. I put work into
speeding up the build, and I did so after the fact, when I personally started to
care about it. And refactoring what I have now to be smaller and simpler is on
my todo list... :)
Small and simple takes _effort_, but it can be done. And a lot of it is asking
stupid questions like "but _why_ can't AOSP literally use the NDK toolchain as a
prerequisite build package instead of providing its own prebuilt?" and then
rephrasing that question over a period of years while making puppy eyes and
providing bug reports until unnecessary redundancy collapses together. :)
It's entirely possible to do more with less, but the first step is wanting to.
> (sure, i could probably build *and* boot VMS on my phone
> now faster than it can boot its actual os, but no-one would use VMS
> any more.)
In 2010 I got a demo hexagon linux system running xchess and xeyes and multiple
terminal windows and such to fit in 64 megs ram because that's what the board
had. My boss was surprised by how much we could fit in there, with enough room
to spare that we ran a compile in the background during the demo. (Well
Hexagon's a barrel processor that Linux viewed as 6-way SMP, so "demo the
parallelism" was one of the goals.)
At $DAYJOB we're sticking a GPS implementation into an ICE40 that's got 128-256k
of sram and 2 to 8 megs of flash (depending on model). While people _have_ run
Linux in 256k sram (Vitaly Wool's first talk on
https://elinux.org/ELC_2015_Presentations was about that*, the trick was having
the kernel running out of NOR flash and xip cramfs for the binaries, plus he did
it on a nommu system so didn't need to waste memory on page tables; and the
standard https://lwn.net/Articles/251573/ tricks... ha, I forgot pr_info() was
indirectly my fault. Is there a word for "I search for a thing I don't
know/remember and it keeps spitting back my old posts at me?" Other than
"frustrating"?)... Anyway, we're not doing that here because this is a single
dedicated app running fulltime that doesn't need multitasking, filesystem, or
network. But the point is you can fit stuff in surprisingly tiny spaces if you
_try_. Same goes for getting it to build fast, and same goes for avoiding
rebuilding most of it when you only need to replace small pieces of it.
Anyway, when I've said "and then I need to take the AOSP build apart" when I get
to that point in the roadmap (see the 8/22 email you sent me about the bash bug
where I said I need to take AOSP apart and you went "Why?")... I've taken apart
and simplified a _lot_ of build systems over the years. I'm aware there's work
to do here, but it's doable work and it can be done retroactively. A working
build can be simplified quite extensively, especially if you don't dismiss
people who would do the work before they have a chance to participate. (Not me,
you've been very nice to me, but I'm outnumbered 100k-1 by the embedded
community, and there are people out there who are _better_ at it than me. If you
could attract Fabrice Bellard's attention to rewriting AOSP he'd be done in a
month. I'm not especially smart, I'm just _persistent_.)
And of course my reaction to taking apart dozens of _other_ people's builds for
work was to create my _own_ tiny builds, and if I'd been able to Tom Sawyer
somebody else into painting the tinycc+qemu-tcg=qcc fence (almost did once! But
alas,
http://lists.landley.net/pipermail/qcc-landley.net/2017-September/000083.html he
decided to write his OWN DIALECT OF C instead, sigh...) and had written make for
toybox, I could theoretically have mkroot doing the setup to build LFS in ~500
lines of code. Half of which is still architecture support for different kernel
targets...
Sigh. tl;dr (too late): this can be done. Really.
Rob
* The Linux Foundation continues to suck**, and deleted the account that was
_hosting_ all the 2015 ELC videos, so his talk isn't currently available in
video, but the...
** Did you know the Linux Foundation is NOT a 501c3 nonprofit? It's a 501c6
"trade association", the same kind of legal entity as the Tobacco Institute or
Microsoft's old "Don't Copy that Floppy" sock puppet. They did more damage to
Linux development than Microsoft did _before_ Microsoft joined them. Anyway, I
poked Tim Bird about the videos a few days ago but haven't heard back yet...