[Aboriginal] Aboriginal. Wow! and Thanks!

Rob Landley rob at landley.net
Sat Jun 25 16:15:41 PDT 2011


On 06/24/2011 11:17 PM, Paul Kramer wrote:

>>> * redis * mongodb (perfect for engineering disparate data) * sinatra
>>> (very thin web framework) * node.js (this will freak you out...
>>> server side javascript) --- seriously this is something to pay
>>> attention to... it's super lightweight evented I/O * but wait...
>>> coffee-script... a blend of python and ruby that produces
>>> javascript * and of course ruby (and by the way... the ruby folks
>>> are all into redis, mongodb, sinatra, node.js) * ruby DSLs... because
>>> of the flexible syntax and language features it's great for this...
>>
>> I've heard of some of this but never used any of it.
> 
> you know... folks that are coding down at the bare metal usually
> don't have much use for the web stuff... at least my friends
> that are firmware engineers.

Last week I got gigabit ethernet working in a MAC-MAC configuration
(with no PHY on either end), which involved going down to the lab and
having the nice man with the oscilloscope hook the thing up and figure
out that the Netra was talking gigabit but the switch was talking
100baseT, and being able to fix that.

I tend to like having my hand held when it gets down to the hardware
level, because if I screw up badly enough you can't restore from backup.
 (And when I'm funding my own experiments, even blowing out a $20 part
often puts an end to that line of inquiry.)

> the reason I'm into this stuff, is that when setting up environments for teams 
> there is going to be a bunch of data generated from builds, tests, and
> all the other stuff going on related to priorities, decisions, how-to stuff...
> 
> and some of this technology can help pull that data together and 
> present it in multiple formats.
> 
> for the longest time, most of the engineers' tools have been disparate
> command line utils. then stuff started moving to the web in the mid-90s.
> 
> now the landscape is littered with fat monolithic java web apps... for
> static analysis, defect tracking, wikis, build and test automation
> 
> it's all SUPER overkill
> 
> what I've wanted to do for a long time is write the utils so we can
> get the data from the web UI or the CLI... 
> 
> until recently that was painful to implement; now it's getting much
> thinner... folks are finally starting to understand that not everything
> has to be a web app backed by mysql

My first programming environment was Commodore 64 BASIC, later compiled
with "Blitz!".  (Apparently the exclamation point was part of the name.)
 That had a simple line-number-based text editing API built into the
ROM, with this strange line continuation thing (if you typed off the end
of a line it inserted a new line, moving the screen contents below it
down if necessary, and then _remembered_ that new line was part of the
previous one.  It would only do this once per line, and if you did it at
the bottom of the screen and then hit backspace you'd trigger a bug that
would print out an error message and lock up the keyboard.)

My first C programming environment was Turbo C for DOS.  (Later upgraded
to Borland C++.  Both copies a friend gave me.  And a loaned "how to
program in Turbo C" book by Herbert Schildt that I still have; the
definition of "loan" there is a bit shaky.  I was a teenager with no
money, of course, and went over to a friend's house and used his computers.)

The sad part is I've never found an IDE on Linux that's as good as Turbo
C for DOS was.  Everybody keeps going "just use Emacs, if you learn to
program in Lithp you can turn it into your perfect IDE with less than
three years of full-time work!"  I used microemacs on the Amiga, I got
it out of my system; I don't _like_ vi but it's _there_.  When I
introduce new people to this stuff I just give 'em mousepad (xfce's
notepad equivalent), but that doesn't work in a terminal window or
without using the mouse to copy and paste.

I used qedit for years on DOS because it had the same key bindings as
Turbo C (all based on WordStar, apparently), but joe on the Sun
workstations at Rutgers was a buggy piece of crap (when it wasn't
dumping core it was failing to prevent ctrl-k-ctrl-s from suspending all
console output), so I forced myself to learn enough vi to get along.
Doesn't mean I like it...

>>> although there is a bit of misuse... * github * resque (screw hudson
>>> and all those java heavyweight narrow-minded continuous integration
>>> bullshit... anyone that has a clue has a killer pre-integration
>>> setup)
>>
>> Continuous integration means never having to cut a release.  I have a
>> whole rant about why that's a horrible idea.  Releases are GOOD things.
> 
> Ahhh... my pet peeve too. Slightly different angle. When I get an environment
> setup and we've got best-practices down... I set it up so that every build
> is a release-candidate. Now it may not meet entry criteria, but mechanically
> it's a release-candidate. 

Every build should compile and run so as not to screw up "git bisect",
but calling halfway through a re-engineering of how something works a
release candidate is disingenuous.  Unless you force yourself to check in
a month's work as One Big Lump, which is bad for different reasons.
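
For what it's worth, "git bisect run" automates the bisect search as
long as every commit at least builds.  A minimal sketch of the kind of
test script it wants (the make invocation and smoketest.sh here are
hypothetical stand-ins for whatever the tree actually uses):

  #!/usr/bin/env python3
  # Driven by: git bisect run ./bisect-test.py
  # Exit 0 = good commit, 1 = bad commit, 125 = can't test, skip it.
  import subprocess, sys

  # If this commit doesn't even compile, tell bisect to skip it
  # rather than blaming it for the bug.
  if subprocess.call(["make", "-j4"]) != 0:
      sys.exit(125)

  # Otherwise the smoke test's exit status decides good vs. bad.
  sys.exit(subprocess.call(["./smoketest.sh"]))

The exit status is the whole protocol, which is why commits that don't
compile wreck the search.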

> So a lot about continuous integration is how people use it... For example
> when I set it up for a compiler team... I said all these builds/tests we do
> during the day are just to help us understand where we are before the nightly runs...

Checking in something you haven't tested is silly.  Setting up a
bureaucratic policy where we think running the test suite against every
checkin actually MEANS something is not necessarily less silly.  (Memo:
the _interesting_ failures are the ones that force you to add something
new to your test suite.)

> just indicators.
> 
> But... On the better teams I get to work with... the sweet spot is pre-integration. 
> So I'll set up environments so that folks can farm off their builds to run locally
> on their machine or on a build farm. I leave it up to the developer to do as
> much or as little as they want, but I have the environment in place that will
> ensure goodness when they check in.
> 
> I've been lucky in that I've worked on teams that have used distributed 
> source code control tools all through the '90s... Sun's Teamware/Codemgr.
> Larry McVoy wrote the original in Perl during early Solaris development, 1991/92

Sun being used as a _good_ example of something.

I need a moment...

I have two todo items:

1) Make a cron job that builds all targets for every commit and runs the
smoketest scripts on all of 'em.

2) Come up with an expanded test suite that tests every config knob and
knows what to expect.

I note that #2 is a lot more valuable than #1.
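
A back-of-the-envelope sketch of what #1 might look like (the target
list and script names are hypothetical placeholders for whatever the
build scripts actually define):

  #!/usr/bin/env python3
  # Build every target, smoketest the result, report what broke.
  # TARGETS and the two scripts are made-up placeholders.
  import subprocess

  TARGETS = ["i686", "x86_64", "armv5l", "mips", "powerpc", "sparc"]

  failures = []
  for target in TARGETS:
      if (subprocess.call(["./build.sh", target]) != 0 or
              subprocess.call(["./smoketest.sh", target]) != 0):
          failures.append(target)

  print("failed:", " ".join(failures) if failures else "none")

Something like #2 is where the real work is: each config knob needs
its own expected output, not just "did it compile".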

>>
>>> * and finally RESTful APIs --- now, folks have been doing this
>>> awhile... but really sloppy...
>>
>> Sounds like a bad marketing term.
> 
> HAHAHA. 
> 
> basically it is this... the URL has semantic meaning, and the response you
> get back from the server has the business logic in additional URLs the 
> client can use or discard... that's an oversimplification.... 
> 
> another kind-a-sort-a way to think about it... when unix/linux people
> work, they basically have the file system in their head, and the paths have a meaning
> and purpose to what is contained in the directory
> 
> so a url like http://fubar.com/users/joeblow/edit  means we'll be editing
> the joeblow record in the users collection...
> 
> 
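In the thin frameworks Paul mentioned up top, that URL-to-meaning
mapping is spelled out in the routes.  Sinatra is Ruby, so here's the
same shape sketched in Python's Flask instead, with a made-up handler
and in-memory data, purely to show the idea:

  # Semantic URLs as routes, Flask standing in for Sinatra here.
  # The users dict is a hypothetical stand-in for real storage.
  from flask import Flask, request

  app = Flask(__name__)
  users = {"joeblow": {"name": "Joe Blow"}}

  # GET /users/joeblow/edit means "edit the joeblow record in users".
  @app.route("/users/<name>/edit", methods=["GET", "POST"])
  def edit_user(name):
      if request.method == "POST":
          users.setdefault(name, {}).update(request.form)
      return users.get(name, {})

  if __name__ == "__main__":
      app.run()
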
>>
>>> there is an aspect called hypermedia
>>> that folks are not using and it's very powerful...
>>
>> Also sounds like a bad marketing term.
> 
> HAHAHA yeah... So all that means is... just like the url above
> that the client (could be command line or some gui) provides,
> the server will return data in say... XML or JSON format, and 
> along with that it will return other URLs the client can use 
> to further operate on the users collections. So the client
> does not have to know the URLs... the server... web service
> provides those to the client. This is goodness as it is loosely
> coupled... the server can change implementation, and the client
> is unaffected, because the server is providing the api destination
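
A concrete sketch of that, with the server, URLs, and JSON shape all
invented for illustration: the client knows one entry point, and every
next step comes out of the response itself.

  #!/usr/bin/env python3
  # Hypermedia-style client: follow the links the server returns
  # instead of hardcoding URLs.  Server and JSON shape are made up.
  import json
  from urllib.request import urlopen

  def get(url):
      with urlopen(url) as resp:
          return json.load(resp)

  # The only URL baked into the client is the entry point.
  doc = get("http://fubar.com/users/joeblow")
  print("record:", doc["data"])

  # Suppose the response looks like:
  #   {"data": {...}, "links": {"edit": "http://fubar.com/..."}}
  # The server can reorganize its URLs and this still works.
  edit_url = doc["links"]["edit"]
  print("would edit via:", edit_url)
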
>>
>>> simply your web
>>> request not only provides you with your request data... it also
>>> provides you with what routes/actions to take next... this is a thing
>>> of beauty... why... create little web services without a UI... and
>>> then anyone can create CLI or javascript widgets to interact with the
>>> web services...
>>
>> Because every time you run a command in a command line it tries to guess
>> what you want to do next?
> 
> Basically the logic is all predetermined and served up by the web service.
> 
>>
>>> huge... imagine you are working on a project that
>>> builds 6 variants of your codebase... you commit in git... and a
>>> trigger then launches a build on 6 remote servers to crank out all the
>>> variants..... think resque, RESTful API... huge potential.. with git
>>> we can share code easily... what if we could share everything easily?
>>> we need a service to do that... RESTful over http
>>
>> So it assumes you have a half-dozen servers lying idle and you'd like to
>> speculatively tie them up every time you breathe?
> 
> I'm oversimplifying... but let's say you have a cronjob that wakes up
> every so often, and if something has changed, it performs some action...

I do.  It pulls the busybox, linux, uClibc, and qemu repositories and
tries each one against the current build scripts.  (Bits of it are in
more/cronjob.sh.)  It breaks too often to be useful.
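
The shape of that job, roughly (the repo list and build invocation are
placeholders; the real logic lives in more/cronjob.sh):

  #!/usr/bin/env python3
  # Poll upstream repos, rebuild only the ones that changed.
  # REPOS and try-build.sh are hypothetical placeholders.
  import subprocess

  REPOS = ["busybox", "linux", "uClibc", "qemu"]

  def head(repo):
      return subprocess.check_output(
          ["git", "-C", repo, "rev-parse", "HEAD"]).strip()

  for repo in REPOS:
      before = head(repo)
      subprocess.call(["git", "-C", repo, "pull", "--quiet"])
      if head(repo) != before:
          subprocess.call(["./try-build.sh", repo])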

> these web servers are super thin.... you could launch one and it might
> observe your workspace and when you commit a change it clones
> your workspace and does an opt and debug build... or farms that off
> to another machine


It runs on the machine upstairs, a quad processor machine with 8
gigs of RAM, and it takes half a day to complete the full set of builds
on all architectures.  (And I haven't added config knob tests yet.)

I note that when I get going, I can do a half-dozen checkins an hour...

> for an individual this may or may not be useful, but in a team of 
> folks it may... especially if the change was a bug that had to 
> be propagated into many lines of development, you could 
> have a web service that grabs the bug fix and spins builds, runs tests

I'm all for improved testing and optimization, but I've seen too many
_attempts_ to get too excited about any of them.  They have an uphill
battle to prove their worth to me.

>> This is not an environment I normally work in.
> 
> if you have time, it would be cool if you could explain your preferred work
> environment, tools, workflows and stuff that drive you nuts

I wouldn't say "preferred", just where I stopped fighting it.  I just
try to get the work environment out of my way.

I take a laptop everywhere, to coffee shops and restaurants and stuff.
(If I try to work at home I get mobbed by cats.)  I switched to laptops
as my primary dev environment about 10 years ago, and never looked back.
 Currently using my shiny new Acer Aspire One D255E-1802, with the
memory upgraded to 2 gigs.  Internet through my phone (nexus one hooked
to t-mobile, USB tethering).  I have wireless at home but have never
managed to get networkmangler to work with it.  I can connect to the
wireless if I kill networkmangler and do iwconfig and ifconfig by hand,
but it's too much work unless I have a really big file to transfer.
(Last time I did it was to cut the 1.0.2 release.)

I edit code in vim because it's there.  Yes, I'm aware that to fix it
you have to "ln -sf vimrc /etc/vim/vimrc.tiny" as part of your system
setup.  And I wrote a toy to take the darn periodic sync() out as an
LD_PRELOAD.  But any machine I sit down at, it's there.  Any distro I
switch to: it's there.  It sucks, but it sucks ubiquitously.

I compile stuff from the command line.  I tend to have lots of open
terminal tabs, and multiple terminal windows, on multiple desktops.

I check stuff into mercurial given a chance, because git's user
interface is designed to piss off everybody who didn't implement it.
Git has a horrible UI.  Yes I have to cope with it on a regular basis,
but DUDE.

I used to use gnome, because I'd never tried KDE.  Then Kubuntu 4.0 was
so UTTERLY HORRIBLE that I went back to gnome for six months (it was
about as bad: bloated, overcomplicated crap with way too many layers, and
I spent all my time fighting the automation, plus the actual
responsiveness was like going from Chrome to Firefox: latency, lag, and
bloat, oh my...).

Then I found xfce/xubuntu.  I like it.  It's simple, minimal, and it
stays out of my way.  I'm not recommending it to anybody; I'm using the
third most popular desktop on the third most popular operating system, I
am a rounding error.  I don't care.

I'm using thunderbird for email.  I hate it: the sucker is a single
threaded program that blocks on network access (which it starts
asynchronously all the time) and I can't TYPE for up to 30 seconds while
it goes off and diddles its files.  I was previously using kmail, but
as with Konqueror it was tied to KDE and went down with the ship.
(Luckily Apple peeled the KHTML engine out of Konqueror and turned it
into WebKit, which is behind Safari and Chrome.  Konqueror itself is
still bundled with KDE, and so is dead to me.)

I am not fond of most tools.  I do not generally recommend them.  When I
do get happy with one, future releases tend to screw it up.  But you
work with what you've got...

Rob


