[Aboriginal] Fwd: Re: Fwd: Aboriginal. Wow! and Thanks!

Rob Landley rob at landley.net
Sat Jun 25 15:54:45 PDT 2011



-------- Original Message --------
Subject: Re: Fwd: Aboriginal. Wow! and Thanks!
Date: Sat, 25 Jun 2011 11:38:07 -0500
From: Rob Landley <rob at landley.net>
To: Paul Kramer <kramerica at me.com>

On 06/24/2011 11:28 PM, Paul Kramer wrote:
> Rob -- besides solving the autotools/make problem... here are the
> other things I'm interested in
> 
> * packaging (RPM, APT... mediocre at best, I want to do something
> that makes sense in the 2010s)

I'm sure I've ranted about this before, but a quick check of my blog
isn't finding it.

If you read that aboriginal project history file, you'll notice that I
have a buildroot rant: it became an accidental distro.  Then
OpenEmbedded happened, when some buildroot developers decided to start
over because they could do better, and they invented yet another
packaging tool called "bitbake", which eventually forked off and became
its own project, with OpenEmbedded being a bitbake repository.
Literally, OpenEmbedded is a bunch of bitbake build description files,
hundreds of them, for various packages.

The "ipkg" format started like this.  Portage started like this.
Slackware has been using "tarballs with metadata" for decades.

When you can name a half-dozen packaging tools off the top of your head,
each with one or more associated repositories of the 43,000 packages
Debian had in its repo last time I bothered to check (and that was a
while ago, I'm sure it's more now)... It's not a simple problem.

> * for every web application we put up on the web (defect
> tracking, wiki, build server... anything)... I want everything to be
> accessible via the command line and the web gui. Once it's available
> thru the CLI, then folks can hook into their editors, script,
> whatever they want

Cool.  How does security figure into this?  (Personally I just tunnel
everything through ssh.  Once upon a time I created an ssh-based vpn,
some of which might still be up at http://dvpn.sf.net/old or something.)

> * defect tracking system... instead of a bugzilla for example... I
> want a tool kit that can be used to construct a defect tracking
> system, and provide some reference implementations....

I plead incompetence here.  I've never been good with bug tracking
systems.  It was one of the big sources of conflict in the busybox
development thing: they had mantis, then the site got hacked through
mantis, and they switched to a bugzilla I've never used because the
OSDL guys would admin it security-wise.  Mark set up redmine for
impactlinux and I spent a weekend filling it with info and then never
looked at it again...

> * continuous integration... as practiced and implemented by the
> majority is wrong... again this should be a tool kit with reference
> implementations

Ok, "continuous integration" sounds great but I want to hurt people when
it comes up.  First you're doing git bisect up front which is great if
your test suite means anything, but any _real_ project will  hit more
bugs in the field than during development, so you still have to cope and
all this ivory tower BS about your process somehow helping matters is
just silly.

What you REALLY need is "release early, release often", as described in
this MARVELOUS talk from an ex-debian maintainer:

  http://video.google.com/videoplay?docid=-5503858974016723264

Honestly, go watch it.  THAT is what continuous integration is trying
to achieve, but instead it's used as an excuse NOT to have releases by
projects like eglibc, which renders those projects of zero interest to
me.

> * wikis ... wrong... I want people to be able to work in VIM or Emacs
> when working not only with the wiki but with any engineering web app

Wikis came up at my 2008 OLS talk on "where documentation hides".
They're a great place to archive a slush heap of ideas, but they don't
address editing at all.  Wikipedia has no index.  It has no plot.  There
IS no coherent order to read it in, and there's a lot of duplication
between articles, and this is inherent in it being a wiki.

> * whatever web technology I use, I want to be as close to the metal
> as possible... the problem with application developers is that they
> are framework crazy.... next thing you know... they go overboard like
> the Java community and turn hello world into 5,000 lines of code...

That's actually a flaw in the Java language:

  http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.html

It's part of it being caught between fully static (C) and fully dynamic
(python, ruby, lua, and other scripting languages).  Java combines
static typing with dynamic memory management, tries to work around the
mismatch with interfaces, and the result is massive code generation and
bloat.

At least it's not C++.

> sound
> crazy... There is a popular maven build tool... and if we wrote a
> hello world program in Java, and then ran maven... it would
> download... what seems like hundreds of packages off the net to get
> all the dependencies and plugins for the maven build tool and
> whatever is needed by hello world... it's insanity....

Yup.  Environmental dependencies have a cost.

I covered that in "The Art of Unix Programming".  (In theory I am not a
co-author, I just edited that book.  In practice it was 9 chapters when
I started arguing with Eric, and something like 20 when we stopped arguing.)

Rob

> Begin forwarded message:
> 
>> From: Paul Kramer <kramerica at me.com>
>> Date: June 24, 2011 9:17:04 PM PDT
>> To: Rob Landley <rob at landley.net>
>> Subject: Re: Aboriginal. Wow! and Thanks!
>> 
>> 
>> On Jun 19, 2011, at 1:28 PM, Rob Landley wrote:
>>> 
>>> I'm a longstanding "jack of all trades, master of none".
>>> Currently I'm watching a youtube video of Jeri Ellsworth
>>> interviewing Bill Herd about Phase Locked loops.  (I'm not a
>>> hardware guy, so I haven't got the background to properly follow
>>> this, but it's fascinating...)
>> 
>> Me too. People ask me what my strengths are, expecting me to say
>> something technical, and I tell them: problem solving... SDLC issues,
>> workflow issues, making stuff that should be easy, easy.
>> 
>>>> 
>>>> 
>>>> * redis
>>>> * mongodb (perfect for engineering disparate data)
>>>> * sinatra (very thin web framework)
>>>> * node.js (this will freak you out... server side javascript) ---
>>>>   seriously this is something to pay attention to... it's super
>>>>   lightweight evented i/o
>>>> * but wait... coffee-script... a blend of python and ruby that
>>>>   produces javascript
>>>> * and of course ruby (and by the way... the ruby folks are all into
>>>>   redis, mongodb, sinatra, node.js)
>>>> * ruby DSLs... because of the flexible syntax and language features
>>>>   it's great for this...
>>> 
>>> I've heard of some of this but never used any of it.
>> 
>> you know... for folks that are coding down at the bare metal, they 
>> usually don't have much use for the web stuff... at least my
>> friends that are firmware engineers.
>> 
>> the reason I'm into this stuff is that when setting up
>> environments for teams there is going to be a bunch of data generated
>> from builds, tests, and all the other stuff going on related to
>> priorities, decisions, how-to stuff...
>> 
>> and some of this technology can help pull that data together and
>> present it in multiple formats.
>> 
>> for the longest time, most of the engineers' tools have been
>> disparate command line utils. then stuff started moving to the web in
>> the mid-90's.
>> 
>> now the landscape is littered with fat monolithic java web apps...
>> for static analysis, defect tracking, wikis, build and test
>> automation
>> 
>> it's all SUPER overkill
>> 
>> what I've wanted to do for a long time is write the utils so we
>> can get the data from the web UI or the CLI...
>> 
>> until recently that was painful to implement, now it's getting
>> much thinner... folks are finally starting to understand that not
>> everything has to be a web app backed by mysql
>>> 
>>>> although there is a bit of misuse...
>>>> * github
>>>> * resque (screw hudson and all that java heavyweight narrow-minded
>>>>   continuous integration bullshit... anyone that has a clue has a
>>>>   killer pre-integration setup)
>>> 
>>> Continuous integration means never having to cut a release.  I
>>> have a whole rant about why that's a horrible idea.  Releases are
>>> GOOD things.
>> 
>> Ahhh... my pet peeve too. Slightly different angle. When I get an
>> environment set up and we've got best practices down... I set it up
>> so that every build is a release candidate. Now it may not meet entry
>> criteria, but mechanically it's a release candidate.
>> 
>> So a lot about continuous integration is how people use it... For
>> example when I set it up for a compiler team... I said all these
>> builds/tests we do during the day are just to help us understand
>> where we are before the nightly runs...
>> 
>> just indicators.
>> 
>> But... On the better teams I get to work with... the sweet spot is
>> pre-integration. So I'll set up environments so that folks can farm
>> off their builds to run locally on their machine or on a build
>> farm. I leave it up to the developer to do as much or as little as
>> they want, but I have the environment in place that will ensure
>> goodness when they check in.
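>>
>> the cheapest version of that is just a commit hook that runs the
>> build and tests before git will take the commit... something roughly
>> like the sketch below, where "make all" and "make test" are just
>> placeholders for whatever the real build-farm submission would be:
>>
>>   #!/usr/bin/env python
>>   # .git/hooks/pre-commit (must be executable) -- sketch only;
>>   # the make targets stand in for the real pre-integration checks.
>>   import subprocess, sys
>>
>>   def run(cmd):
>>       print("pre-commit: running " + " ".join(cmd))
>>       return subprocess.call(cmd)
>>
>>   # refuse the commit unless the build and the tests both pass
>>   if run(["make", "all"]) or run(["make", "test"]):
>>       print("pre-commit: build or tests failed, commit refused")
>>       sys.exit(1)   # nonzero exit makes git abort the commit
>>
>> same idea scales up to handing the work to the farm instead of
>> running it all locally.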
>> 
>> I've been lucky in that I've worked on teams that have used
>> distributed source code control tools all thru the 90's... Sun's
>> Teamware/Codemgr. Larry McVoy wrote the original in perl during the
>> early Solaris development, 1991/92
>> 
>>> 
>>>> * and finally RESTful APIs --- now folks have been doing this
>>>> awhile... but really sloppy...
>>> 
>>> Sounds like a bad marketing term.
>> 
>> HAHAHA.
>> 
>> basically it is this... the URL has semantic meaning, and the
>> response you get back from the server has the business logic in
>> additional URLs the client can use or discard... that's an
>> oversimplification....
>> 
>> another kind-a-sort-a way to think about it... when unix/linux
>> people work, they basically have the file system in their head, and
>> the paths have a meaning and purpose to what is contained in the
>> directory
>> 
>> so a url like http://fubar.com/users/joeblow/edit  means we'll be
>> editing the joeblow record in the users collection...
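>>
>> spelled out, the usual convention is that the HTTP verb plus the path
>> say what you're doing to what... this is just the generic REST idea,
>> nothing specific to any one tool:
>>
>>   GET    /users           -> list the users collection
>>   POST   /users           -> add a new record to it
>>   GET    /users/joeblow   -> fetch joeblow's record
>>   PUT    /users/joeblow   -> update joeblow's record
>>   DELETE /users/joeblow   -> remove joeblow's record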
>> 
>> 
>>> 
>>>> there is an aspect called hypermedia that folks are not using
>>>> and it's very powerfull...
>>> 
>>> Also sounds like a bad marketing term.
>> 
>> HAHAHA yeah... So all that means is... just like the url above
>> that the client (could be command line or some gui) provides, the
>> server will return data in say... XML or JSON format, and along
>> with that it will return other URLs the client can use to further
>> operate on the users collection. So the client does not have to
>> know the URLs... the server... web service provides those to the
>> client. This is goodness as it is loosely coupled... the server can
>> change implementation, and the client is unaffected, because the
>> server is the one providing the api destinations
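>>
>> as a made-up example of what one of those responses might look like
>> (the field names here are arbitrary, not any particular standard):
>>
>>   GET http://fubar.com/users/joeblow
>>
>>   {
>>     "name":  "Joe Blow",
>>     "email": "joe@example.com",
>>     "links": {
>>       "edit":   "http://fubar.com/users/joeblow/edit",
>>       "delete": "http://fubar.com/users/joeblow",
>>       "all":    "http://fubar.com/users"
>>     }
>>   }
>>
>> a dumb client just follows whatever "links" the server hands back, so
>> when the server rearranges its URLs tomorrow the client never notices
>>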
>>> 
>>>> simply put, your web request not only provides you with your
>>>> request data... it also provides you with what routes/actions to
>>>> take next... this is a thing of beauty... why... create little web
>>>> services without a UI... and then anyone can create CLI or
>>>> javascript widgets to interact with the web services...
>>> 
>>> Because every time you run a command in a command line it tries
>>> to guess what you want to do next?
>> 
>> Basically the logic is all predetermined and served up by the web
>> service.
>> 
>>> 
>>>> huge... imagine you are working on a project that builds 6
>>>> variants of your codebase... you commit in git... and a trigger
>>>> then launches a build on 6 remote servers to crank out all the
>>>> variants..... think resque, RESTful API... huge potential...
>>>> with git we can share code easily... what if we could share
>>>> everything easily? we need a service to do that... RESTful over
>>>> http
>>> 
>>> So it assumes you have a half-dozen servers lying idle and you'd
>>> like to speculatively tie them up every time you breathe?
>> 
>> I'm oversimplifying... but let's say you have a cronjob that wakes
>> up every so often, and if something has changed, it performs some
>> action...
>> 
>> these web servers are super thin.... you could launch one and it
>> might observe your workspace and when you commit a change it
>> clones your workspace and does an opt and debug build... or farms
>> that off to another machine
>> 
>> for an individual this may or may not be useful, but in a team of
>> folks it may... especially if the change was a bug fix that had to be
>> propagated into many lines of development, you could have a web
>> service that grabs the bug fix and spins builds, runs tests
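>>
>> the polling version of that really is tiny... something like the
>> sketch below, where the workspace path, the variant names and the
>> "make" invocation are all placeholders (and the build step could just
>> as easily be an HTTP POST to a build farm):
>>
>>   #!/usr/bin/env python
>>   # sketch: wake up periodically, notice a new commit in a workspace,
>>   # and kick off one build per variant.  Everything here is made up.
>>   import subprocess, time
>>
>>   WORKSPACE = "/home/me/project"      # placeholder path
>>   VARIANTS = ["opt", "debug"]         # placeholder variant list
>>
>>   def head():
>>       return subprocess.check_output(
>>           ["git", "rev-parse", "HEAD"], cwd=WORKSPACE).strip()
>>
>>   last = head()
>>   while True:
>>       time.sleep(60)                  # "wakes up every so often"
>>       current = head()
>>       if current != last:
>>           last = current
>>           for variant in VARIANTS:
>>               # fire off a build per variant; don't wait for it
>>               subprocess.Popen(["make", "BUILD=" + variant],
>>                                cwd=WORKSPACE)
>>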
>>> 
>>> This is not an environment I normally work in.
>> 
>> if you have time, it would be cool if you could explain your
>> preferred work environment, tools, workflows and stuff that drives
>> you nuts
>> 
>>> 
>>> Rob
>> 
> 
> 



More information about the Aboriginal mailing list