Divorce Your Linux Admin
========================

*Package Management For Lusers*

If you run Linux on your own machines, you're used to having ``root``
and doing what you jolly well like.  But, if you've ever spent more
than about 10 minutes in a large corporate IT environment, you learn
pretty quickly that ``root`` is hard to get, it takes a ton of
paperwork to get anything done, and you usually have to wait forever.
I've actually had the experience of waiting for 6 weeks to get
permission to install a symlink ... *and I had* ``root``!

There is a good reason for this, of course.  Security threats are very
real, lawsuits are omnipresent, and the Geniuses In Charge (tm) are
writing regulation and audit compliance rules that make root canals
seem like fun.  Information Security people may feel like they are the
IRS of the business, but they perform an important and necessary task:
Saying "No".

So ... is there a better way?  Is there a way to eliminate the
requirement for ``root`` for most day-to-day things we need to do as
users and developers?  Is there a way we can comply with the required
corporate security constraints, but still run our own happy show?  The
answer is a qualified "Yes".

Some things do, and always will, need ``root``: managing devices,
storage, ulimits, and security configuration leap to mind.  But, say,
all you want is a newer version of ``java`` on your servers.  Or
suppose you want a package that isn't part of your standard OS load.
``vi`` is everywhere, but suppose you want to use ``emacs`` instead
(as you should).

You could, of course, download the source for the programs you want,
configure and compile them, and run them, say, out of your home
directory.  Oops ... standard IT corporate security practice is to
never allow a compiler to exist on a production host.  There are ways
around this, but it's fairly painful to have to do that for every
single package you may want.  (If you don't think so, I encourage you
to try and bootstrap the ``gcc`` compiler chain from scratch.  It's a
ton of fun.  No, really, it is ...)
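
For a single package, the classic build-into-``$HOME`` dance looks
something like this (a minimal sketch assuming a GNU autotools-style
package; the package name, version, and paths are made up for
illustration)::

    # Build one package into a private prefix - no root required
    mkdir -p $HOME/tools/src
    cd $HOME/tools/src
    tar xzf sometool-1.2.3.tar.gz
    cd sometool-1.2.3
    ./configure --prefix=$HOME/tools    # install under your own tree
    make
    make install
    export PATH=$HOME/tools/bin:$PATH   # prefer your private copies

Now imagine doing that, by hand, for every tool and every one of its
dependencies.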

Wouldn't it be nice if we could implement package management in
userland in a way that is repeatable, can be automated, and gives us
control of our own universe without having to beg for ``root`` changes
or wait for the vendor to release a new package?  Well,
Sparky, we have the technology to do just that.

It's worth mentioning that the approach outlined below is especially
handy with cloud and on-demand computing.  It makes automating your
deploys pretty simple.  It's also actually pretty handy on your own
machines when you *do* have ``root``.  The less you use superuser, the
less chance you'll screw something up.

.. WARNING:: What follows has been implemented on an experimental
             basis.  It's been tested in only a very limited number
             of systems but seems to work well.  However, you should
             do your own detailed testing before deploying this into
             a production environment.  Failure to do so may result
             in broken systems, hallway snickering, hives, and being
             transferred to your new development shop in Adak, AK.


MacOS Rescues Linux
-------------------

The approach we're going to describe got started in the Mac OS X world.
Back when Apple finally came to their senses and switched their OS to
a Unix base (FreeBSD 4.4), they only partly implemented the shell
tools everyone had come to know and love.  The ``brew`` project got
spun up to allow any OSX user to install the command line applications
they knew and loved from Unix.  ``brew`` is essentially a userland
package management system which can be run and modified without superuser
power.  Many of the ``brew`` packages (these days, perhaps all, I haven't
checked) actually download a pre-compiled version under ``/usr/local``.
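
On a Mac, day-to-day use looks roughly like this (``wget`` and
``emacs`` are just examples of tools the stock install lacks)::

    # Userland package management, macOS style - no superuser needed
    # for the installs themselves once brew is in place
    brew install wget       # fetch a tool Apple left out
    brew install emacs      # as you should
    brew list               # what have I installed?
    brew upgrade            # update everything brew manages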

This ended up being pretty popular with advanced Mac users.  So much
so, that a derivative project, ``linuxbrew``, got spun up to take the
Mac stuff and apply it to Linux.  That is, to give the Linux user a
userland package management system.  It, too, has found success among
the Linux literati.

But ... there is a fly in the ointment.  When I first undertook this
project, I thought I could just pick a directory on a Linux machine and
use ``linuxbrew`` to install what I wanted.  *No habla Senor Frog*.
Many Linux binaries are sensitive to where they are installed, where
they can find their supporting libraries, and a host of other things.
So, if I install a binary with ``linuxbrew`` somewhere other than the
default ``/home/linuxbrew``, it's likely not going to work.  And I
*wanted* that to work.  I wanted to have a way of creating a tools
tree wherever I jolly well felt like putting it.

"So", sez me, "I'll just use ``linuxbrew`` to automate the download,
configuration, compliation, and installation of all the packages."
i.e. "I'll automate the build from source."  (That roaring laughing
you hear is coming from every Linux engineer who ever tried something
like this.)

I will spare you sensitive readers the subsequent cursing, whining,
begging, crying, and caterwauling that ensued.  Let's just say that
making a position-dependent package management system work in a
position-independent way is ... er, non-trivial.  In fairness, it's
not the fault of the ``linuxbrew`` people.  They were super supportive
and helpful with all this.  A lot of the issues have to do with the
packages themselves having embedded assumptions about where they can
find tools during the compilation phase.  That's right, the *source
code and configurations* have hardwired assumptions about where they
would find things like ``perl`` and ``make``.

At this point, the whole process had taken me a few dozen hours and
I was sufficiently enraged that I just *had* to figure it out.  As we'll
see shortly, I think I finally got there.  But, in the meantime ...

.. Note:: If you write software, config files, makefiles, test cases,
          or any part of the software delivery ecosystem *with hardwired
          paths to things embedded in them*, you are officially a
          big bozo.  Not the fun kind with a red nose and big shoes either.
          The *only* hardwired path that's
          OK is ``/bin/sh`` on a shebang line.  But if you do things
          like this:

              ``#!/usr/bin/python``

          You should be sent to work 1st level phone support on the
          midnight shift in Somalia until you learn better.   Grrrrrr.

          The right way to do this is:

              ``#!/usr/bin/env python``

          ``env`` can reliably be found in ``/usr/bin``, and it will
          "discover" where ``python`` happens to actually be installed
          on that machine, so long as it is in ``$PATH`` somewhere.
          Similarly, learn to use constructs like:

              ``DATE=$(which date)``

              ``DATE=${DATE:-/bin/date}``

          In short, **NEVER make assumptions where things are**. Always
          discover it at configuration time.
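
For instance, a build or install script can do its own discovery up
front, something like this (a sketch only; the tool names and fallback
paths are illustrative)::

    #!/bin/sh
    # Discover tools at configuration time instead of hardwiring paths
    PERL=$(which perl)
    PERL=${PERL:-/usr/bin/perl}        # last-ditch fallback only

    MAKE=$(which gmake || which make)
    if [ -z "$MAKE" ]; then
        echo "No make found in \$PATH - cannot continue" >&2
        exit 1
    fi

    echo "Using PERL=$PERL and MAKE=$MAKE"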


Preview Of Coming Attractions
-----------------------------

What I eventually (after many hours of whining, etc.) discovered was that
getting this to work required a number of key things:

  1) Everything has to be built from source *in the directory location
     being targeted*.  The only exception is the ``brew`` program
     itself, which is position agnostic.  So, if I want to build
     a tools tree under ``/my/fine/tools``, then I have to clone
     ``linuxbrew`` into that directory and do the build from there.

  2) The initial build requires the OS compiler chain and related
     development tools to bootstrap up a minimal ``linuxbrew``
     environment capable of compiling everything else.  You can do
     this on your own machine (not recommended because you shouldn't
     be fiddling around as root there), but a better way is to do it in
     a VM.  In my case, I made it even simpler by doing everything in
     ``docker`` containers.

  3) Once you have a bootstrapped ``linuxbrew`` environment running - i.e.,
     One that has a functioning ``gcc`` and supporting tool chain - you
     make a ``tar`` backup of it.  You then untar that onto a machine that has
     (almost) no native OS development tools on it and do the remainder
     of the installations from there.

     It's "almost" because - due to the aforementioned dain bramaged
     open source packages, You *have* to have the OS copies of
     ``autoconfig``, ``automake``, ``perl``, and ``make`` installed on
     your build machine.  These open source packages just *insist* that
     ``perl`` is always to be found under ``/usr/bin``, for example.

  4) When you're all done installing and configuring your
     ``linuxbrew`` environment, you just ``tar`` it off somewhere
     safe.  You can then untar it onto any other Linux machine (with
     a reasonably current kernel) so long as you do so at the *same
     directory location under which it was built*.

     This lends itself nicely to automated deploys via tools like ``tsshbatch``
     or ``ansible``.  You build a master tarball of your "standard" tools
     tree and then use automated deployment to put it everywhere.
     (See the sketch after this list.)
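
Put together, the whole sequence is roughly this (a sketch only:
``/my/fine/tools`` is the example prefix from above, the clone URL is
the one ``linuxbrew`` published at the time of writing, and the package
list is illustrative)::

    ### On the bootstrap machine (throwaway VM or container with gcc, etc.)

    TOOLS=/my/fine/tools

    # 1) Clone linuxbrew *at the final target location* and bootstrap it
    git clone https://github.com/Linuxbrew/brew.git "$TOOLS"
    export PATH="$TOOLS/bin:$PATH"
    brew install gcc                      # self-hosted tool chain

    # 2) Snapshot the bootstrapped tree
    tar czf /tmp/tools-bootstrap.tar.gz -C / my/fine/tools

    ### On the "real" build machine (almost no native dev tools)

    # 3) Restore at the *same* path and install the rest
    tar xzf /tmp/tools-bootstrap.tar.gz -C /
    export PATH=/my/fine/tools/bin:$PATH
    brew install emacs                    # ... and whatever else you need

    # 4) Snapshot the finished tree and deploy it everywhere,
    #    always untarring at /my/fine/tools
    tar czf /tmp/tools-master.tar.gz -C / my/fine/tools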


Doing It The ``docker`` Way
---------------------------

Like I said, you can do this in a VM, but the step-by-step
approach below uses ``docker`` containers, which are easy to
set up and tear down for testing.
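
For example, a throwaway bootstrap container can be spun up something
like this (the base image and mount point are illustrative, not a
prescription)::

    # Mount the target tools directory from the host so the
    # bootstrapped tree survives when the container is discarded
    docker run -it --rm \
        -v /my/fine/tools:/my/fine/tools \
        centos:7 /bin/bash

    # Inside the container: pull in the OS dev tools needed for the
    # bootstrap, then proceed as in the sketch above
    yum -y groupinstall "Development Tools"
    yum -y install curl git ruby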


Resources
---------




Document Information
--------------------