diff --git a/divorce_your_linux_admin.rst b/divorce_your_linux_admin.rst
index fc00b14..e8a3a00 100644
--- a/divorce_your_linux_admin.rst
+++ b/divorce_your_linux_admin.rst
@@ -9,7 +9,7 @@
 pretty quickly that ``root`` is hard to get, it takes a ton of
 paperwork to get anything done, and you usually have to wait forever.
 
 I've actually had the experience of waiting for 6 weeks to get
-permission to install as symlink ... *and I had* ``root``!
+permission to install a single symlink ... *and I had* ``root``!
 
 There is a good reason for this, of course.  Security threats are very
-real, lawsuits are ominipresent, and the Geniuses In Charge (tm) are
+real, lawsuits are omnipresent, and the Geniuses In Charge (tm) are
@@ -21,7 +21,7 @@
 So ... is there a better way?  Is there a way to eliminate the
 requirement for ``root`` in most day-to-day things we need to do as
-users and developers.  Is there a way we can comply with the required
-corporte security constraints, but still run our own happy show?  The
+users and developers?  Is there a way we can comply with the required
+corporate security constraints, but still run our own happy show?  The
 answer is a qualified "Yes".
 
-Some things do- and always will need ``root``:  Managing devices,
+Some things do, and always will, need ``root``:  Managing devices,
@@ -43,22 +43,23 @@
 Wouldn't it be nice if we could implement package management in
 userland in a way that is repeatable, can be automated, and gives us
 control of our own universe without having to beg for ``root`` changes
-or have to wait for the vendor to release a new package.  Well,
-Sparky, we have the techology to do just that.
+or have to wait for the vendor to release a new version.  Well,
+Sparky, we have the technology to do just that.
 
 It's worth mentioning that the approach outlined below is especially
 handy with cloud and on-demand computing.  It makes automating your
-deploys pretty simple.  It's also actually pretty handy on your own
-machines when you *do* have ``root``.  The less you use superuser, the
-less chance you'll screw something up.
+deploys pretty simple.  It's also actually handy on your own machines
+when you *do* have ``root``.  The less you use superuser, the less
+chance you'll screw something up.
 
 .. WARNING:: What follows has been implemented on an experimental
-             basis.  It's been tested in only a very limited number
-             of systems but seems to work well.  However, you should
-             do your own detailed testing before deploying this into
-             a production environment.  Failure to do so may result
-             in broken systems, hallway snikering, hives, and being
-             transferred to your new development shop in Adak, AK.
+             basis.  It's been tested on only a very limited number of
+             systems but seems to work well.  However, you should do
+             your own detailed testing before deploying this into a
+             production environment.  Failure to do so may result in
+             broken systems, hallway snickering, hives, and being
+             transferred to your new development shop in Moose
+             Dropping Pass, Alaska.
 
 
 MacOS Rescues Linux
@@ -67,12 +68,13 @@
 The approach we're going to describe got started in the Mac OSX world.
 Back when Apple finally came to their senses, and switched their OS to
 a Unix-base (FreeBSD 4.4), they only partly implemented the shell
-tools everyone had come to know and love.  The ``brew`` project got
-spun up to allow any OSX user to install the command line applications
-they knew and loved from Unix.  ``brew`` is essentially a userland
-package management system which can be run and modified without superuser
-power.  Many of the ``brew`` packages (these days, perhaps all, I haven't
-checked) actually download a pre-compiled version under ``/usr/local``.
+tools everyone had come to know and love.  The ``homebrew`` project
+got spun up to allow any OSX user to install the command line
+applications they knew and loved from Unix.  ``homebrew`` is
+essentially a userland package management system which can be run and
+modified without superuser power.  Many of these packages (these days,
+perhaps all, I haven't checked) actually download a pre-compiled
+version under ``/usr/local``.
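+
+To make this concrete, here's a sketch of a typical ``homebrew``
+session.  Note the complete absence of ``sudo``; the package chosen
+is just illustrative::
+
+    $ brew install wget        # fetched/built under /usr/local
+    $ which wget               # the userland copy now wins on $PATH
+    /usr/local/bin/wget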
 
 This ended up being pretty popular with advanced Mac users.  So much
 so, that a derivative project, ``linuxbrew``, got spun up to take the
@@ -86,13 +88,14 @@
 Many Linux binaries are sensitive to where they are installed, where
 they can find their supporting libraries and a host of other things.
 So, if I install a binary with ``linuxbrew`` somewhere other than the
-default ``/home/linuxbrew``, it's likely not going to work.  And I
-*wanted* that to work.  I wanted to have a way of creating a tools
-tree wherever I jolly well felt like putting it.
+default ``/home/linuxbrew``, it's likely not going to work.  But that
+ability is exactly what I needed.  Each different application, user,
+or service ID should be free to install their desired tool set
+wherever they wish.
 
 "So", sez me, "I'll just use ``linuxbrew`` to automate the download,
-configuration, compliation, and installation of all the packages."
-i.e. "I'll automate the build from source."  (That roaring laughing
+configuration, compilation, and installation of all the packages."
+i.e., "I'll automate the build from source."  (That roaring laughter
 you hear is coming from every Linux engineer who ever tried something
 like this.)
@@ -101,23 +104,25 @@
 making a position-dependent package management system work in a
 position-independent way is ... er, non-trivial.  In fairness, it's
 not the fault of the ``linuxbrew`` people.  They were super supportive
-and helpful with all this.  A lot of the issues have to do with the
-packages themselves having embedded assumptions about where they can
-find tools during the compilation phase.  That's right, the *source
-code and configurations* have hardwired assumptions about where they
-would find things like ``perl`` and ``make``.
+and helpful with all this.  It wasn't their code that was the problem
+(mostly, I did find a minor bug or two which the ``linuxbrew`` folks
+fixed at light speed).  Most of the issues had to do with the packages
+themselves having embedded assumptions about where they can find tools
+during the compilation phase.  That's right, the *source code and
+configurations* have hardwired assumptions about where they would find
+things like ``perl`` and ``make``.
 
 At this point, the whole process had taken me a few dozen hours and I
-was sufficiently enraged that I just *had* to figure out.  As we'll
-see shortly, I think I finally go there.  But, in the mean time ...
+was sufficiently enraged that I just *had* to figure it out.  As we'll
+see shortly, I think I finally got there.  But, in the meantime ...
 
 .. Note:: If you write software, config files, makefiles, test cases,
-          or any part of the software delivery ecosystem *with hardwired
-          paths to things emebedded in them*, you are officially a
-          big bozo.  Not the fun kind with a red nose and big shoes either.
-          The *only* hardwired path that's
-          OK is ``/bin/sh`` on a shebang line.  But if you do things
-          like this:
+          or any part of the software delivery ecosystem *with
+          hardwired paths to things embedded in them*, you are
+          officially a big bozo.  Not the fun kind with a red nose and
+          big shoes either.  The *only* hardwired path that's OK is
+          ``/bin/sh`` on a shebang line.  But if you do things like
+          this:
 
           ``#!/usr/bin/python``
@@ -130,7 +135,7 @@
           ``env`` can reliably be found there and it will "discover"
           where ``python`` happens to actually be installed on that
-          machine, so long as it is in $PATH somewhere.  Similarly,
+          machine, so long as it is in ``$PATH`` somewhere.  Similarly,
           learn to use constructs like:
 
           ``DATE=$(which date)``
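+
+          A wee sketch of the whole idea in shell, where the tool
+          names are only examples::
+
+              #!/usr/bin/env bash
+              # Find the tools at runtime rather than hardwiring paths.
+              DATE=$(which date)
+              TAR=$(which tar)
+              echo "Archive started at $($DATE)"
+              # Archive a (hypothetical) $HOME/tools tree.
+              $TAR -czf "$HOME/tools.tar.gz" "$HOME/tools"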
@@ -144,8 +149,8 @@
 Preview Of Coming Attractions
 -----------------------------
 
-What I eventually (after many hours of whining, etc.) disovered, was that
-getting this to work required a number of key things:
+What I eventually discovered was that getting this to work required a
+number of things:
 
 1) Everything has to be built from source *in the directory location
    being targeted*.  The only exception is the ``brew`` program
@@ -174,29 +179,122 @@
    ``perl`` is always to be found under ``/usr/bin``, for example.
 
 4) When you're all done installing and configuring your
-   ``linuxbrew`` environment, you just `tar`` it off somewhere
+   ``linuxbrew`` environment, you just ``tar`` it off somewhere
    safe.  You can then untar it onto any other Linux machine
    (with a reasonably current kernel) so long as you do so at the
    *same directory location under which it was built*.
 
-This lends itself nicely to automated deploys via tools like ``tsshbatch``
-or ``ansible``.  You build a master tarball of your "standard" tools
-tree and then use automated deployment to put it everywhere.
+This lends itself nicely to automated deploys via tools like
+``tsshbatch`` or ``ansible``.  You build a master tarball of your
+"standard" tools tree and then use automated deployment to put it
+everywhere.
 
 
 Doing It The ``docker`` Way
 ---------------------------
 
-Like I said, you can do this in a VM, but the step-by-step
-approach below uses ``docker`` containers which are easy to
-setup and tear down for testing.
+Like I said, you can do this in a VM, but the step-by-step approach
+below uses ``docker`` containers which are easy to set up and tear
+down for testing.  More importantly, you can install and remove native
+system packages as you go without gumming up your host system.  I've
+used this approach extensively over the past several years for another
+important reason: *I always have root on a container*.  That makes it
+trivial to do the required OS package management (installing and
+removing native compilers, for example).
+
+In my test environment, the containers have a number of properties.
+You don't have to do it this way, of course, but it makes things a
+lot simpler if you do (a sketch pulling these together follows the
+list):
+
+  - They run ``sshd`` so I can log into them easily from the host
+    system.
+
+  - I have the ability to log in as an unprivileged user (``test``)
+    or as ``root``.  ``test`` also has the ability to ``sudo`` to
+    superuser.
+
+  - They share a filesystem with the host so that I can read/write
+    files from any running container AND the files I do write
+    persist across container rebuilds.
+
+  - The containers get started with the ``--security-opt
+    seccomp=unconfined`` option.  Building ``emacs`` revealed the
+    need for this.  By default, ``docker`` starts containers with
+    restricted access to many of the host OS system calls.  It does
+    so in order to keep the container isolated from its host
+    environment.  But this badly broke the ``emacs`` build, which had
+    fits because of the way the OS was allocating memory.  The fix is
+    to use the above argument to give the container full access to
+    all the system calls.  You do *not* want to do this in normal
+    container operations.  This is strictly for building things.
+    More information on this here:
+
+        https://pastebin.tundraware.com/view/e309f836
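+
+A container start for this kind of build work might look something
+like this.  It's a sketch, not a recipe; the image name, published
+port, and volume paths are purely illustrative::
+
+    # Image name, port, and volume paths below are hypothetical.
+    docker run -d \
+        --name brew-build \
+        --security-opt seccomp=unconfined \
+        -v /home/shared:/shared \
+        -p 2222:22 \
+        my-build-image
+
+With ``sshd`` running in the image and port 2222 published, logging
+in from the host is just ``ssh -p 2222 test@localhost``.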
+
+
+Let's Do This Already
+---------------------
+
+
+
+Gotchas
+-------
+
+Here are a few things to keep in mind:
+
+  - Some packages are just broken and require surgery to get working.
+    As of this writing, ``socat`` stubbornly refuses to go in via
+    this process, for example.
+
+  - When you bootstrap the system, you are building it with the OS'
+    own compilers and header files.  If you later copy your work to
+    a machine with a wildly different (older or newer) kernel, you
+    may run into compatibility issues.  The fix is to redo the above
+    on a host with the kernel version of interest.
+
 
 Resources
 ---------
 
+The main ``linuxbrew`` page is:
+
+    http://linuxbrew.sh
+
+The related GitHub projects are here:
+
+    https://github.com/Linuxbrew
+
+
+Author
+------
+
+::
+
+    Tim Daneliuk
+    tundra@tundraware.com
+
+
+Copyright And Licensing
+-----------------------
+
+**Divorce Your Linux Admin** is Copyright (c) 2017 TundraWare Inc.,
+Des Plaines, IL 60018 USA
+
+Permission for unlimited distribution and use of this document is
+hereby given so long as this document is reproduced in full.  This
+document may also be quoted in any part so long as original
+attribution is provided with the quoted material.
 
 Document Information
 --------------------
+
+You can find the latest version of this document at:
+
+    http://www.tundraware.com/divorce
+
+A PDF version of the document may also be downloaded from:
+
+    http://www.tundraware.com/lessons/divorce_your_linux_admin.pdf
+
+This document was produced using ``reStructuredText`` and ``TeXLive``.