This is automation support for the linuxbrew-based tools procedure documented at Divorce Your Linux Admin.
That document is a bit outdated, as it does not describe the simplified build noted below, but it does describe the overall idea of what we're trying to accomplish.
When this project was first conceived, the idea was to build some docker instances and use them as "build machines". Initially, we built a minimal tool set, installed it on another docker instance, and then built the remainder there.
This turned out to be unnecessary. You can do everything on one machine. For that matter, you don't even have to do the bootstrap phase separately from the full implementation. You can change the makefile
to just build everything in one go, if you prefer. We still like building, and saving, a minimal bootstrap image just because it provides a nice baseline for subsequent testing and changes to the full build.
You also don't need to use docker at all. You can use any VM or Linux machine to do this work so long as the libraries, headers, and compilers on it are compatible (i.e., close enough in version) to the machines where you want to run the tools.
This makefile
handles both the bootstrapping and then the full release of a custom linuxbrew based toolset, installed at any location you wish (so long as you have write permission there).
Before doing anything, edit the variables at the top of the makefile
to reflect where you want the built tarballs to be exported, where you intend to install the tools, what to build during the bootstrap phase, and what set of tools you want installed.
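As a rough sketch, that variable block looks something like the following. Only ${INSTALLDIR}, ${TOOLS}, ${MYTOOLS}, and ${PIPMODULES} are named elsewhere in this README; the other names and all of the values here are illustrative, so check your own copy of the makefile for the real ones.

# Illustrative only - your makefile's actual variable names and values govern.
INSTALLDIR = /opt/mydir              # parent directory the tools tree lives under
TOOLS      = tools                   # name of the tools directory itself
EXPORTDIR  = /tmp/exports            # hypothetical: where release tarballs get written
BOOTSTRAP  = gcc                     # hypothetical: what to build during the bootstrap phase
MYTOOLS    = git vim tmux python     # brew packages to install during the full build
PIPMODULES = requests                # python modules to add with pip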
NOTE: The makefile
assumes RedHat/CentOS style package management. That's because we run this inside of CentOS docker
containers, even if we're working on debian or Ubuntu systems. You'll have to update the file if you use apt-get
package management.
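If you do adapt it for apt-get, a rough equivalent of the package setup used in the bootstrap instructions below would be something along these lines. The package names are approximate and untested here; the extra perl module prerequisites in particular may need to come from your distribution's perl packages or from CPAN.

sudo apt-get -y install build-essential perl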
Both the bootstrap and full build processes create tarballs and rename the tools directory with a version stamp in the form YYYYMMDD. The idea is to allow multiple versions of your toolsets to exist under ${INSTALLDIR}
. You simply create a symlink in that directory named ${TOOLS}
to point to the version you want. This makes certain automation use cases with tsshbatch
or ansible
somewhat simpler.
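For example, assuming ${INSTALLDIR} is /opt/mydir and ${TOOLS} is tools (substitute your own values), pointing the active name at a particular build looks like:

cd /opt/mydir
ln -sfn tools-20180324 tools    # "tools" now resolves to the 20180324 build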
HOWEVER, during the actual build process described here, it is important that the directory be named canonically. That is, it should be located and named where you intend to deploy it. The binaries care - a lot - about where to look for their libraries and such. So, for instance, if you are deploying to /foo/bar/tools
, don't build it under /foo/bar/tools-20180324
. You build under /foo/bar/tools
. The release process will create a tarball that contains foo/bar/tools-YYYYMMDD
which you can untar to other machines (under /foo/bar/
). You can then either just rename it to tools
, or create a symlink called tools
that points to it.
This will most likely bite you the first time you untar a bootstrap tarball to perform a full build. DAMHIKT.
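Concretely, deploying a release tarball on a target machine might look like the following. The tarball name and location are illustrative, and you should check how the paths are stored (tar tzf) before choosing the directory to extract from.

cd /                                   # assumes the tarball stores paths relative to /
tar xzf /tmp/tools-20180324.tar.gz     # hypothetical tarball name and location
cd /foo/bar
ln -s tools-20180324 tools             # or: mv tools-20180324 tools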
When this process was first defined, we split the build into two steps. First, we did a "bootstrap" on a machine with a full set of local development tools and saved the result. Then, we loaded that minimal brew
image onto a machine with no local compiler to finish building the toolset.
After running this a bunch of times on a variety of VMs and containers, we discovered that this two-step process is unnecessary, and may even make it harder for the process to complete.
That process is still described in the following sections but the easier - and more likely to succeed - way to do this is to do everything on a single machine that has system compilers and languages installed:
. brewenvs
make getbrew bootstrap-build bootstrap-release full-build full-release clean
Notice that we still save off a small bootstrap tarball. Why? There may be times when you want to start with a "minimal" brew
installation for other work. This process preserves both that minimal toolset and the full toolset you've defined as separate installable tarballs.
Take care to read the note below to ensure that your build environment is the "right" one to build binaries that will work when deployed on your running systems.
Log into your build machine, VM, or docker
image.
Make sure you have write permission to the installation directory.
Make sure the native OS compiler tools are installed. Do not include the tools directories in your ${PATH}
at this time. We want this phase of the build to be done entirely with system tools.
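A quick sanity check that this phase really will use system tools (the paths shown are typical, not guaranteed):

which gcc make perl    # should all resolve to system locations such as /usr/bin
echo $PATH             # should not yet contain your tools installation directory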
Because of some flakiness in how openssl
builds during the bootstrapping process, you also initially need some additional perl module support. On CentOS 7:
sudo yum -y groupinstall "Development Tools"
sudo yum -y install perl-Module-Load-Conditional perl-core
Get the linuxbrew image:
make getbrew
Build the bootstrap image:
make bootstrap-build
Build a release tarball and export it:
make bootstrap-release
Cleanup:
make clean
Log into your build machine, VM, or docker
image. Make sure this machine does not have native OS compilers and development tools installed and/or in ${PATH}
! We want to use only the compiler and tools created in the previous step. Docker
containers are handy here: One for the bootstrap build, another for the self compiling full tools build.
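A hedged sketch of that setup with docker (the image tag and mount point are illustrative; the real difference between the two containers is simply whether you install the system compiler packages inside them):

# Bootstrap container: run the yum installs above inside it to get system compilers.
docker run -it -v /opt/mydir:/opt/mydir centos:7 /bin/bash

# Full-build container: a fresh instance with no compilers, sharing the same mount.
docker run -it -v /opt/mydir:/opt/mydir centos:7 /bin/bash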
Un-tar the bootstrap tarball created above into the proper location. Recall that this was saved with a date revision stamp. So, before proceeding, we have to:
cd ${INSTALLDIR} && mv -v ${TOOLS}-YYYYMMDD ${TOOLS}
Setup the required environment variables:
. brewenv
Make sure ${MYTOOLS}
and ${PIPMODULES}
include all the packages you want.
Build the full tool set using the bootstrapped compiler we just built:
make full-build
Export it for installation elsewhere:
make full-release
Cleanup:
make clean
We've just created a tarball that has all the tools we want precompiled and ready for distribution. We just untar the full tools tarball onto any other machine. The only restrictions are:
We must un-tar so that the tools directory ends up in the same location in the filesystem as where it was built. The binaries created above make assumptions about where to find their libraries and other dependencies. So, if we built the tools under:
/opt/mydir/tools
Every installation on other machines must also install them there (and be added to ${PATH}
as described in brewenv
).
Recall that this procedure actually creates the tools directory as:
/opt/mydir/tools-YYYYMMDD
In this example, you could either symlink tools
to that directory or just rename the directory accordingly.
The build- and target machines must have reasonably close kernel versions. That's because the bootstrap phase makes use of native OS header files that are kernel-dependent. If, say, you try to build this on a CentOS 7 instance, but then attempt to deploy to CentOS 5, expect problems. Always build your deploy image on an OS that is substantially the same as your targets. Again, docker
is your friend here.
In general, you will get into trouble if you try to build the tools on an OS whose libraries are newer than the target machines'. The binaries you build will not be able to find these newer libraries on the target machines and will fail. So, if you cannot match your targets exactly, build for the same machine architecture, but do it on an older revision of the OS. The newer targets usually have backward compatibility with the older libraries referenced and the tools should run fine.
brewenv File
The brewenv
file documents the environment variables that need to be set in order to access your installed binaries and support files. You may find this useful when doing the full build. You certainly will want these variables set when running a final installation of your tools.
Just be sure to edit it and change TOOLSDIR="/opt/TundraWare/tools"
to wherever your tools installation actually lives.
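If you need to reconstruct or adapt it, a minimal sketch of the kind of settings such a file provides is shown below. Only the TOOLSDIR line is quoted from the real file; the remaining variables are illustrative.

# Sketch only - the brewenv file shipped with this project is authoritative.
TOOLSDIR="/opt/TundraWare/tools"    # change to your actual install location
export PATH="${TOOLSDIR}/bin:${TOOLSDIR}/sbin:${PATH}"
export MANPATH="${TOOLSDIR}/share/man:${MANPATH}"
export INFOPATH="${TOOLSDIR}/share/info:${INFOPATH}"

Source it in your shell (as shown in the build steps above) so the variables take effect in the current session.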
No! Once you have a running installation, you can use brew
itself to do upgrades.
But beware, dragons therein lie (sometimes). When you do this, you will only be updating the tools that brew
knows have changed since you did the last build (or update). That's fine for the major things. But brew
doesn't usually know about the python
modules you've installed or custom perl
modules installed using cpan
. It's therefore possible to get weird version incompatibilities when an interpreter gets upgraded.
It is therefore recommended that you not do manual upgrading yourself, but do this instead:
make upgrade
This will both upgrade any relevant brew
components, and also upgrade the python
modules it initially installed itself. Obviously, it cannot catch things you've installed yourself thereafter. You can just add your own component upgrade commands to that same makefile
stanza.
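A hedged sketch of what such an upgrade stanza might look like (the real makefile is authoritative; the pip line in particular is illustrative, and recipe lines must be indented with a tab in an actual makefile):

upgrade:
	brew update                           # refresh brew's idea of what has changed
	brew upgrade                          # upgrade the brew-managed packages
	pip install --upgrade ${PIPMODULES}   # re-upgrade the python modules installed by the build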
Even so, you should periodically do a complete rebuild from the very beginning as outlined above. This will help minimize the accrued bitrot from incremental upgrades.
Always do any builds or upgrades with the destination directory named as it is found in the makefile. The various release stanzas temporarily rename that directory to myname-YYYYMMDD for purposes of creating release tarballs. You might therefore be tempted to do this:
ln -s myname-YYYYMMDD myname
That's fine so long as you make no changes or updates to what is in myname-YYYYMMDD. If you upgrade things or otherwise change them with commands like pip install mymodule
, you are asking for trouble. Why? Because many programs have dependencies on where they are installed. Their installers will find their absolute, real path and use that for the dependency in your installation.
So, if you create the tools using the makefile
, it uses myname
as the install location. If you then later use the myname-YYYYMMDD form with a symlink, and install or upgrade something, some of the binaries will depend on one name (the ones originally installed) and some will depend on the new name (the ones you add or upgrade later). This officially causes Great Pain (tm). DAMHIKT.
Just follow these two rules and you'll be fine:
Any time you are installing, upgrading, deleting or otherwise changing the installation, make sure that the target directory is named myname
.
If you are merely using the tools there, it's fine to create a symlink like this:
myname -> myname-YYYYMMDD
This is handy if you want to work with different builds of the tools.
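For example, flipping between two builds (names and dates illustrative):

cd /opt/mydir
ln -sfn myname-20180324 myname    # work with the March build
ln -sfn myname-20180401 myname    # later, switch to the April build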