This repo provides a fast track to spinning up docker containers as "servers". You can log into these and do most of the things you do on a "real" server or VM.
- `ansible` to create and destroy a docker network
- `ansible` to create, restart, and destroy docker servers
- `ssh` keys are managed on a docker instance
- `ssh` access to a docker instance

But this is not a "toy" system. What you see here is a public subset of what we use all the time here at the TundraWare Intergalactic HQ. We use this for software development, testing new distributed computing ideas, and doing custom builds in a sanitized environment.
The content of this repo assumes you have done several things:

- `ansible` is installed on your machine
- `/shared` exists on your host machine with permissions 1777
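As a sketch, the `/shared` prep step amounts to the following (run as root, or prefix the commands with `sudo`):

```shell
# Create the host-side shared directory with mode 1777 (sticky + world-writable),
# as the prep work requires.
mkdir -p /shared
chmod 1777 /shared
ls -ld /shared
```

Mode 1777 lets every sandbox user write to the directory while the sticky bit keeps users from deleting each other's files.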
Here's the 10,000 foot view of what you'll have to do once the Prep Work above is done:

- Use `ansible` to start a docker network and the sandboxes

Various parts of this repo assume that there are (up to) 10 running sandboxes whose names are `dockersand1` through `dockersand10`. For this to work, you have to configure name resolution to properly associate these names with their equivalent IP addresses.
Most likely, you don't have control of your DNS configuration. The easy way around this is to add the entries you find in `dockerfiles/common/etc/dockersand.hosts` to your own `/etc/hosts` file.
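If you'd rather preview the change before touching the real file, here's a sketch. The IP addresses below are made up for illustration; the repo's `dockerfiles/common/etc/dockersand.hosts` has the authoritative entries.

```shell
# Hypothetical entries standing in for dockerfiles/common/etc/dockersand.hosts.
cat > /tmp/dockersand.hosts <<'EOF'
172.16.100.1   dockersand1
172.16.100.2   dockersand2
EOF

# Merge into a scratch copy of /etc/hosts so you can inspect the result,
# then copy it into place (as root) once it looks right.
cp /etc/hosts /tmp/hosts.new
cat /tmp/dockersand.hosts >> /tmp/hosts.new
grep dockersand /tmp/hosts.new
```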
Getting a docker container running requires it to be built from an "image". Images are built from something called a "dockerfile". It is this file that specifies which Linux distro your containers will be based on. It also specifies any special configuration or software installation you want in your containers. By setting up the image with this stuff ahead of time, it will be present every time you start a new container.
There are two dockerfiles in this repo. To build the corresponding images, do this:
```shell
cd dockerfiles
./build-img.sh dockersand-centos7
./build-img.sh dockersand-ubuntu
```
The creation and destruction of the sandboxes is automated using ansible "playbooks".
In each case you are creating/destroying 10 separate sandboxes.
To build the sandboxes and their network:
```shell
cd ansible
ansible-playbook -i inventory/dockersand playbooks/dockersand/dockersand_build.yml
```
To destroy the sandboxes and their network:
```shell
cd ansible
ansible-playbook -i inventory/dockersand playbooks/dockersand/dockersand_destroy.yml
```
To rebuild the sandboxes and their network:
```shell
cd ansible
ansible-playbook -i inventory/dockersand playbooks/dockersand/dockersand_rebuild.yml
```
By default, both the build and rebuild create sandboxes based on the centos7 image. But you can override this on the command line to specify a different image. Just add this to the end of the playbook command line:

```shell
--extra-vars "dockersand_image=dockersand-ubuntu"
```
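Putting it together, building the sandboxes on the ubuntu image looks like this (run from the repo's `ansible` directory, as above):

```shell
cd ansible
ansible-playbook -i inventory/dockersand playbooks/dockersand/dockersand_build.yml \
    --extra-vars "dockersand_image=dockersand-ubuntu"
```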
These sandboxes are set up so you can log in from your host machine to the running sandboxes using `ssh` keys. You will find the keys under `dockerfiles/common/.ssh/`. There is also an `ssh` configuration stanza you should add to your own `~/.ssh/config` to get your client to use the proper key.
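The stanza will look something like the following; the `Host` pattern and key path here are illustrative placeholders (use the actual key from `dockerfiles/common/.ssh/`). You can write a candidate stanza to a scratch file and ask `ssh -G` how it would resolve a sandbox name before merging it into `~/.ssh/config`:

```shell
# Hypothetical stanza; the IdentityFile path is a placeholder for the key
# that ships under dockerfiles/common/.ssh/ in this repo.
cat > /tmp/dockersand_ssh_config <<'EOF'
Host dockersand*
    User test
    IdentityFile /tmp/dockersand_key
    StrictHostKeyChecking no
EOF

# Preview how ssh would resolve one of the sandbox names under this config:
ssh -G -F /tmp/dockersand_ssh_config dockersand1 | grep -E '^(user|identityfile) '
```

`ssh -G` prints the effective configuration without connecting, so this is a safe way to check that the wildcard pattern catches all ten sandbox names.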
However, it is also possible to log in using the username `test` and password `test`.
Once you are logged in, you can promote yourself to `root` using the `sudo` command without any further password required.
The sandboxes are created to share the `/shared` directory with the host machine. Any file you put there is visible from any of the sandboxes and/or the host machine. This makes it easy to share or move data between the host and any of the sandboxes, or between the sandboxes themselves.
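For example, assuming a sandbox is up and your `ssh` configuration is in place, a file written on the host is immediately readable inside a sandbox:

```shell
# On the host: drop a file into the shared directory.
echo "hello from the host" > /shared/greeting.txt

# In any sandbox, the same file is visible at the same path:
ssh test@dockersand1 cat /shared/greeting.txt
```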
Not only is this tooling useful for building and using sandboxes, it's a good way to learn how docker and ansible work. There are comments throughout to help explain what's going on and why.
Here are a few ideas of how to expand on what you see here:
Try creating your own new dockerfile for a different distro like, say, debian or arch.
Find where the docker network subnet is specified and change it to something else. Don't forget to update `/etc/hosts` accordingly.
While in one sandbox, ssh into another. Notice that this just works. That's because the images are built with the proper ssh keys in place in the docker image. Thus, every container has them. Notice that the name-to-IP association does not exist in the container's own `/etc/hosts`. Do some research to figure out why it isn't needed.
The dockerfiles currently load a lot of software by default. Try factoring this out into separate ansible playbooks to load the software after the sandboxes are up and running. You'll have to parameterize it to account for the different software installation models and package names in the different distros.