Docker: Lightweight Linux Containers for Consistent Development and Deployment | Linux Journal
An informative article from Linux Journal on Docker.
I have started posting my builds of Docker.io. They are unofficial and unsupported by the community, pending official support and a code release for 32-bit architectures.
https://drive.google.com/drive/u/0/#folders/0Bx5ME-up1Usbb2JMdVBvNGFSTUE
I have set up my system to auto-build every week and post to this shared directory. There's a readme in the shared folder.
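For what it's worth, the weekly trigger is nothing more exotic than a cron entry along these lines (a sketch; the script name and paths are placeholders rather than my actual setup):
# Hypothetical crontab entry: kick off the build wrapper every Monday at 02:00
# and keep a log next to it. Adjust the script name and paths to suit.
0 2 * * 1 /home/user/buildAndPublish.sh > /home/user/docker-build.log 2>&1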
Virtualisation, sandboxes, containers: all terms and technologies used for various reasons. Security is not always the main reason, but considering the details in this article, it is a valid one. It is simple enough to set up a container on your machine. LXC/Linux Containers, for example, don't have as much overhead as a VirtualBox or VMware virtual machine and can run almost, if not just, as fast as a native installation (I'm using LXC for my Docker.io build script). More to the point, if a container is infected with malware, you can drop and rebuild the container, or roll back to a snapshot, much more easily than reimaging your machine.
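As an example of that rollback, recent LXC releases can snapshot and restore a container in a couple of commands (a sketch, assuming LXC 1.0 or later, a stopped container named "Ubuntu" and a backing store that supports snapshots):
# Take a snapshot of the stopped container before experimenting
lxc-snapshot -n Ubuntu
# List existing snapshots (snap0, snap1, ...)
lxc-snapshot -n Ubuntu -L
# Roll back: restore snap0 over the existing container
lxc-snapshot -n Ubuntu -r snap0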
Right now I run three different environments. The first is my main Ubuntu Studio install, which is not a container but my core OS; the second is my Docker.io build LXC, which I rebuild every time I compile (and I now have that tied into Jenkins, so I might put up regular builds somehow); and the final one is a VirtualBox virtual machine running Windows 7 so I don't have to dual boot.
How Splitting A Computer Into Multiple Realities Can Protect You From Hackers | WIRED.
NOTE: Automated 32-bit-enabled builds are now available. See this page for link details.
EDIT 29th September 2015: This article seems to be quite popular. Note that Docker has progressed a long way since I wrote this, and the script has pretty much broken because things have been moved around and versions have been updated. You can still read this to learn the basics of LXC-based in-container compiling, and if you want to extend it, go right ahead. When I get a chance, I will try to figure out why this build keeps breaking.
Steps to compile and build a docker.io binary on a 32-bit architecture, to work around the fact that the community does not currently support anything other than 64-bit, even though this restriction has been flagged up many times.
A caveat, though. While the binary you compile via these steps will run on a 32-bit architecture, the Docker images you download from the cloud may NOT work, as the majority are built for 64-bit architectures. So if you compile a Docker binary, you will have to build your own images. Not too difficult – you can use LXC or debootstrap for that. Here is a quick tutorial on that.
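As a rough sketch of the debootstrap route (the release, mirror and image tag here are illustrative, and it assumes you already have a working 32-bit docker binary from the steps below):
# Build a minimal 32-bit Ubuntu Trusty root filesystem with debootstrap
sudo debootstrap --arch=i386 trusty ./trusty-i386 http://archive.ubuntu.com/ubuntu
# Import it into Docker as a base image (repository:tag name is up to you)
sudo tar -C ./trusty-i386 -c . | sudo docker import - local/trusty-i386
# Quick check that the image actually runs
sudo docker run -i -t local/trusty-i386 /bin/bash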
I am using an LXC container to do my build as it helps control the packages and reduces the chance of a conflict between two versions of a package (e.g. one "dev" version and one "release" version). The LXC container is also disposable: I can get rid of it each time I do a build.
I utilise three scripts: one to rebuild my LXC build environment, one that starts up the build-environment container, drives the build and takes the container down afterwards, and the actual build script that runs inside the container. To make it more automated, I set up my LXC container to allow a passwordless SSH login (see this link). This means I can scp files to and from the container without having to enter my password, which is useful because the container can take a few seconds to start up. It does open security problems, but as long as the container is only up for the duration of the build, this isn't a problem.
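The passwordless login itself is just standard SSH public-key authentication (a sketch; the Ubuntu template can also inject the key at creation time via --auth-key, as Script 1 below does, and <container-ip> is a placeholder):
# Generate a key pair if you don't already have one (empty passphrase for automation)
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
# Copy the public key into a running container's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@<container-ip>
# From now on, ssh/scp to the container won't prompt for a password
scp -i ~/.ssh/id_rsa somefile ubuntu@<container-ip>:/home/ubuntu/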
EDIT: One note. If you have upgraded to Ubuntu’s Utopic Unicorn version, you may end up having to enter your GPG keyring password a few times.
EDIT 2: A recent code change has caused this build script to break. More details, and how to fix it on this post.
As with most things Linux and script-based, there are many ways to do the same thing. This is just one way.
Script 1: rebuildDockerLXC.sh
This script rebuilds my LXC build environment.
First, I take down the container if it already exists. I named my container “Ubuntu” for simplicity.
lxc-stop -n Ubuntu
Next, destroy it.
lxc-destroy -n Ubuntu
Now create a new one using the Ubuntu template. Here, I also inject my SSH public key into the container so I can use passwordless SSH
IMPORTANT NOTE: If you are NOT running Ubuntu Trusty, you MUST use the "--release" option. If you are running on an x86 architecture and want to compile a 32-bit version, you MUST also use "--arch i386" (otherwise LXC will pull the amd64 packages down instead). There is a problem with the Go build script with Utopic, hopefully to be fixed at some point in the future.
lxc-create -n Ubuntu -t ubuntu -- --release trusty --arch i386 --auth-key /home/user/.ssh/id_rsa.pub
Start up the container, and send it to the background
lxc-start -n Ubuntu -d
Wait till LXC reports the IP address of the container, then assign it to a variable for reuse later. We do this by waiting for LXC to report the IP then running ‘ifconfig’ within the container to get the IP as seen by the container. The ‘lxc-info’ command can return two IP addresses – the actual one, and the bridge, and it is not always obvious which one is which.
while [ $(lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l) -ne 1 ]
do
sleep 1s
done
IP=$(lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1)
echo Main IP: $IP
The container is now set up; take it down again.
lxc-stop -n Ubuntu
Script 2: compileDocker.sh
This script is the wrapper script around the build process. It starts up the build container, runs the build in the container, then pulls the resulting output from the container after the build is done, extracting it to the current folder
First, check if we should rebuild the build environment. I normally do, to guarantee a clean slate each time I run the build.
echo -n "Rebuild Docker build environment (Y/N)? "
read REPLY
case "$REPLY" in
Y|y)
echo Rebuilding docker build environment
./rebuildDockerLXC.sh #If you want to rebuild the LXC container for each build
;;
N|n|*)
echo Not rebuilding docker build environment
;;
esac
Start/restart the build container
lxc-stop -n Ubuntu
lxc-start -n Ubuntu -d
Get the IP address of the container
while [ $(lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l) -ne 1 ]
do
sleep 1s
done
IP=$(lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1)
echo Main Container IP: $IP
Now push the compile script to the container. This will fail whilst the container starts up, so I keep retrying
echo Pushing script to IP $IP
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/dockerCompile.sh ubuntu@$IP:/home/ubuntu
while [ $? -ne 0 ]
do
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/dockerCompile.sh ubuntu@$IP:/home/ubuntu
done
With the container started, we can invoke the compile script within the container. This does the build and will take a while.
lxc-attach -n Ubuntu -- /home/ubuntu/dockerCompile.sh
Now, after the build is done, pull the results from the container
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa ubuntu@$IP:/home/ubuntu/*.txz .
Take down the container
lxc-stop -n Ubuntu
Extract the package for use
for a in $(ls *.txz)
do
echo Extracting $a
tar -xvvvf $a && rm $a
done
Done.
Script 3: dockerCompile.sh
This script is run inside the container and performs the actual build. It is derived mostly from the Dockerfile that is included in the Docker.io repository, with some tweaks.
First, we install the basic packages for compiling
cd /home/ubuntu
echo Installing basic dependencies
apt-get update && apt-get install -y aufs-tools automake btrfs-tools build-essential curl dpkg-sig git iptables libapparmor-dev libcap-dev libsqlite3-dev lxc mercurial parallel reprepro ruby1.9.1 ruby1.9.1-dev pkg-config libpcre* --no-install-recommends
Then we pull the Go repository and build the Go toolchain from source.
hg clone -u release https://code.google.com/p/go ./p/go
cd ./p/go/src
./all.bash
cd ../../../
We set up the environment variables for Go.
export GOPATH=$(pwd)/go
export PATH=$GOPATH/bin:$PATH:$(pwd)/p/go/bin
export AUTO_GOPATH=1
Next, we pull from the lvm2 repository to build a version of devmapper needed for static linking.
git clone https://git.fedorahosted.org/git/lvm2.git
cd lvm2
(git checkout -q v2_02_103 && ./configure --enable-static_link && make device-mapper && make install_device-mapper && echo lvm build OK!) || (echo lvm2 build failed && exit 1)
cd ..
EDIT see this link for extra code that should go here.
Next, get the docker source
git clone https://github.com/docker/docker $GOPATH/src/github.com/docker/docker
Now the important bit. We patch the source code to remove the 64-bit arch restriction.
for f in $(grep -r 'if runtime.GOARCH != "amd64" {' $GOPATH/src/* | cut -d: -f1)
do
echo Patching $f
sed -i 's/if runtime.GOARCH != "amd64" {/if runtime.GOARCH != "amd64" \&\& runtime.GOARCH != "386" {/g' $f
done
Finally, we build docker. We utilise the Docker build script, which gives a warning as we are not running in a docker environment (we can’t at this time, since we have no usable docker binary)
cd $GOPATH/src/github.com/docker/docker/
./hack/make.sh binary
cd ../../../../../
Assuming the build succeeded, we should be able to bundle the binaries (this will be copied off by the compileDocker.sh script)
cd go/src/github.com/docker/docker/bundles
for a in $(ls)
do
echo Creating $a.txz
tar -cJvvvvf $a.txz $a
done
mv *.txz ../../../../../../
cd ../../../../../../
And that’s it. How to build docker on a 32-bit arch.
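If you want a quick sanity check of the result, something like this should do (a sketch; the directory name is whatever version string your build produced, shown here only as an example):
# The extracted bundle contains the freshly built binary
cd 1.3.0-dev/binary        # example version string; use whatever your build produced
sudo ./docker -d &         # start the daemon with the new binary (old-style -d flag)
sudo ./docker version      # client and daemon should both report your build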
After much tinkering and cursing, I finally managed to get a Linux container running. I had originally wanted a Fedora container, but for some unknown reason it would not start. Instead, I tried a CentOS 6 container, and that started up successfully, so I am using that instead. It actually works out well, because I can tinker with the CentOS container, experiment with different configurations, and maybe practise setting it up as a proper (i.e. no GDM) server. That will help if I decide to go for a Red Hat-themed Linux certification.
It still bugs me why the Fedora 20 container won't start, though.
Wow, you learn something new every day. I've just found out about two variations on virtualisation: Linux Containers (LXC) and Vagrant.
Linux Containers (LXC) is known as OS-level virtualisation, meaning the kernel itself looks after the virtualisation and there is no need for extra management software along the lines of VMware or VirtualBox. Guest OSes run as containers, similar to chroot jails, and all containers share the same kernel and resources as the host system you booted from. As such, LXC only supports Linux-based guest OSes; you can't (easily, anyway) run Windows under LXC. Homepage, Wikipedia.
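If you just want to kick the tyres, a throwaway container on Ubuntu is only a handful of commands (a sketch; package and template names are as of Ubuntu 14.04):
sudo apt-get install lxc
sudo lxc-create -n test -t ubuntu        # build a container from the Ubuntu template
sudo lxc-start -n test -d                # boot it in the background
sudo lxc-attach -n test -- /bin/bash     # get a shell inside it
sudo lxc-stop -n test
sudo lxc-destroy -n test                 # throw it away again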
Vagrant is a strange one. It sells itself as a way to keep development environments consistent, and I can understand why: if you have a team of people all with a VM of the same OS, they can end up with different results because they have each tinkered with the settings of their own VM. Vagrant prevents this by keeping the core image in the cloud; each time the machine is started up, it checks itself against the cloud version and updates itself if needed. That guarantees consistency. Homepage, Wikipedia.
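By way of comparison, getting a Vagrant box up is equally brief (a sketch; "hashicorp/precise32" is just one example of a publicly published base box):
vagrant init hashicorp/precise32   # write a Vagrantfile pointing at a published base box
vagrant up                         # download the box if needed and boot the VM
vagrant ssh                        # log into it
vagrant destroy                    # throw the VM away; the downloaded base box stays cached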
I haven't tried either of these tools in great detail yet, but here are some related links for you to check out:
Had my first encounter with Linux, or more specifically a Linux-like environment, in a corporate setting. The IT people were trying to set up an environment on XenServer, and they had set up a storage space to copy a virtual machine image onto, but they kept running out of space. It took me a while to figure out what they were doing (wrong), though.
They were trying to copy onto the PV partition, but XenServer had set up its environment to use LVM, so the PV partition was already allocated to the LVM system and therefore had no space to copy onto.
After figuring out which LV was the one they wanted to use, I had problems mounting it, with mount saying I had to specify the filesystem. After trying various switches with mount and specifying a filesystem (only NFS, ext, ext2 and ext3 were supported by XenServer – no vfat, ntfs or btrfs, although admittedly the XenServer version the IT people were using was an older one), I found out that they had created the storage space but not done anything else with it. That explained why I couldn't mount it: it had never been formatted. A simple mkfs.ext3 (remember, ext4 wasn't supported) on the block device in /dev/mapper/ meant I could mount it without specifying a filesystem, and scp'ing into the server and copying into the path proved it worked.
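For the record, the whole fix boiled down to something like this (a sketch; the volume group and LV names are made up for illustration):
# See what LVM has already claimed on the box
pvs
lvs
# The storage LV existed but had never been formatted, hence mount's complaints
mkfs.ext3 /dev/mapper/VG_XenStorage-storage
mkdir -p /mnt/storage
mount /dev/mapper/VG_XenStorage-storage /mnt/storage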