
Virtualisation, Sandboxes, Containers. All terms and technologies used for various reasons. Security is not always the main reason, but considering the details in this article, it is a valid one. It is simple enough to set up a container on your machine. LXC/Linux Containers, for example, don’t have as much overhead as a VirtualBox or VMware virtual machine and can run almost, if not just, as fast as a native installation (I’m using LXC for my Docker.io build script). Conceptually, if you use a container and it gets infected with malware, you can drop and rebuild the container, or roll back to a snapshot, far more easily than reimaging your machine.
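For example, with LXC 1.0’s lxc-snapshot tool, rolling back is only a couple of commands. A minimal sketch, assuming a container named Ubuntu (the name I use later in this article):
# Snapshot the stopped container (snapshots are named snap0, snap1, ...)
lxc-stop -n Ubuntu
lxc-snapshot -n Ubuntu
# List the snapshots we have
lxc-snapshot -n Ubuntu -L
# Roll the container back to snap0, e.g. after an infection
lxc-snapshot -n Ubuntu -r snap0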
Right now I run three different environments. One is my main Ubuntu Studio, which is not a container but my core OS. The second is my Docker.io build LXC, which I rebuild every time I compile (and I now have that tied into Jenkins, so I might put up regular builds somehow). The final one is a VirtualBox virtual machine running Windows 7 so I don’t have to dual boot.
How Splitting A Computer Into Multiple Realities Can Protect You From Hackers | WIRED.

I am getting pretty peeved with Google recently. I have a huge amount of music in my Google Music library – so much, in fact, that I hit Google’s track limit for uploads. Now I’m trying to download my purchased music back to my machine, but their MusicManager is winding me up no end. It downloads for a while, then stops, thinking it has finished, with several tracks not downloaded. I restart the download, and it goes on a bit more, then stops again.
Google suggested a few things, eventually ending up blaming my ISP. But there isn’t much alternative for me. Other than my current ISP, I can only use my corporate connection, which requires a proxy – something Google do not support in MusicManager – or Tor, which also doesn’t work properly. They suggested using the Google Music app, but that only works (if it ever does) on a single album at a time.
I even tried using AWS and Google Cloud, but the app ties itself to the MAC address and refuses to identify my machine (which is a virtual machine). I also tried using an LXC container, and that worked for a bit longer, but also died. So now I’m trying a Docker image. Slightly different concept, but let’s see if it works.
If that doesn’t work, I’m going to try using TAILS.
EDIT: The Docker image didn’t work either. So anything with a “true” virtual environment, such as AWS, GC, and Docker, doesn’t seem to work (VirtualBox will probably be in this list too), while anything else (e.g. LXC) will work for a while, but fail later.
NOTE: Automated 32-bit-enabled builds are now available. See this page for link details.
EDIT 29th September 2015: This article seems to be quite popular. Note that Docker has progressed a long way since I wrote this and it has pretty much broken the script due to things being moved around and versions being updated. You can still read this to learn the basics of LXC-based in-container compiling, and if you want to extend it, go right ahead. When I get a chance, I will try to figure out why this build keeps breaking.
Steps to compile and build a Docker.io binary on a 32-bit architecture, to work around the fact that the community does not support anything other than 64-bit at present, even though this restriction has been flagged up many times.
One caveat, though. While the binary you compile via these steps will work on a 32-bit architecture, the Docker images you download from the cloud may NOT work, as the majority are built for 64-bit architectures. So if you compile a 32-bit Docker binary, you will have to build your own images. Not too difficult – you can use LXC or debootstrap for that. Here is a quick tutorial on that.
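To give a flavour of the debootstrap route, here is a minimal sketch; the mirror URL, target directory and image name are illustrative, not taken from that tutorial:
# Bootstrap a minimal 32-bit Ubuntu Trusty root filesystem
sudo debootstrap --arch=i386 trusty ./rootfs-i386 http://archive.ubuntu.com/ubuntu
# Pack it up and import it as a local Docker base image
sudo tar -C ./rootfs-i386 -c . | docker import - local/trusty-i386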
I am using an LXC container to do my build, as it helps control the packages and reduces the chance of a conflict between two versions of a package (e.g. one “dev” version and one “release” version). The LXC container is also disposable – I can get rid of it each time I do a build.
I utilise several scripts: one to do a build/rebuild of my LXC container; one to start up my build-environment LXC container, run the build, and take the container down afterwards; and the other, the actual build script. To make it more automated, I set up my LXC container to allow a passwordless SSH login (see this link). This means I can scp files into and out of the container without having to enter my password – useful because the container can take a few seconds to start up. It does open up a security hole, but as long as the container is only up for the duration of the build, this isn’t a problem.
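If you need a key pair for this, something along these lines will create one; the paths match those assumed in the scripts below:
# Generate an RSA key pair with an empty passphrase (hence "passwordless")
ssh-keygen -t rsa -N "" -f /home/user/.ssh/id_rsa
# The public half (/home/user/.ssh/id_rsa.pub) is what gets injected into
# the container via lxc-create's --auth-key option in Script 1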
EDIT: One note. If you have upgraded to Ubuntu’s Utopic Unicorn version, you may end up having to enter your GPG keyring password a few times.
EDIT 2: A recent code change has caused this build script to break. More details, and how to fix it on this post.
As with most things Linux and script-based, there are many ways to do the same thing. This is just one way.
Script 1: rebuildDockerLXC.sh
This script rebuilds my LXC build environment.
First, I take down the container if it already exists. I named my container “Ubuntu” for simplicity.
lxc-stop -n Ubuntu
Next, destroy it.
lxc-destroy -n Ubuntu
Now create a new one using the Ubuntu template. Here, I also inject my SSH public key into the container so I can use passwordless SSH
IMPORTANT NOTE: If you are NOT running Ubuntu Trusty, you MUST use the “--release” option. If you are running on an x86 architecture and want to compile a 32-bit version, you MUST also use “--arch i386” (otherwise LXC will pull the amd64 packages down instead). There is a problem with the Go build script under Utopic. Hopefully to be fixed at some point in the future.
lxc-create -n Ubuntu -t ubuntu -- --release trusty --arch i386 --auth-key /home/user/.ssh/id_rsa.pub
Start up the container, and send it to the background
lxc-start -n Ubuntu -d
Wait until LXC reports the IP address of the container, then assign it to a variable for reuse later. We do this by waiting for lxc-info to report a single IP, then running ifconfig within the container to get the IP as seen by the container. The lxc-info command can return two IP addresses – the actual one and the bridge – and it is not always obvious which one is which.
while [ $(lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l) -ne 1 ]
do
sleep 1s
done
IP=$(lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1)
echo Main IP: $IP
The container is set up; take it down now.
lxc-stop -n Ubuntu
Script 2: compileDocker.sh
This script is the wrapper around the build process. It starts up the build container, runs the build inside it, then pulls the resulting output from the container after the build is done, extracting it to the current folder.
First, check if we should rebuild the build environment. I normally do, to guarantee a clean slate each time I run the build.
echo -n "Rebuild Docker build environment (Y/N)? "
read REPLY
case "$REPLY" in
Y|y)
echo Rebuilding docker build environment
./rebuildDockerLXC.sh #If you want to rebuild the LXC container for each build
;;
N|n|*)
echo Not rebuilding docker build environment
;;
esac
Start/restart the build container
lxc-stop -n Ubuntu
lxc-start -n Ubuntu -d
Get the IP address of the container
while [ $(lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l) -ne 1 ]
do
sleep 1s
done
IP=$(lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1)
echo Main Container IP: $IP
Now push the compile script to the container. This will fail while the container is still starting up, so I keep retrying.
echo Pushing script to IP $IP
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/dockerCompile.sh ubuntu@$IP:/home/ubuntu
while [ $? -ne 0 ]
do
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/dockerCompile.sh ubuntu@$IP:/home/ubuntu
done
With the container started, we can invoke the compile script within the container. This does the build and will take a while.
lxc-attach -n Ubuntu -- /home/ubuntu/dockerCompile.sh
Now, after the build is done, pull the results from the container
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa ubuntu@$IP:/home/ubuntu/*.txz .
Take down the container
lxc-stop -n Ubuntu
Extract the package for use
for a in $(ls *.txz)
do
echo Extracting $a
tar -xvvvf $a && rm $a
done
Done.
Script 3: dockerCompile.sh
This script is run inside the container and performs the actual build. It is derived mostly from the Dockerfile that is included in the Docker.io repository, with some tweaks.
First, we install the basic packages for compiling
cd /home/ubuntu
echo Installing basic dependencies
apt-get update && apt-get install -y aufs-tools automake btrfs-tools build-essential curl dpkg-sig git iptables libapparmor-dev libcap-dev libsqlite3-dev lxc mercurial parallel reprepro ruby1.9.1 ruby1.9.1-dev pkg-config 'libpcre*' --no-install-recommends
Then we pull the Go repository.
hg clone -u release https://code.google.com/p/go ./p/go
cd ./p/go/src
./all.bash
cd ../../../
We set up the variables for the Go environment.
export GOPATH=$(pwd)/go
export PATH=$GOPATH/bin:$PATH:$(pwd)/p/go/bin
export AUTO_GOPATH=1
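As a quick sanity check, confirm the freshly built toolchain is the one on the PATH before going further:
# Should report the release branch we checked out, not a system Go
go version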
Next, we pull from the lvm2 repository to build a version of devmapper needed for static linking.
git clone https://git.fedorahosted.org/git/lvm2.git
cd lvm2
(git checkout -q v2_02_103 && ./configure --enable-static_link && make device-mapper && make install_device-mapper && echo lvm build OK!) || (echo lvm2 build failed && exit 1)
cd ..
EDIT: see this link for extra code that should go here.
Next, get the docker source
git clone https://github.com/docker/docker $GOPATH/src/github.com/docker/docker
Now the important bit. We patch the source code to remove the 64-bit arch restriction.
for f in $(grep -r 'if runtime.GOARCH != "amd64" {' $GOPATH/src/* | cut -d: -f1)
do
echo Patching $f
sed -i 's/if runtime.GOARCH != "amd64" {/if runtime.GOARCH != "amd64" \&\& runtime.GOARCH != "386" {/g' $f
done
Finally, we build Docker. We utilise the Docker build script, which gives a warning as we are not running in a Docker environment (we can’t at this time, since we have no usable Docker binary).
cd $GOPATH/src/github.com/docker/docker/
./hack/make.sh binary
cd ../../../../../
Assuming the build succeeded, we should be able to bundle the binaries (this will be copied off by the compileDocker.sh script)
cd go/src/github.com/docker/docker/bundles
for a in $(ls)
do
echo Creating $a.txz
tar -cJvvvvf $a.txz $a
done
mv *.txz ../../../../../../
cd ../../../../../../
And that’s it: how to build Docker on a 32-bit arch.
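If you want to smoke-test the result, something along these lines should work. The bundle contains a versioned binary plus a docker symlink, and the version directory name depends on the source you checked out, so treat the path as illustrative:
# Start the daemon in the background (Docker 1.x used "docker -d" for daemon mode)
sudo ./<version>/binary/docker -d &
# Give it a moment, then check that the client and daemon can talk
sleep 5
sudo ./<version>/binary/docker version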