Blender Fox


Enabling and using Let's Encrypt SSL Certificates on Kubernetes


Kubernetes is an awesome piece of kit: you can set applications to run within the cluster, make them visible only to other apps within the cluster, and/or expose them to applications outside of the cluster.

As part of my tinkering, I wanted to set up a Docker registry to store my own images without having to make them public via Docker Hub. Doing this proved a bit more complicated than expected since, by default, it requires SSL, which normally means purchasing and installing a certificate.

Enter Let’s Encrypt, which allows you to get SSL certificates for free, and by using their API you can set them to renew regularly. Kubernetes has the kube-lego project, which handles this integration. So here I’ll go through enabling SSL for an application (in this case a Docker registry, but it can be anything).

First, let’s ignore the lego project and set up the application so that it is accessible normally. As mentioned above, this is the Docker registry.

I’m tying the registry storage to a PersistentVolumeClaim, though you can modify this to use S3 or another storage backend instead.

[code lang=text]

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: registry
  namespace: default
  labels:
    name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      name: registry
  template:
    metadata:
      creationTimestamp:
      labels:
        name: registry
    spec:
      volumes:
        - name: registry-data
          persistentVolumeClaim:
            claimName: registry-data
      containers:
        - name: registry
          image: registry:2
          resources: {}
          volumeMounts:
            - name: registry-data
              mountPath: "/var/lib/registry"
          terminationMessagePath: "/dev/termination-log"
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: Recreate

kind: Service
apiVersion: v1
metadata:
  name: registry
  namespace: default
  labels:
    name: registry
spec:
  ports:

[/code]
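The ports section of the Service still needs filling in. As a minimal sketch, assuming the registry should sit behind a LoadBalancer on port 9000 (the port used later in this post), with the registry:2 container listening on its default port 5000:

[code lang=text]

kind: Service
apiVersion: v1
metadata:
  name: registry
  namespace: default
  labels:
    name: registry
spec:
  type: LoadBalancer
  selector:
    name: registry
  ports:
    - name: registry
      port: 9000        # external port, matching registry.blenderfox.com:9000 used later
      targetPort: 5000  # default port the registry:2 image listens on
      protocol: TCP

[/code]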

Once you’ve applied this, verify your config is correct by ensuring the service has an external endpoint (use kubectl describe service registry | grep "LoadBalancer Ingress"). On AWS, this will be an ELB; on other clouds, you might get an IP. If you get an ELB, CNAME a friendly name to it. If you get an IP, create an A record for it. I’m going to use registry.blenderfox.com for this test.

Verify by doing this. Bear in mind it can take a while before DNS records update, so be patient.

host ${SERVICE_DNS}

So if I had set the service to be registry.blenderfox.com, I would do

host registry.blenderfox.com

If done correctly, the name should resolve to the ELB, which in turn resolves to the ELB's IP addresses.

Next, tag a Docker image in the format registry-host:port/imagename, for example registry.blenderfox.com:9000/my-image.
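For example, assuming you already have a local image called my-image:

docker tag my-image registry.blenderfox.com:9000/my-image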

Next, try to push it:

docker push registry.blenderfox.com:9000/my-image

It will fail because the registry isn't being served over HTTPS yet:

docker push registry.blenderfox.com:9000/my-image
The push refers to repository [registry.blenderfox.com:9000/my-image]
Get https://registry.blenderfox.com:9000/v2/: http: server gave HTTP response to HTTPS client

So let’s now fix that.

Now let’s start setting up kube-lego

Check out the code:

git clone git@github.com:jetstack/kube-lego.git

cd into the relevant folder:

cd kube-lego/examples/nginx

Start applying the code base:

[code lang=text]
kubectl apply -f lego/00-namespace.yaml
kubectl apply -f nginx/00-namespace.yaml
kubectl apply -f nginx/default-deployment.yaml
kubectl apply -f nginx/default-service.yaml
[/code]

Open up nginx/configmap.yaml and change the body-size: "64m" line to a bigger value. This is the maximum size you can upload through nginx. You'll see why this is an important change later.
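For reference, the only change is the body-size entry in the ConfigMap's data section (leave the other keys in the file as they are). Roughly, assuming the ConfigMap is named nginx in the nginx-ingress namespace as in the kube-lego example, and using the "1g" value I mention at the end of this post:

[code lang=text]

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
  namespace: nginx-ingress
data:
  body-size: "1g"

[/code]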

[code lang=text]
kubectl apply -f nginx/configmap.yaml
kubectl apply -f nginx/service.yaml
kubectl apply -f nginx/deployment.yaml
[/code]

Now, look for the external endpoint for the nginx service:

kubectl describe service nginx -n nginx-ingress | grep "LoadBalancer Ingress"

Look for the value next to LoadBalancer Ingress. On AWS, this will be the ELB address.

CNAME your domain for your service (e.g. registry.blenderfox.com in this example) to that ELB. If you’re not on AWS, this may be an IP, in which case, just create an A record instead.

Open up lego/configmap.yaml and change the email address in there to be the one you want to use to request the certs.

[code lang=text]
kubectl apply -f lego/configmap.yaml
kubectl apply -f lego/deployment.yaml
[/code]

Wait for the DNS to update before proceeding to the next step.

host registry.blenderfox.com

When the DNS is updated, finally create and add an ingress rule for your service:

[code lang=text]

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: registry
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
spec:
  tls:

[/code]
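Filled out, the ingress looks roughly like this. This is a sketch: the host and the registry-tls secret name match what appears in the kube-lego logs below, and the backend assumes the registry service on port 9000 from earlier:

[code lang=text]

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: registry
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
spec:
  tls:
    - hosts:
        - registry.blenderfox.com
      secretName: registry-tls
  rules:
    - host: registry.blenderfox.com
      http:
        paths:
          - path: /
            backend:
              serviceName: registry
              servicePort: 9000

[/code]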

Look at the logs in the nginx-ingress/nginx pod and you'll see the Let's Encrypt server come in to validate:

100.124.0.0 - [100.124.0.0] - - [19/Jan/2018:09:50:19 +0000] "GET /.well-known/acme-challenge/[REDACTED] HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" 277 0.044 100.96.0.3:8080 87 0.044 200

And look in the logs of the kube-lego/kube-lego pod and you'll see the successful authorisation and the secret being saved:

time="2018-01-19T09:49:45Z" level=info msg="requesting certificate for registry.blenderfox.com" context="ingress_tls" name=registry namespace=default 
time="2018-01-19T09:50:21Z" level=info msg="authorization successful" context=acme domain=registry.blenderfox.com 
time="2018-01-19T09:50:47Z" level=info msg="successfully got certificate: domains=[registry.blenderfox.com] url=https://acme-v01.api.letsencrypt.org/acme/cert/[REDACTED]" context=acme 
time="2018-01-19T09:50:47Z" level=info msg="Attempting to create new secret" context=secret name=registry-tls namespace=default 
time="2018-01-19T09:50:47Z" level=info msg="Secret successfully stored" context=secret name=registry-tls namespace=default 

Now let’s do a quick verify:

curl -ILv https://registry.blenderfox.com
...
* Server certificate:
*  subject: CN=registry.blenderfox.com
*  start date: Jan 19 08:50:46 2018 GMT
*  expire date: Apr 19 08:50:46 2018 GMT
*  subjectAltName: host "registry.blenderfox.com" matched cert's "registry.blenderfox.com"
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
...

That looks good.

Now let’s re-tag and try to push our image:

docker tag registry.blenderfox.com:9000/my-image registry.blenderfox.com/my-image
docker push registry.blenderfox.com/my-image

Note we are not using a port this time, as the registry is now served over standard HTTPS through the ingress.

BOOM! Success.

The tls section indicates the host to request the cert for, and the backend section indicates which backend to pass the request onto. The body-size config is at the nginx level, so if you don't change it, you can only upload a maximum of 64 MB even if the backend service (the Docker registry in this case) can support more. I have it set here to "1g" so I can upload up to 1 GB (some Docker images can be pretty large).

The Illustrated Children's Guide to Kubernetes


Kubernetes confusing you? This is a really nice short video explaining the basic concepts of Kubernetes.

www.youtube.com/watch

 

Kali Linux Pentesting Distribution -- Now Runnable in Browser

Everyone loves hearing about pentesting and ethical hacking distros these days, and it looks like it is even becoming a trend among aspiring security professionals.

Therefore, today we have some good news for those who want to try one of the best penetration testing and security auditing operating systems based on the Linux kernel, Kali Linux, the successor of the popular BackTrack, and don’t have the resources to run the Live CD or install the OS on their computers.

Network security specialist Jerry Gamblin has created a project called KaliBrowser, which, if you haven't already guessed, allows you to run the famous Kali Linux operating system in a web browser, using the Kali Linux Docker image, the Openbox window manager, and the noVNC HTML5-based VNC client.

Source: news.softpedia.com/news/you-…

Docker Builds - Update


As I relied on grive to do the sync between my local machine and Google Drive, where the builds were stored, I found out (at work, ironically, since we use some Google APIs) that Google shut off some of their APIs on 20th April. This killed some of our functionality and also killed grive's functionality, with some really cryptic messages in the console window. Nonetheless, I found that an alternative, "drive", works, although it is a hell of a lot slower.
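In case it helps anyone in the same boat, a rough sketch of the replacement workflow, assuming the "drive" in question is the Go-based drive client (paths and file names here are just illustrative):

# One-off: initialise a local folder against a Google Drive account (prompts for OAuth)
drive init ~/gdrive

# After each build, copy the artifact in and push it up
cp docker-build.txz ~/gdrive/docker-builds/
cd ~/gdrive/docker-builds && drive push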

Docker Update


The build script now builds correctly under Utopic. I've also modified the Jenkins job to include the commit version, so you can see which commit was active at the time of the build.

 

Docker: Lightweight Linux Containers for Consistent Development and Deployment | Linux Journal


An informative article from Linux Journal on Docker.

Docker: Lightweight Linux Containers for Consistent Development and Deployment | Linux Journal.

Docker.io Builds Page For 32-bit Architectures


I have started posting up my builds of Docker.io. They are unofficial, and unsupported by the community, pending official support and code release supporting 32-bit architectures.

https://drive.google.com/drive/u/0/#folders/0Bx5ME-up1Usbb2JMdVBvNGFSTUE

I have set up my system to auto-build every week and post to this shared directory. There's a readme in the shared folder.

Docker.io Build Script Update


Some changes to the Docker.io code have caused the build script to fail; this was down to the code now needing btrfs to build a storage driver. It took me a while to figure out how to fix the error message, but the script now works. You have to add this chunk of code anywhere before the main Docker build:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
mv btrfs-progs btrfs #Needed to include into Docker code
export PATH=$PATH:$(pwd)
cd btrfs
make || (echo "btrfs compile failed" && exit 1)
export C_INCLUDE_PATH=$C_INCLUDE_PATH:$(pwd) #Might not be needed
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$(pwd) #Might not be needed
echo PATH: $PATH
cd ..

How Splitting A Computer Into Multiple Realities Can Protect You From Hackers


Virtualisation, sandboxes, containers: all terms and technologies used for various reasons. Security is not always the main reason, but considering the details in this article, it is a valid point. It is simple enough to set up a container on your machine. LXC/Linux Containers, for example, don't have as much overhead as a VirtualBox or VMware virtual machine and can run almost, if not just, as fast as a native installation (I'm using LXC for my Docker.io build script). Conceptually, if you use a container and it is infected with malware, you can drop and rebuild the container, or roll back to a snapshot, much more easily than reimaging your machine.

Right now I run three different environments. One is my main Ubuntu Studio, which is not a container but my core OS. The second is my Docker.io build LXC, which I rebuild every time I compile (and I now have that tied into Jenkins, so I might put up regular builds somehow), and the final one is a VirtualBox virtual machine that runs Windows 7 so I don't have to dual boot.

How Splitting A Computer Into Multiple Realities Can Protect You From Hackers | WIRED.

Building Docker.io on Ubuntu 32-bit


Interestingly, after upgrading to Ubuntu Utopic Unicorn, the build script I made for Docker.io fails during the Go build. Something in the Utopic minimal install doesn't agree with the Go build script, so for now you will have to force the LXC container to use Trusty instead.

lxc-create -n Ubuntu -t ubuntu -- --release trusty --auth-key /home/user/.ssh/id_rsa.pub

 

 

Building Docker.io on 32-bit arch


NOTE: Automated 32-bit-enabled builds are now available. See this page for link details.

EDIT 29th September 2015: This article seems to be quite popular. Note that Docker has progressed a long way since I wrote this and it has pretty much broken the script due to things being moved around and versions being updated. You can still read this to learn the basics of LXC-based in-container compiling, and if you want to extend it, go right ahead. When I get a chance, I will try to figure out why this build keeps breaking.

Steps to compile and build a docker.io binary on a 32-bit architecture, to work around the fact the community does not support anything other than 64-bit at present, even though this restriction has been flagged up many times.

A caveat, though. While the binary you compile via these steps will work on a 32-bit architecture, the Docker images you download from the cloud may NOT work, as the majority are built for 64-bit architectures. So if you compile a Docker binary, you will have to build your own images. Not too difficult: you can use LXC or debootstrap for that. Here is a quick tutorial on that.
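As a rough sketch of the debootstrap route (not the linked tutorial itself; directory and image names here are just illustrative):

# Build a minimal 32-bit Ubuntu Trusty rootfs
sudo debootstrap --arch i386 trusty ./trusty-i386 http://archive.ubuntu.com/ubuntu/

# Import the rootfs into Docker as a base image you can build on
sudo tar -C ./trusty-i386 -c . | docker import - ubuntu-trusty-i386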

I am using an LXC container to do my build, as it helps control the packages and reduces the chance of a conflict between two versions of a package (e.g. one "dev" version and one "release" version). Plus, the LXC container is disposable: I can get rid of it each time I do a build.

I utilise several scripts: one to do a build/rebuild of my LXC container; one to start up my build-environment LXC container and take it down afterwards; and the other, the actual build script. To make it more automated, I set up my LXC container to allow a passwordless SSH login (see this link). This means I can scp into my container and copy files to and from it without having to enter my password. Useful, because the container can take a few seconds to start up. It does open up security problems, but as long as the container is only up for the duration of the build, this isn't a problem.
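If you don't already have a keypair to inject, a passphrase-less one can be generated like this (assuming the /home/user/.ssh/id_rsa path the scripts below use):

ssh-keygen -t rsa -N "" -f /home/user/.ssh/id_rsa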

EDIT: One note. If you have upgraded to Ubuntu’s Utopic Unicorn version, you may end up having to enter your GPG keyring password a few times.

EDIT 2: A recent code change has caused this build script to break. More details, and how to fix it on this post.

As with most things Linux and script-based, there are many ways to do the same thing. This is just one way.


Script 1: rebuildDockerLXC.sh

This script does the rebuild of my LXC build environment

First, I take down the container if it already exists. I named my container “Ubuntu” for simplicity.

lxc-stop -n Ubuntu

Next, destroy it.

lxc-destroy -n Ubuntu

Now create a new one using the Ubuntu template. Here, I also inject my SSH public key into the container so I can use passwordless SSH.

IMPORTANT NOTE: If you are NOT running Ubuntu Trusty, you MUST use the "--release" option. If you are running on an x86 architecture and want to compile a 32-bit version, you MUST also use the "--arch i386" option (otherwise LXC will pull the amd64 packages down instead). There is a problem with the Go build script under Utopic. Hopefully it will be fixed at some point in the future.

lxc-create -n Ubuntu -t ubuntu -- --release trusty --arch i386 --auth-key /home/user/.ssh/id_rsa.pub

Start up the container, and send it to the background

lxc-start -n Ubuntu -d

Wait till LXC reports the IP address of the container, then assign it to a variable for reuse later. We do this by waiting for LXC to report the IP then running ‘ifconfig’ within the container to get the IP as seen by the container. The ‘lxc-info’ command can return two IP addresses – the actual one, and the bridge, and it is not always obvious which one is which.

while [ `lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l` -ne 1 ]; do
  sleep 1s
done
IP=`lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1`

echo Main IP: $IP

Container is setup, take it down now.

lxc-stop -n Ubuntu


Script 2: compilerDocker.sh

This script is the wrapper script around the build process. It starts up the build container, runs the build in the container, then pulls the resulting output from the container after the build is done, extracting it to the current folder

First, check if we should rebuild the build environment. I normally do, to guarantee a clean slate each time I run the build.

echo -n "Rebuild Docker build environment (Y/N)? "
read REPLY
case "$REPLY" in
  Y|y)
    echo Rebuilding docker build environment
    ./rebuildDockerLXC.sh # If you want to rebuild the LXC container for each build
    ;;
  N|n|*)
    echo Not rebuilding docker build environment
    ;;
esac

Start/restart the build container

lxc-stop -n Ubuntu
lxc-start -n Ubuntu -d

Get the IP address of the container

while [ `lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l` -ne 1 ]; do
  sleep 1s
done
IP=`lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1`
echo Main Container IP: $IP

Now push the compile script to the container. This will fail whilst the container starts up, so I keep retrying

echo Pushing script to IP $IP
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/dockerCompile.sh ubuntu@$IP:/home/ubuntu
while [ $? -ne 0 ]
do
  scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/dockerCompile.sh ubuntu@$IP:/home/ubuntu
done

With the container started, we can invoke the compile script within the container. This does the build and will take a while.

lxc-attach -n Ubuntu -- '/home/ubuntu/dockerCompile.sh'

Now, after the build is done, pull the results from the container

scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa ubuntu@$IP:/home/ubuntu/*.txz .

Take down the container

lxc-stop -n Ubuntu

Extract the package for use

for a in `ls *.txz`
do
  echo Extracting $a
  tar -xvvvf $a && rm $a
done

Done.


Script 3: dockerCompile.sh

This script is run inside the container and performs the actual build. It is derived mostly from the Dockerfile that is included in the Docker.io repository, with some tweaks.

First, we install the basic packages for compiling

cd /home/ubuntu
echo Installing basic dependencies
apt-get update && apt-get install -y aufs-tools automake btrfs-tools build-essential curl dpkg-sig git iptables libapparmor-dev libcap-dev libsqlite3-dev lxc mercurial parallel reprepro ruby1.9.1 ruby1.9.1-dev pkg-config libpcre* --no-install-recommends

Then we pull the Go repository.

hg clone -u release https://code.google.com/p/go ./p/go
cd ./p/go/src
./all.bash
cd ../../../

We set up variables for the Go environment:

export GOPATH=$(pwd)/go
export PATH=$GOPATH/bin:$PATH:$(pwd)/p/go/bin
export AUTO_GOPATH=1

Next, we pull from the lvm2 repository to build a version of devmapper needed for static linking.

git clone https://git.fedorahosted.org/git/lvm2.git
cd lvm2
(git checkout -q v2_02_103 && ./configure --enable-static_link && make device-mapper && make install_device-mapper && echo lvm build OK!) || (echo lvm2 build failed && exit 1)
cd ..

EDIT: see this link for extra code that should go here.

Next, get the docker source

git clone https://github.com/docker/docker $GOPATH/src/github.com/docker/docker

Now the important bit. We patch the source code to remove the 64-bit arch restriction.

for f in `grep -r 'if runtime.GOARCH != "amd64" {' $GOPATH/src/* | cut -d: -f1`
do
  echo Patching $f
  sed -i 's/if runtime.GOARCH != "amd64" {/if runtime.GOARCH != "amd64" \&\& runtime.GOARCH != "386" {/g' $f
done

Finally, we build docker. We utilise the Docker build script, which gives a warning as we are not running in a docker environment (we can’t at this time, since we have no usable docker binary)

cd $GOPATH/src/github.com/docker/docker/
./hack/make.sh binary
cd ../../../../../

Assuming the build succeeded, we should be able to bundle the binaries (these will be copied off by the compilerDocker.sh script):

cd go/src/github.com/docker/docker/bundles
for a in `ls`
do
  echo Creating $a.txz
  tar -cJvvvvf $a.txz $a
  mv *.txz ../../../../../../
  cd ../../../../../../
done


And that’s it. How to build docker on a 32-bit arch.