Blender Fox


Google Domains and Terraform

#

Two major updates recently

Firstly, as suspected, I finally got the notification saying that Google Domains' registrations would be acquired by Squarespace, so all my registrations would be transferred over to them if I did nothing. Obviously, I didn't want that, so I transferred them to AWS (Route 53). My domains are now registered with AWS, but DNS is still managed by Cloud DNS. I had some weirdness when trying to migrate all my domains in bulk: the auth code wasn't accepted on one of the domains during the bulk migration, but it was accepted when I migrated that domain on its own, so... go figure.

Next update is Terraform. In case you didn't know, Hashicorp has changed the Terraform license and essentially made it no longer open source. This behaviour is similar to what Red Hat did with its RHEL offering and the backlash is just as bad.

Immediately I knew someone would fork it, and already, there's the OpenTF Initiative and this is the key part:

Our request to HashiCorp: switch Terraform back to an open source license.

We ask HashiCorp to do the right thing by the community: instead of going forward with the BUSL license change, switch Terraform back to a truly open source license, and commit to keeping it that way forever going forward. That way, instead of fracturing the community, we end up with a single, impartial, reliable home for Terraform where the whole community can unite to keep building this amazing ecosystem.

Our fallback plan: fork Terraform into a foundation.

If HashiCorp is unwilling to switch Terraform back to an open source license, we propose to fork the legacy MPL-licensed Terraform and maintain the fork in the foundation. This is similar to how Linux and Kubernetes are managed by foundations (the Linux Foundation and the Cloud Native Computing Foundation, respectively), which are run by multiple companies, ensuring the tool stays truly open source and neutral, and not at the whim of any one company.

OpenTF Initiative (https://opentf.org/)

Essentially: make Terraform open source again, or a fork of the MPL version will be made and maintained separately from HashiCorp's version. This will essentially lead to two potentially diverging versions of Terraform, one BUSL-licensed and one MPL-licensed.

I'm already looking at alternatives, and the two I'm currently looking at are Ansible and Pulumi.

Ansible I've had experience with, but there are two main issues with it:

Pulumi I've heard lots of good things about, but it's a new technology, and I don't know if it can "import" existing infrastructure.

Guess a "spike" is worth doing for it.

Updates

#

It's been quite a long time since I did any updates on this blog, so a couple of updates are in order.

House

I've now been living in the new place for just over a year. Generally everything is good; we've started putting in a lawn and are currently letting it get its roots in before we try to cut it.

Twitter

Oh, boy, what an absolute train wreck. I've tolerated Elon's presence at Twitter because most of the stuff he did wasn't far off what Jack was doing prior. But cutting off all third-party clients and forcing everyone onto the new TweetDeck (which will likely be paywalled too) has driven users away. Twitter has been losing users and revenue constantly, and it is totally unsurprising.

I've disabled my two TweetDeck profiles (a professional one, and a casual one) from Ferdium. But I doubt I will be going back any time soon.

Red Hat

Another train wreck of a situation. Red Hat's decision to first kill CentOS's stability, then make RHEL sources accessible only behind a subscription, has pissed off a lot of users, even if it's not strictly speaking against the license.

The only one left in its sights will be Fedora, which leads me onto the next update.

Manjaro & Archlinux

I've been tinkering with Manjaro more and more lately, with its rolling release schedule meaning I never need to upgrade from a major version to another major version.

Its downside, I'm finding, is that some packages, especially those in the AUR, are essentially "compile from source" packages, which do the build on your machine during the install. This can take a varying amount of time depending on the code. With my RSS reader of choice, QuiteRSS, the build takes a mind-numbing 2.5 hours, even on a high-spec machine.
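For context, an AUR install is roughly this flow (assuming the package is named quiterss in the AUR; helpers like yay wrap the same steps):

git clone https://aur.archlinux.org/quiterss.git
cd quiterss
makepkg -si   # fetches sources, compiles locally, then installs the built package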

That's where I found out about setting up your own Arch repo. I've been tinkering with that, setting it up on GCP and fronting it with a CDN. This works pretty well, though I still need to work out how to do scheduled builds to keep it up to date. It looks like I'll be switching to Manjaro at some point in the near future. Sound works fine, using lyncolnmd's work.
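The rough shape of that setup, as a sketch (the repo name and bucket are placeholders, and I'm assuming a GCS bucket sits behind the CDN):

makepkg -s                                   # build the package locally
repo-add myrepo.db.tar.gz *.pkg.tar.zst      # create/update the repo database
gsutil rsync -r . gs://my-arch-repo/x86_64/  # push packages and database to the bucket

Clients then just need a matching [myrepo] section in /etc/pacman.conf with Server pointing at the CDN URL.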

Another downside with Manjaro, however, is that its btrfs filesystem, my home directory backup, and CloneZilla don't seem to want to work well together.

Wordpress & Domains

One final update: I will likely be transferring the blenderfox.com domain out of Wordpress. While I had this registered as part of the blog, I'm finding it much harder to maintain this domain using Wordpress's very limited DNS management tools. I will likely transfer it to Google Domains, even though there's talk of them shuttering that service. My secondary choice would be AWS.

Training in Quarantine - Day 179

#

Late out today -- my phone wanted to upgrade so I attempted it (it was an upgrade from Android 9 to Android 10), but it didn't work, and I ended up having to factory reset and install from scratch. I did have some Titanium Backup backups, but they didn't seem to work a lot of the time :/

So for the most part, I just reinstalled all the apps I remember using and logged in. For most, that was fine. But I lost the MFA codes in Google Authenticator, meaning I had to remove and set up:

all over again

AWS was quick and painless after a security check to confirm I was who I said I was and they called me on the number on the account.

Wordpress was painless too -- I was already logged in, so just removed MFA and set it up again, then logged in again. Similarly with LastPass

GitLab however, is proving to be more of a pain. They no longer accept MFA removal requests for people on the Free plan. So I wonder if they will accept me going to a subscription model so I _can_ then request the MFA removal. I think it is better anyway, since I'm hitting the 400 minute CI limit pretty regularly. The 2000 minute CI limit would be better. At least until I can get my own GitLab install working.

As for the run, yes, it was a run -- well, more of a jog, anyway. Still did the 3km lap, doing it in 20 mins rather than the 30 mins it normally takes me when I walk it.

How to use S3 as an RWM/NFS-like store in Kubernetes

#

Let’s assume you have an application that runs happily on its own and is stateless. No problem. You deploy it onto Kubernetes and it works fine. You kill the pod and it respins, happily continuing where it left off.

Let’s add three replicas to the group. That also is fine, since it’s stateless.

Let’s now change that so that the application is now stateful and requires storage of where it is in between runs. So you pre-provision a disk using EBS and hook that up into the pods, and convert the deployment to a stateful set. Great, it still works fine. All three will pick up where they left off.

Now, what if we wanted to share the same state between the replicas?

For example, what if these three replicas were frontend boxes to a website? Having three different disks is a bad idea unless you can guarantee they will all have the same content. Even if you can, there’s guaranteed to be a case where one or more of the boxes will be either behind or ahead of the other boxes, and consequently have a case where one or more of the boxes will serve the wrong version of content.

There are several options for shared storage. NFS is the most logical, but it requires you to pre-provision a disk and to either have an NFS server outside the cluster or create an NFS pod within the cluster. Also, you will likely over-provision your disk here (100GB when you only need 20GB, for example).

Another alternative is EFS, which is Amazon’s NFS storage, where you mount an NFS and only pay for the amount of storage you use. However, even when creating a filesystem in a public subnet, you get a private IP which is useless if you are not DirectConnected into the VPC.

Another option is S3, but how do you use that short of using “s3 sync” repeatedly?

One answer is through the use of s3fs and sshfs

We use s3fs to mount the bucket into a pod (or pods), then we can use those mounts via sshfs as an NFS-like configuration.

The downside to this setup is the fact it will be slower than locally mounted disks.

So here’s the yaml for the s3fs pods (change values within {…} where applicable) – details at Docker Hub here: https://hub.docker.com/r/blenderfox/s3fs/

(and yes, I could convert the environment variables into secrets and reference those, and I might do a follow up article for that)
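As a rough sketch of that follow-up (the secret name here is made up), the credentials could be created once with kubectl and then referenced from the env section via secretKeyRef instead of hard-coded values:

[code]
kubectl create secret generic s3fs-credentials \
  --from-literal=AWSACCESSKEYID=... \
  --from-literal=AWSSECRETACCESSKEY=... \
  --from-literal=BUCKETUSERPASSWORD=...
[/code]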

[code]

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: s3fs
  namespace: default
  labels:
    k8s-app: s3fs
  annotations: {}
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: s3fs
  template:
    metadata:
      name: s3fs
      labels:
        k8s-app: s3fs
    spec:
      containers:
      - name: s3fs
        image: blenderfox/s3fs
        env:
        - name: S3_BUCKET
          value: {…}
        - name: S3_REGION
          value: {…}
        - name: AWSACCESSKEYID
          value: {…}
        - name: AWSSECRETACCESSKEY
          value: {…}
        - name: REMOTEKEY
          value: {…}
        - name: BUCKETUSERPASSWORD
          value: {…}
        resources: {}
        imagePullPolicy: Always
        securityContext:
          privileged: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

---
kind: Service
apiVersion: v1
metadata:
  name: s3-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: {hostnamehere}
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
  labels:
    name: s3-service
spec:
  ports:

[/code]

This will create a service and a pod

If you have external DNS enabled, the hostname will be added to Route 53.

SSH into the service and verify you can access the bucket mount

[code] ssh bucketuser@dns-name ls -l /mnt/bucket/ [/code]

(This should give you the listing of the bucket and also should have user:group set on the directory as “bucketuser”)

You should also be able to rsync into the bucket using this

[code] rsync -rvhP /source/path bucketuser@dns-name:/mnt/bucket/ [/code]

Or sshfs using a similar method

[code]

sshfs bucketuser@dns-name:/mnt/bucket/ /path/to/local/mountpoint

[/code]

Edit the connection timeout annotation if needed

Now, if you set up a pod that has three replicas and all three sshfs to the same service, you essentially have an NFS-like storage.
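Each replica would then mount the share at startup with something like this (the mount point and options are down to preference; reconnect and ServerAliveInterval just help the mount survive network blips):

[code]
sshfs -o reconnect,ServerAliveInterval=15 bucketuser@s3-service:/mnt/bucket/ /mnt/shared
[/code]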

 

How to move from single master to multi-master in an AWS kops kubernetes cluster

#

Having a master in a Kubernetes cluster is all very well and good, but if that master goes down the entire cluster cannot schedule new work. Pods will continue to run, but new ones cannot be scheduled and any pods that die will not get rescheduled.

Having multiple masters allows for more resiliency, and the remaining masters can pick up when one goes down. However, as I found out, setting up multi-master was quite problematic. The guide here only provided some help, so after trashing my own and my company's test cluster, I have expanded on the linked guide.

First, add the subnet details for the new zone into your cluster definition (CIDR and subnet id), and make sure you name it something that you can remember. For simplicity, I called mine eu-west-2c. If you have a definition for utility subnets (and you will if you use a bastion), make sure you also have a utility subnet defined for the new AZ.

[code lang=shell] kops edit cluster --state s3://bucket [/code]
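For reference, the new entries follow the same shape as the existing subnets in the spec; something like this, with placeholder CIDRs and subnet ids:

[code lang=text]
subnets:
- cidr: 10.10.64.0/19
  id: subnet-xxxxxxxx
  name: eu-west-2c
  type: Private
  zone: eu-west-2c
- cidr: 10.10.3.0/22
  id: subnet-yyyyyyyy
  name: utility-eu-west-2c
  type: Utility
  zone: eu-west-2c
[/code]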

Now, create your master instance groups. You need an odd number to enable quorum and avoid split brain (I'm not saying prevent it; there are edge cases where this could be possible even with quorum). I'm going to add eu-west-2b and eu-west-2c. AWS recently introduced the third London zone, so I'm going to use that.

[code lang=shell] kops create instancegroup master-eu-west-2b --subnet eu-west-2b --role Master [/code]

Make this one have a max/min of 1

[code lang=shell] kops create instancegroup master-eu-west-2c --subnet eu-west-2c --role Master [/code]

Make this one have a max/min of 0 (yes, zero) for now

Reference these in your cluster config

[code lang=text] kops edit cluster --state=s3://bucket [/code]

[code lang=text] etcdClusters:

Start the new master

[code lang=shell] kops update cluster --state s3://bucket --yes [/code]

Find the etcd and etcd-events pods and add them to this script. Change "clustername" to the name of your cluster, then run it. Confirm the member lists include both members (in my case, etcd-a and etcd-b).
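The pod names can be found with something like:

[code lang=shell]
kubectl --namespace=kube-system get pods | grep etcd-server
[/code]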

[code lang=shell]
ETCPOD=etcd-server-ip-10-10-10-226.eu-west-2.compute.internal
ETCEVENTSPOD=etcd-server-events-ip-10-10-10-226.eu-west-2.compute.internal
AZ=b
CLUSTER=clustername

kubectl --namespace=kube-system exec $ETCPOD -- etcdctl member add etcd-$AZ http://etcd-$AZ.internal.$CLUSTER:2380

kubectl --namespace=kube-system exec $ETCEVENTSPOD -- etcdctl --endpoint http://127.0.0.1:4002 member add etcd-events-$AZ http://etcd-events-$AZ.internal.$CLUSTER:2381

echo Member Lists
kubectl --namespace=kube-system exec $ETCPOD -- etcdctl member list

kubectl --namespace=kube-system exec $ETCEVENTSPOD -- etcdctl --endpoint http://127.0.0.1:4002 member list
[/code]

(NOTE: the cluster will break at this point due to the missing second cluster member)

Wait for the master to show as initialised. Find the instance id of the master and put it into this script. Change the AWSSWITCHES to match any switches you need to provide to the awscli. For me, I specify my profile and region
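One way to find the instance id, assuming the usual kops Name tagging for masters (the instance group name below is the one created earlier):

[code lang=shell]
aws $AWSSWITCHES ec2 describe-instances \
  --filters "Name=tag:Name,Values=master-eu-west-2b.masters.clustername" \
  --query "Reservations[].Instances[].InstanceId" --output text
[/code]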

The script will run and output the status of the instance until it shows “ok”

[code lang=shell]
AWSSWITCHES="--profile personal --region eu-west-2"
INSTANCEID=master2instanceid
while [ "$(aws $AWSSWITCHES ec2 describe-instance-status --instance-id=$INSTANCEID --output text | grep SYSTEMSTATUS | cut -f 2)" != "ok" ]
do
  sleep 5s
  aws $AWSSWITCHES ec2 describe-instance-status --instance-id=$INSTANCEID --output text | grep SYSTEMSTATUS | cut -f 2
done
aws $AWSSWITCHES ec2 describe-instance-status --instance-id=$INSTANCEID --output text | grep SYSTEMSTATUS | cut -f 2
[/code]

ssh into the new master (or via bastion if needed)

[code lang=shell]
sudo -i
systemctl stop kubelet
systemctl stop protokube
[/code]

Edit /etc/kubernetes/manifests/etcd.manifest and /etc/kubernetes/manifests/etcd-events.manifest. Change the ETCD_INITIAL_CLUSTER_STATE value from new to existing. Under ETCD_INITIAL_CLUSTER, remove the third master definition.
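To locate the lines to change, a quick grep helps:

[code lang=shell]
grep -n ETCD_INITIAL_CLUSTER /etc/kubernetes/manifests/etcd.manifest /etc/kubernetes/manifests/etcd-events.manifest
[/code]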

Stop the etcd docker containers

[code lang=shell] docker stop $(docker ps | grep "etcd" | awk '{print $1}') [/code]

Run this a few times until you get a docker error saying you need more than one container name. There are two volumes mounted under /mnt/master-vol-xxxxxxxx: one contains /var/etcd/data-events/member/ and one contains /var/etcd/data/member/, but which is which varies because of the volume id.
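A quick way to see which mount holds which directory (the exact mount prefix varies between setups):

[code lang=shell]
find /mnt -maxdepth 6 -type d -name member -path "*etcd*"
[/code]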

[code lang=shell]
rm -r /mnt/var/master-vol-xxxxxx/var/etcd/data-events/member/
rm -r /mnt/var/master-vol-xxxxxx/var/etcd/data/member/
[/code]

Now start kubelet

[code lang=shell] systemctl start kubelet [/code]

Wait until the master shows on the validate list then start protokube

[code lang=shell] systemctl start protokube [/code]

Now do the same with the third master

edit the third master ig to make it min/max 1

[code lang=shell] kops edit ig master-eu-west-2c --name=clustername --state s3://bucket [/code]

Add it to the clusters (the etcd pods should still be running)

[code lang=shell]
ETCPOD=etcd-server-ip-10-10-10-226.eu-west-2.compute.internal
ETCEVENTSPOD=etcd-server-events-ip-10-10-10-226.eu-west-2.compute.internal
AZ=c
CLUSTER=clustername

kubectl --namespace=kube-system exec $ETCPOD -- etcdctl member add etcd-$AZ http://etcd-$AZ.internal.$CLUSTER:2380
kubectl --namespace=kube-system exec $ETCEVENTSPOD -- etcdctl --endpoint http://127.0.0.1:4002 member add etcd-events-$AZ http://etcd-events-$AZ.internal.$CLUSTER:2381

echo Member Lists
kubectl --namespace=kube-system exec $ETCPOD -- etcdctl member list
kubectl --namespace=kube-system exec $ETCEVENTSPOD -- etcdctl --endpoint http://127.0.0.1:4002 member list
[/code]

Start the third master

[code lang=shell] kops update cluster --name=cluster-name --state=s3://bucket --yes [/code]

Wait for the master to show as initialised. Find the instance id of the master and put it into this script. Change the AWSSWITCHES to match any switches you need to provide to the awscli. For me, I specify my profile and region

The script will run and output the status of the instance until it shows “ok”

[code lang=shell]
AWSSWITCHES="--profile personal --region eu-west-2"
INSTANCEID=master3instanceid
while [ "$(aws $AWSSWITCHES ec2 describe-instance-status --instance-id=$INSTANCEID --output text | grep SYSTEMSTATUS | cut -f 2)" != "ok" ]
do
  sleep 5s
  aws $AWSSWITCHES ec2 describe-instance-status --instance-id=$INSTANCEID --output text | grep SYSTEMSTATUS | cut -f 2
done
aws $AWSSWITCHES ec2 describe-instance-status --instance-id=$INSTANCEID --output text | grep SYSTEMSTATUS | cut -f 2
[/code]

ssh into the new master (or via bastion if needed)

[code lang=shell]
sudo -i
systemctl stop kubelet
systemctl stop protokube
[/code]

Edit /etc/kubernetes/manifests/etcd.manifest and /etc/kubernetes/manifests/etcd-events.manifest. Change the ETCD_INITIAL_CLUSTER_STATE value from new to existing.

We DON'T need to remove the third master definition this time, since this is the third master.

Stop the etcd docker containers

[code lang=shell] docker stop $(docker ps | grep "etcd" | awk '{print $1}') [/code]

Run this a few times until you get a docker error saying you need more than one container name. There are two volumes mounted under /mnt/master-vol-xxxxxxxx: one contains /var/etcd/data-events/member/ and one contains /var/etcd/data/member/, but which is which varies because of the volume id.

[code lang=shell]
rm -r /mnt/var/master-vol-xxxxxx/var/etcd/data-events/member/
rm -r /mnt/var/master-vol-xxxxxx/var/etcd/data/member/
[/code]

Now start kubelet

[code lang=shell] systemctl start kubelet [/code]

Wait until the master shows on the validate list then start protokube

[code lang=shell] systemctl start protokube [/code]

If the cluster validates, do a full respin

[code lang=shell] kops rolling-update cluster --name clustername --state s3://bucket --force --yes [/code]

Guide to creating a Kubernetes Cluster in existing subnets & VPC on AWS with kops

#

This article is a guide on how to setup a Kubernetes cluster in AWS using kops and plugging it into your own subnets and VPC. We attempt to minimise the external IPs used in this method.

Export your AWS API keys into environment variables

[code lang=text]
export AWS_ACCESS_KEY_ID='YOUR_KEY'
export AWS_SECRET_ACCESS_KEY='YOUR_ACCESS_KEY'
export CLUSTER_NAME="my-cluster-name"
export VPC="vpc-xxxxxx"
export K8SSTATE="s3-k8sstate"
[/code]

Create the cluster (you can change some of these switches to match your requirements; I would suggest only using one worker node and one master node to begin with, and then increasing them once you have confirmed the config is good. The more worker and master nodes you have, the longer it will take to run a rolling-update.)

kops create cluster --cloud aws --name $CLUSTER_NAME --state s3://$K8SSTATE --node-count 1 --zones eu-west-1a,eu-west-1b,eu-west-1c --node-size t2.micro --master-size t2.micro --master-zones eu-west-1a,eu-west-1b,eu-west-1c --ssh-public-key ~/.ssh/id_rsa.pub --topology=private --networking=weave --associate-public-ip=false --vpc $VPC

Important note: There must be an ODD number of master zones. If you tell kops to use an even number of zones for the masters, it will complain.

If you want to use additional security groups, don’t add them yet – add them after you have confirmed the cluster is working.

Internal IPs: You must have a VPN connection into your VPC or you will not be able to ssh into the instances. The alternative is to use the bastion functionality via the --bastion flag with the create command, then doing this:

ssh -i ~/.ssh/id_rsa -o ProxyCommand='ssh -W %h:%p admin@bastion.$CLUSTER_NAME' admin@INTERNAL_MASTER_IP

However, if you use this method, you MUST then use public IP addressing on the API load balancer, as you will not be able to run kops validate otherwise.

Edit the cluster

kops edit cluster $CLUSTER_NAME --state=s3://$K8SSTATE

Make the following changes:

If you have a VPN connection into the VPC, change spec.api.loadBalancer.type to “Internal”; otherwise, leave it as “Public”. Change spec.subnets to match your private subnets. To use existing private subnets, the entries should also include the id of the subnet and match the CIDR range, e.g.:

[code lang=text] subnets:

The utility subnet is where the Bastion hosts will be placed, and these should be in a public subnet, since they will be the inbound route into the cluster from the internet.

If you need to change or add specific IAM permissions, add them under spec.additionalPolicies like this to add additional policies to the node IAM policy (apologies about the formatting. WordPress is doing something weird to it.)

[code lang=text]
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["dynamodb:*"],
        "Resource": ["*"]
      },
      {
        "Effect": "Allow",
        "Action": ["es:*"],
        "Resource": ["*"]
      }
    ]
[/code]

Edit the bastion, nodes, and master configs (MASTER_REGION is the zone where you placed the master. If you are running a multi-region master config, you’ll have to do this for each region)

kops edit ig master-{MASTER_REGION} --name=$CLUSTER_NAME --state s3://$K8SSTATE

kops edit ig nodes --name=$CLUSTER_NAME --state s3://$K8SSTATE
kops edit ig bastions --name=$CLUSTER_NAME --state s3://$K8SSTATE

Check and make any updates.

If you want a mixture of instance types (e.g. t2.mediums and r3.larges), you’ll need to separate these using new instance groups ($SUBNETS is the subnets where you want the nodes to appear – for example, you can provide a list “eu-west-2a,eu-west-2b”)

kops create ig anothernodegroup --state s3://$K8SSTATE --subnets $SUBNETS

You can later delete this with

kops delete ig anothernodegroup --state s3://$K8SSTATE

If you want to use spot prices, add this under the spec section (x.xx is the price you want to bid):

maxPrice: "x.xx"

Check the instance size and count if you want to change them (I would recommend not changing the node count just yet)

If you want to add tags to the instances (for example for billing), add something like this to the spec section:

[code lang=text]
cloudLabels:
  Billing: product-team
[/code]

If you want to run some script(s) at node startup (cloud-init), add them to spec.additionalUserData:

[code lang=text] spec: additionalUserData:
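A minimal entry, going by the kops instance group docs, looks something like this (the script name and contents are placeholders):

[code lang=text]
spec:
  additionalUserData:
  - name: myscript.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      echo "hello from cloud-init"
[/code]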

Apply the update:

kops update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes

Wait for DNS to propagate and then validate

kops validate cluster --state s3://$K8SSTATE

Once the cluster returns ready, apply the Kubernetes dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

Access the dashboard via

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

also try:

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

If the first doesn’t work

(ignore the cert error)

Username is “admin” and the password is found from your local ~/.kube/config
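For example, this will pull the relevant line out of the kubeconfig kops generated (kops writes the basic-auth credentials in there):

grep "password:" ~/.kube/config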

Add the External DNS update to allow you to give friendly names to your externally-exposed services rather than the horrible elb names.

See here: https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md

(You can apply the yaml directly onto the cluster via the dashboard. Make sure you change the filter to match your domain or subdomain. )

Note that if you use this, you’ll need to change the node IAM policy in the cluster config, as the default IAM policy won’t allow the External DNS container to modify Route 53 entries. You’ll also need to annotate your service (use kubectl annotate service $service_name key=value) with text such as:

external-dns.alpha.kubernetes.io/hostname: $SERVICE_NAME.$CLUSTERNAME

And also you might need this annotation, to make the ELB internal rather than public - otherwise Kubernetes will complain “Error creating load balancer (will retry): Failed to ensure load balancer for service namespace/service: could not find any suitable subnets for creating the ELB”

service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
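Applying both annotations to a service would look something like this (the service name is a placeholder):

kubectl annotate service my-frontend \
  "external-dns.alpha.kubernetes.io/hostname=my-frontend.$CLUSTER_NAME" \
  "service.beta.kubernetes.io/aws-load-balancer-internal=0.0.0.0/0"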

(Optional) Add the Cockpit pod to your cluster as described here

http://cockpit-project.org/guide/133/feature-kubernetes.html

It will allow you to visually see a topology of your cluster at a glance, and it also provides some management features too. For example, here’s my cluster. It contains 5 nodes (1 master, 4 workers) and is running 4 services (Kubernetes, external-dns, cockpit, and dashboard). Cockpit creates a replication controller so it knows about the changes.

(Screenshot: Cockpit showing the cluster topology described above)

Add any additional security groups by adding this under the spec section of the node/master/bastion configs, then do a rolling-update (you might need to use the --force switch); do this as soon as you can after creating the cluster and verifying it works.

[code lang=text] additionalSecurityGroups:
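The entries are just security group ids, e.g. (placeholders):

[code lang=text]
additionalSecurityGroups:
- sg-xxxxxxxx
- sg-yyyyyyyy
[/code]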

If the cluster breaks after this (i.e. the nodes haven’t shown up on the master), reboot the server (don’t terminate; use the reboot option from the AWS console) and see if that helps. If it still doesn’t show up, there’s something wrong with the security groups attached – i.e. they’re conflicting somehow with the Kubernetes access. Remove those groups and then do another rolling-update, but use both the --force and --cloudonly switches to force a “dirty” respin.

If the cluster comes up good, then you can change the node counts on the configs and apply the update.

Note that if you change the node count and then apply the update, the cluster attempts to make the update without rolling-update. For example, if you change the node count from 1 to 3, the cluster attempts to bring up the 2 additional nodes.

Other things you can look at:

Kompose - which converts a docker-compose configuration into Kubernetes resources

Finally, have fun!

Tinkering with Kubernetes and AWS

#

 

This article just goes through my tinkering with Kubernetes on AWS.

Create a new S3 bucket to store the state of your Kubernetes clusters

aws s3 mb s3://k8sstate --region eu-west-2

Verify

aws s3 ls

Create a Route 53 hosted zone. I’m creating k8stest.blenderfox.uk

aws route53 create-hosted-zone --name k8stest.blenderfox.uk \
--caller-reference $(uuidgen)

dig the nameservers for the hosted zone you created

dig NS k8stest.blenderfox.uk

If your internet connection already has DNS set up to the hosted zone, you’ll see the nameservers in the output:

;; QUESTION SECTION:
;k8stest.blenderfox.uk.     IN  NS

;; ANSWER SECTION:
k8stest.blenderfox.uk. 172800 IN NS ns-1353.awsdns-41.org.
k8stest.blenderfox.uk. 172800 IN NS ns-1816.awsdns-35.co.uk.
k8stest.blenderfox.uk. 172800 IN NS ns-404.awsdns-50.com.
k8stest.blenderfox.uk. 172800 IN NS ns-644.awsdns-16.net.

 

Export your AWS credentials as environment variables (I’ve found Kubernetes doesn’t reliably pick up the credentials from the aws cli, especially if you have multiple profiles).

export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'

You can also add it to a bash script and source it.

Create the cluster using kops. Note that the master zones must have an odd count (1, 3, etc.). Since eu-west-2 only has two zones (a and b), I have to use only one zone for the master here.

kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
--state s3://k8sstate --node-count 3 --zones eu-west-2a,eu-west-2b \
--node-size m4.large --master-size m4.large \
--master-zones eu-west-2a \
--ssh-public-key ~/.ssh/id_rsa.pub \
--master-volume-size 50 \
--node-volume-size 50 \
--topology private

You can also add the --kubernetes-version switch to specifically pick a Kubernetes version to include in the cluster. Recognised versions are shown at

https://github.com/kubernetes/kops/blob/master/channels/stable

TL;DR: Bands are:

Each with their own Debian image.

 

Assuming the create completed successfully, update the cluster so it pushes the update out to your cloud

kops update cluster cluster.k8stest.blenderfox.uk --yes \
--state s3://k8sstate

While the cluster starts up, all the new records will be set up with placeholder IPs.

(Screenshot: Route 53 record sets created with placeholder IPs)

NOTE: Kubernetes needs an externally resolvable DNS name. Basically, you need to be able to create a hosted zone on a domain you control. You can’t use Kops on a domain you can’t control, even if you hack the resolver config.

The cluster can take a while to come up. Use

kops validate cluster --state s3://k8sstate

To check the cluster state.

When ready, you’ll see something like this:

Using cluster from kubectl context: cluster.k8stest.blenderfox.co.uk

Validating cluster cluster.k8stest.blenderfox.co.uk

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-eu-west-2a       Master  m4.large        1       1       eu-west-2a
nodes                   Node    m4.large        3       3       eu-west-2a,eu-west-2b

NODE STATUS
NAME                                            ROLE    READY
ip-172-20-35-51.eu-west-2.compute.internal      master  True
ip-172-20-49-10.eu-west-2.compute.internal  node    True
ip-172-20-72-100.eu-west-2.compute.internal     node    True
ip-172-20-91-236.eu-west-2.compute.internal     node    True

Your cluster cluster.k8stest.blenderfox.co.uk is ready

Now you can start interacting with the cluster. First thing is to deploy the Kubernetes dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
 serviceaccount "kubernetes-dashboard" created
 role "kubernetes-dashboard-minimal" created
 rolebinding "kubernetes-dashboard-minimal" created
 deployment "kubernetes-dashboard" created
 service "kubernetes-dashboard" created

Now setup a proxy to the api

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Next, access

http://localhost:8001/ui

To get the dashboard

Now let’s create a job to deploy on to the cluster.

Uploading (and Resuming) videos to YouTube via GoogleCL and AWS - Updated 15th Dec

#

If you are like me, and have a slow and/or unreliable internet connection, trying to upload any reasonably-sized video to YouTube can be a nightmare, forcing you to have your computer on for hours on end, and then finding your upload failed because your connection dropped, and then having to start all over again.

Well, one way to have resume protection is to use a middle point, such as Amazon Web Services or a similar cloud-based provider, and then upload from there to YouTube. Since the connection between the cloud system and YouTube is likely to be more reliable (and faster) than your connection, the upload from the cloud system to YouTube will be faster.

The first step is to setup and start an instance on AWS. I am using the Ubuntu image.

SSH into the instance and install supporting packages via apt-get or aptitude. Make sure you change the IP (xx.xx.xx.xx) and the key (AWS_Ireland.pem) to match your files.

$ ssh -o IdentityFile=/home/user/.ssh/AWS_Ireland.pem ubuntu@xx.xx.xx.xx

$ sudo apt-get install python-gdata python-support rsync

Then download the latest googlecl deb file from https://code.google.com/p/googlecl/downloads/list

$ wget https://googlecl.googlecode.com/files/googlecl_0.9.14-2_all.deb

Now, install the deb file using dpkg

$ sudo dpkg -i googlecl_0.9.14-2_all.deb

We can now start using the Google services, but first we need to authenticate. This is normally done via a browser, but since we are in a terminal, we skip this.

$ google youtube list
Please specify user: [enter your email address here]

You will see a text-version of the login page. Don’t bother entering your values. Just press ‘q’ to quit and confirm exit. Then, you’ll see in the terminal window, a url along the lines of this:

Please log in and/or grant access via your browser at:
https://www.google.com/accounts/OAuthAuthorizeToken?oauth_token={hidden}&hd=default

Go to that url and sign in. Then, come back to the console and press enter. If all goes well, you should see your video uploads in the console window.

Now, to upload a video to the AWS instance. You can use rsync for that, and the command to enter into your local terminal is as follows (change the key file to match yours and the IP address field to match your instance’s IP):

rsync -vhPz --compress-level=9 -e "ssh -o IdentityFile=/home/user/.ssh/AWS_Ireland.pem" source ubuntu@{EC2_IP}:.

This uploads the video called “source” onto your EC2 instance at the home folder of the default user (if you have another location in your instance, use that here). Rsync will allow you to resume uploads via the P switch. When the rsync command successfully completes, you can then SSH back onto the instance, and use the “google youtube post” command to upload your video onto YouTube.

NOTE: On some large files, rsync breaks on resuming with the error message “broken pipe”, if this happens to you, see this page (specifically, Q3).

Once your video is uploaded to your EC2 instance, you can then upload that video to YouTube by using this:

$ google youtube post path/to/video