Blender Fox


Google Domains and Terraform

#

Two major updates recently

Firstly, as suspected, I finally got the notification saying that Google Domains' registrations would be acquired by Squarespace, so all my registrations would be transferred over to them if I did nothing. Obviously, I didn't want that, so I transferred them to AWS (Route 53). My domains are now registered with AWS, but still DNS-managed by Cloud DNS. I had some weirdness when trying to migrate all my domains in bulk: the auth code was not accepted on one of the domains during the bulk migration, but it was accepted when I migrated that one domain on its own, so... go figure.

Next update is Terraform. In case you didn't know, HashiCorp has changed Terraform's license, essentially making it no longer open source. This move is similar to what Red Hat did with its RHEL offering, and the backlash is just as bad.

Immediately I knew someone would fork it, and already there's the OpenTF Initiative. This is the key part:

Our request to HashiCorp: switch Terraform back to an open source license.

We ask HashiCorp to do the right thing by the community: instead of going forward with the BUSL license change, switch Terraform back to a truly open source license, and commit to keeping it that way forever going forward. That way, instead of fracturing the community, we end up with a single, impartial, reliable home for Terraform where the whole community can unite to keep building this amazing ecosystem.

Our fallback plan: fork Terraform into a foundation.

If HashiCorp is unwilling to switch Terraform back to an open source license, we propose to fork the legacy MPL-licensed Terraform and maintain the fork in the foundation. This is similar to how Linux and Kubernetes are managed by foundations (the Linux Foundation and the Cloud Native Computing Foundation, respectively), which are run by multiple companies, ensuring the tool stays truly open source and neutral, and not at the whim of any one company.

OpenTF Initiative (https://opentf.org/)

Essentially: make Terraform open source again, or a fork of the MPL version will be made and maintained separately from HashiCorp's version. This would lead to two potentially diverging versions of Terraform, one BUSL-licensed and one MPL-licensed.

I'm already looking at alternatives, and the two currently on my list are Ansible and Pulumi.

Ansible I've had experience with, but there are two main issues with it.

Pulumi I've heard lots of good things about, but it's a newer technology, and I don't know if it can import existing infrastructure.

Guess a "spike" is worth doing for it.

Updates

#

It's been quite a long time since I did any updates on this blog, so a couple of updates are in order.

House

I've now been living in the new place for just over a year. Generally everything is good; we've started putting in a lawn and are currently letting it get its roots in before we try to cut it.

Twitter

Oh boy, what an absolute train wreck. I tolerated Elon's presence at Twitter because most of what he did wasn't far off what Jack was doing before him. But cutting off all third-party clients and forcing everyone onto the new TweetDeck (which will likely be paywalled too) has driven users away. Twitter has been steadily losing users and revenue, and it is totally unsurprising.

I've disabled my two TweetDeck profiles (a professional one and a casual one) in Ferdium, and I doubt I will be going back any time soon.

Red Hat

Another train wreck of a situation. Red Hat's decision to first kill CentOS's stability, then to put RHEL sources behind a subscription, has pissed off a lot of users, even if it's not, strictly speaking, against the license.

The only one left in its sights is Fedora, which leads me onto the next update.

Manjaro & Archlinux

I've been tinkering with Manjaro more and more lately; its rolling release schedule means I never need to upgrade from one major version to another.

The downside I'm finding is that some packages, especially those in the AUR, are essentially compile-from-source packages, which build on your machine during the install. This can take a varying amount of time depending on the code. My RSS reader of choice, QuiteRSS, takes a mind-numbing 2.5 hours to build, even on a high-spec machine.

That's where I found out about setting up your own Arch repo. I've been tinkering with that, setting it up on GCP, fronted by a CDN (a sketch of the build-and-publish step is below). This works pretty well, but I still need to work out how to do scheduled builds to keep it up to date. It looks like I'll be switching to Manjaro at some point in the near future. Sound works fine, using lyncolnmd's work.
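For anyone curious, the core of a custom repo is just pacman's repo-add tool plus somewhere to host the files. A rough sketch, assuming a hypothetical bucket my-arch-repo sitting behind the CDN:

# Build the AUR package once on a build box (quiterss as the example)
git clone https://aur.archlinux.org/quiterss.git
cd quiterss
makepkg -s

# Add the built package to the repo database, then sync both to the bucket
repo-add my-arch-repo.db.tar.gz quiterss-*.pkg.tar.zst
gsutil -m rsync -r . gs://my-arch-repo/x86_64

# Clients then point pacman.conf at the CDN:
# [my-arch-repo]
# Server = https://cdn.example.com/x86_64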

Another downside with Manjaro, however, is that its btrfs filesystem, my home directory backup, and CloneZilla don't seem to want to work well together.

Wordpress & Domains

One final update: I will likely be transferring the blenderfox.com domain out of WordPress. While I had it registered as part of the blog, I'm finding it much harder to maintain this domain using WordPress's very limited DNS management tools. I will likely transfer it to Google Domains, even though there's talk of them shuttering that service. My fallback would be AWS.

Poor Article Wording

#

This article popped up in my Google News feed, and the first thing that caught my eye was that the headline says they were sacked, while the subheading says "affected due to layoffs".

Laying someone off is not the same as sacking them. As someone at work explained: sacking someone is when you keep the role but do away with the person; a layoff is when you do away with the role but (sometimes) keep the person. These workers were laid off, since they also got severance pay -- something you'd never get if you were fired.

In fact, this article may get the writer and the publication in trouble. Being fired has a far more negative impact on your career than being laid off, so any of those 140 workers trying to get jobs elsewhere may find it harder if a prospective employer does a basic internet search and finds this article implying (incorrectly) that they were sacked.

[www.indiatoday.in/technolog...](https://www.indiatoday.in/technology/news/story/github-sacks-entire-india-engineering-team-around-140-of-them-2352591-2023-03-28)

Binding GCP Accounts to GKE Service Accounts with Terraform

#

Kubernetes uses service accounts to control who can access what within the cluster, but once a request leaves the cluster, it uses a default account. In GKE this is normally the default Google Compute Engine account, which has extremely high-level access and could result in a lot of damage if your cluster is compromised.

In this article, I will be setting up a GKE cluster using a minimal-access service account and enabling Workload Identity.

(This post is now also available on Medium)

Workload Identity enables you to bind a Kubernetes service account to a service account in GCP. You can then control the GCP permissions of that account from within GCP -- no RBAC/ABAC messing about needed (although you will still need to mess with RBAC/ABAC if you want to restrict that service account within Kubernetes, but that's a separate article).

What you will need for this tutorial: a GCP project, a service account key for the Terraform provider (the credentials.json below), and the terraform, gcloud and kubectl command-line tools.


We will start by setting up our Terraform provider:

variable "project" {
  default = "REPLACE_ME"
}

variable "region" {
  default = "europe-west2"
}

variable "zone" {
  default = "europe-west2-a"
}

provider "google" {
  project     = var.project
  region      = var.region
  zone        = var.zone
  credentials = file("credentials.json")
}

We define three variables here that we can reuse later -- the project, region and zone. These variables you can adjust to match your own setup.

The provider block (provider "google" {..}) references those variables and also refers to the credentials.json file that will be used to create the resources in your account.
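If you don't have a credentials file yet, you can generate a key for the service account Terraform will run as. A sketch, assuming a hypothetical terraform service account that already has permission to create clusters:

gcloud iam service-accounts keys create credentials.json \
  --iam-account terraform@REPLACE_ME.iam.gserviceaccount.com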

Next we create the service account that we will attach to the cluster's nodes. This service account should have minimal permissions, as it is the default account used by requests leaving the cluster. Only give it what is essential; you will notice I do not bind it to any roles.

resource "google_service_account" "cluster-serviceaccount" {
  account_id   = "cluster-serviceaccount"
  display_name = "Service Account For Terraform To Make GKE Cluster"
}

Now let's define our cluster and node pool. This block can vary wildly depending on your circumstances, but I'll use a Kubernetes 1.16 single-zone cluster with an e2-medium node size and autoscaling enabled.

variable "cluster_version" {
  default = "1.16"
}

resource "google_container_cluster" "cluster" {
  name               = "tutorial"
  location           = var.zone
  min_master_version = var.cluster_version
  project            = var.project

  lifecycle {
    ignore_changes = [
      # Ignore changes to min-master-version as that gets changed
      # after deployment to minimum precise version Google has
      min_master_version,
    ]
  }

  # We can't create a cluster with no node pool defined, but
  # we want to only use separately managed node pools. So we
  # create the smallest possible default node pool and
  # immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  workload_identity_config {
    identity_namespace = "${var.project}.svc.id.goog"
  }
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "tutorial-cluster-node-pool"
  location   = var.zone
  project    = var.project
  cluster    = google_container_cluster.cluster.name
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  version = var.cluster_version

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope and
    # permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }

}

Let's go through a few things in the above blocks:

variable "cluster_version" {
  default = "1.16"
}

Defines a variable we will use to describe the version of Kubernetes we want on the master and worker nodes.

resource "google_container_cluster" "cluster" {
  ...
  min_master_version = var.cluster_version
  ...
  lifecycle {
    ignore_changes = [
      min_master_version,
    ]
  }
  ...
}

The ignore_changes block here tells Terraform not to pay attention to changes in the min_master_version field. Even though we declared 1.16 as the version, GKE will put a specific 1.16 variant onto the cluster. For example, the cluster might be created with version 1.16.9-gke.999, which differs from what Terraform expects, so if you ran Terraform again, it would attempt to change the cluster version from 1.16.9-gke.999 back to 1.16, cycling through the nodes again.

Next block to discuss:

resource "google_container_cluster" "cluster" {
  ...
  remove_default_node_pool = true
  initial_node_count       = 1
  ...
}

A GKE cluster must be created with a node pool. However, it is easier to manage node pools separately, so this block tells Terraform to delete the default node pool once the cluster is created.

Final part of this block:

resource "google_container_cluster" "cluster" {
  ...
  workload_identity_config {
    identity_namespace = "${var.project}.svc.id.goog"
  }
}

This enables Workload Identity; the namespace must be of the format {project}.svc.id.goog.
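Once the cluster is up, you can confirm Workload Identity took effect by describing the cluster (depending on your gcloud version, the field is reported as identityNamespace or workloadPool):

gcloud container clusters describe tutorial --zone {cluster-zone} \
  --format "value(workloadIdentityConfig)"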

Now let's move onto the Node Pool definition:

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "tutorial-cluster-node-pool"
  location   = var.zone
  project    = var.project
  cluster    = google_container_cluster.cluster.name
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  version = var.cluster_version

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope and 
    # permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }

}

Let's go over a couple of blocks again:

resource "google_container_node_pool" "primary_preemptible_nodes" {
  ...
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }
 ...
}

This sets up autoscaling with a starting node count of 1 and a maximum of 5. Unlike with EKS, you don't need to deploy the autoscaler into the cluster; enabling this lets GKE scale nodes up or down natively. The downside is you don't see as many messages as with the self-deployed version, so it's sometimes harder to debug why a pod isn't triggering a scale-up.
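The decisions still surface as Kubernetes events, though, so when a pod is stuck you can usually dig the reason out of the pod's events or the cluster-wide event stream:

# A stuck pod's events usually show FailedScheduling or a scale-up decision
kubectl describe pod {pending-pod}

# Or scan cluster-wide events for autoscaler activity
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp | grep -iE "scale|schedul"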

resource "google_container_node_pool" "primary_preemptible_nodes" {
  ...
  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope and
    # permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
  ...
}

Here we define the node config: a pool of pre-emptible nodes of type e2-medium. We tie the nodes to the service account defined earlier and give them only the cloud-platform scope.

The metadata block is needed because if you don't specify it, GKE applies disable-legacy-endpoints = "true" anyway; the node pool would then be respun each time you run Terraform, as Terraform thinks it needs to apply the updated config to the pool.

resource "google_container_node_pool" "primary_preemptible_nodes" {
  ...
  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }
}

Similar to the version field on the master node, we tell Terraform to ignore some fields if they have changed.

version we ignore for the same reason as on the master node -- the version deployed will be slightly different from the one we declared.
initial_node_count we ignore because if the node pool has scaled up, not ignoring it would cause Terraform to scale the nodes back down to the initial_node_count value, sending pods into Pending.
node_count we ignore for much the same reason -- it will likely never stay at the initial value on a production system due to scale-up.


With the basic skeleton set up, we can run Terraform to build the stack. Yes, we haven't actually bound anything to service accounts yet, but that will come later.

Let's Terraform the infrastructure:

terraform init
terraform plan -out tfplan
terraform apply tfplan

Creation of the cluster can take between 5 and 15 minutes.
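You can poll the provisioning status while you wait; the cluster is ready once this reports RUNNING:

gcloud container clusters describe tutorial --zone {cluster-zone} --format "value(status)"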

Next, we need to fetch credentials so we can connect to the cluster:

gcloud beta container clusters get-credentials tutorial --zone {cluster-zone} --project {project}

or

gcloud beta container clusters get-credentials tutorial --region {cluster-region} --project {project}

You should get some output like this:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for tutorial.

Now you should be able to run kubectl get pods --all-namespaces to see what's in your cluster (it should contain nothing other than the default system pods):

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE
kube-system   event-exporter-gke-666b7ffbf7-lw79x                              2/2     Running   0          13m
kube-system   fluentd-gke-scaler-54796dcbf7-6xnsg                              1/1     Running   0          13m
kube-system   fluentd-gke-skmsq                                                2/2     Running   0          4m23s
kube-system   gke-metadata-server-fsxj6                                        1/1     Running   0          9m29s
kube-system   gke-metrics-agent-pfdbp                                          1/1     Running   0          9m29s
kube-system   kube-dns-66d6b7c877-wk2nt                                        4/4     Running   0          13m
kube-system   kube-dns-autoscaler-645f7d66cf-spz4c                             1/1     Running   0          13m
kube-system   kube-proxy-gke-tutorial-tutorial-cluster-node-po-b531f1ee-8kpj   1/1     Running   0          9m29s
kube-system   l7-default-backend-678889f899-q6gsl                              1/1     Running   0          13m
kube-system   metrics-server-v0.3.6-64655c969-2lz6v                            2/2     Running   3          13m
kube-system   netd-7xttc                                                       1/1     Running   0          9m29s
kube-system   prometheus-to-sd-w9cwr                                           1/1     Running   0          9m29s
kube-system   stackdriver-metadata-agent-cluster-level-566c4b7cf9-7wmhr        2/2     Running   0          4m23s

Now let's do our first test. We'll use gsutil to list the GS buckets in our project.

kubectl run --rm -it test --image gcr.io/cloud-builders/gsutil ls

This will run a docker image with gsutil in it and then remove the container when the command finishes.

The output should be something like this:

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
AccessDeniedException: 403 Caller does not have storage.buckets.list access to the Google Cloud project.
Session ended, resume using 'kubectl attach test-68bb69b777-5nzgt -c test -i -t' command when the pod is running
deployment.apps "test" deleted

As you can see, we get a 403. The default service account doesn't have permissions to access Google Storage.

Now let's set up the service account we will use for binding:

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin"
  # role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

resource "google_project_iam_member" "workload_identity-role" {
  role   = "roles/iam.workloadIdentityUser"
  member = "serviceAccount:${var.project}.svc.id.goog[workload-identity-test/workload-identity-user]"
}

Again, let's go through the blocks:

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

This block defines the service account in GCP that we will be binding to.

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin"
  # role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

This block assigns the Storage Admin role to the service account we just created. Think of it as adding the account to a group rather than assigning a permission or role directly to the account.
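For reference, the Terraform is doing the same thing as this gcloud command (with your project substituted):

gcloud projects add-iam-policy-binding {project} \
  --member "serviceAccount:workload-identity-tutorial@{project}.iam.gserviceaccount.com" \
  --role "roles/storage.admin"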

resource "google_project_iam_member" "workload_identity-role" {
  role   = "roles/iam.workloadIdentityUser"
  member = "serviceAccount:${var.project}.svc.id.goog[workload-identity-test/workload-identity-user]"
}

This block adds the service account as a Workload Identity User. The member field is a bit confusing: the ${var.project}.svc.id.goog part indicates a Workload Identity namespace, and the part in [...] is the name of the Kubernetes service account we want to allow to bind to it. This membership, plus an annotation on the Kubernetes service account (described below), allows the service account in Kubernetes to essentially impersonate the service account in GCP, as you will see in the example.
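Again for reference, the equivalent gcloud command is below; it can be handy for checking that the member string is formed correctly:

gcloud iam service-accounts add-iam-policy-binding \
  workload-identity-tutorial@{project}.iam.gserviceaccount.com \
  --role "roles/iam.workloadIdentityUser" \
  --member "serviceAccount:{project}.svc.id.goog[workload-identity-test/workload-identity-user]"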


With the service accounts set up in Terraform, let's run the Terraform apply steps again:

terraform plan -out tfplan
terraform apply tfplan

Assuming it didn't error, we now have one half of the binding: the GCP service account. We now need to create the service account inside Kubernetes.

You'll recall the piece of data in the [...]: workload-identity-test/workload-identity-user. This is the service account we need to create. Below is the YAML for creating the namespace and the service account. Save this into the file workload-identity-test.yaml:

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: workload-identity-test
spec: {}
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    iam.gke.io/gcp-service-account: workload-identity-tutorial@{project}.iam.gserviceaccount.com
  name: workload-identity-user
  namespace: workload-identity-test

The important thing to note is the annotation on the service account:

  annotations:
    iam.gke.io/gcp-service-account: workload-identity-tutorial@{project}.iam.gserviceaccount.com

The annotation references the service account created by the Terraform block:

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

So the Kubernetes service account references the GCP service account, and the GCP service account references the Kubernetes service account.

Important Note: If you do not do the double referencing -- for example, if you forget the annotation on the Kubernetes service account, or forget to put the referenced Kubernetes service account in the Workload Identity member block -- then GKE will fall back to the default service account specified on the node.
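Once both sides exist, you can check the two halves of the binding from the command line -- the annotation on the Kubernetes side, and the Workload Identity membership on the GCP side:

# Kubernetes side: the annotation should point at the GCP service account
kubectl get serviceaccount workload-identity-user -n workload-identity-test -o yaml

# GCP side: the policy should list the Kubernetes service account as a workloadIdentityUser
gcloud iam service-accounts get-iam-policy workload-identity-tutorial@{project}.iam.gserviceaccount.com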


Now it's time to put it to the test. If everything is set up correctly, run the previous test again:

kubectl run --rm -it test --image gcr.io/cloud-builders/gsutil ls

You should still get a 403, but with a different error message:

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
AccessDeniedException: 403 Primary: /namespaces/{project}.svc.id.goog with additional claims does not have storage.buckets.list access to the Google Cloud project.
Session ended, resume using 'kubectl attach test-68bb69b777-8ltvc -c test -i -t' command when the pod is running
deployment.apps "test" deleted

Let's now create the namespace and service account from the file we saved earlier:

$ kubectl apply -f workload-identity-test.yaml
namespace/workload-identity-test created
serviceaccount/workload-identity-user created


So now let's run the test again, but this time we specify the service account and also the namespace, since a service account is tied to the namespace it resides in -- in this case, workload-identity-test.

kubectl run -n workload-identity-test --rm --serviceaccount=workload-identity-user -it test --image gcr.io/cloud-builders/gsutil ls

The output will show the buckets you have:

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
gs://backups/
gs://snapshots/
Session ended, resume using 'kubectl attach test-66754998f-sp79b -c test -i -t' command when the pod is running
deployment.apps "test" deleted

NOTE: If you're running a later version of Kubernetes or kubectl, you may get the following error:

Flag --serviceaccount has been deprecated, has no effect and will be removed in 1.24.

In that case, you need to use the --overrides switch instead:

kubectl run -it --rm -n workload-identity-test test --overrides='{ "apiVersion": "v1", "spec": { "serviceAccountName": "workload-identity-user" } }' --image gcr.io/cloud-builders/gsutil ls

Let's now change the permissions on the GCP service account to prove it's the one being used. Change this block:

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin"
  # role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

And change the active role like so:

resource "google_project_iam_member" "storage-role" {
  # role = "roles/storage.admin"        ## <-- comment this out
  role   = "roles/storage.objectAdmin"  ## <-- uncomment this
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

Run the terraform actions again:

terraform plan -out tfplan
terraform apply tfplan

Allow a few minutes for the change to propagate, then run the test again:

kubectl run -n workload-identity-test --rm --serviceaccount=workload-identity-user -it test --image gcr.io/cloud-builders/gsutil ls

(See earlier if you get an error regarding the serviceaccount switch)

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
AccessDeniedException: 403 workload-identity-tutorial@{project}.iam.gserviceaccount.com does not have storage.buckets.list access to the Google Cloud project.
Session ended, resume using 'kubectl attach test-66754998f-k5dm5 -c test -i -t' command when the pod is running
deployment.apps "test" deleted

And there you have it: the service account workload-identity-test/workload-identity-user in the cluster is bound to the service account workload-identity-tutorial@{project}.iam.gserviceaccount.com in GCP, carrying the same permissions.

If the service account in Kubernetes is compromised in some way, you just need to revoke the permissions on the GCP service account, and the Kubernetes service account no longer has any permissions to do anything in GCP.
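Revoking is just a matter of removing the IAM binding -- delete the google_project_iam_member resource from the Terraform and re-apply, or do it directly (a sketch, using the role currently active):

gcloud projects remove-iam-policy-binding {project} \
  --member "serviceAccount:workload-identity-tutorial@{project}.iam.gserviceaccount.com" \
  --role "roles/storage.objectAdmin"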


For reference, here's the complete Terraform used for this tutorial. Replace what you need -- you can move things around and separate it into multiple Terraform files if you wish; I kept it in one file for simplicity.

variable "project" {
  default = "REPLACE_ME"
}

variable "region" {
  default = "europe-west2"
}

variable "zone" {
  default = "europe-west2-a"
}

provider "google" {
  project     = var.project
  region      = var.region
  zone        = var.zone
  credentials = file("credentials.json")
}

resource "google_service_account" "cluster-serviceaccount" {
  account_id   = "cluster-serviceaccount"
  display_name = "Service Account For Terraform To Make GKE Cluster"
}

variable "cluster_version" {
  default = "1.16"
}

resource "google_container_cluster" "cluster" {
  name               = "tutorial"
  location           = var.zone
  min_master_version = var.cluster_version
  project            = var.project

  lifecycle {
    ignore_changes = [
      # Ignore changes to min-master-version as that gets changed
      # after deployment to minimum precise version Google has
      min_master_version,
    ]
  }

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  workload_identity_config {
    identity_namespace = "${var.project}.svc.id.goog"
  }
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "tutorial-cluster-node-pool"
  location   = var.zone
  project    = var.project
  cluster    = google_container_cluster.cluster.name
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  version = var.cluster_version

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope
    # and permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }

  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }

}

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin" 
  # role   = "roles/storage.objectAdmin" 
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

resource "google_project_iam_member" "workload_identity-role" {
  role   = "roles/iam.workloadIdentityUser"
  member = "serviceAccount:${var.project}.svc.id.goog[workload-identity-test/workload-identity-user]"
}

Training in Quarantine - Day 179

#

Late out today -- my phone wanted to upgrade, so I attempted it (an upgrade from Android 9 to Android 10). It didn't work, and I ended up having to factory reset and install from scratch. I did have some Titanium Backup backups, but they didn't seem to work a lot of the time :/

So for the most part, I just reinstalled all the apps I remember using and logged in. For most, that was fine. But I lost the MFA codes in Google Authenticator, meaning I had to remove MFA and set it up all over again on several accounts.

AWS was quick and painless after a security check to confirm I was who I said I was; they called me on the number on the account.

WordPress was painless too -- I was already logged in, so I just removed MFA, set it up again, and logged in again. Similarly with LastPass.

GitLab, however, is proving to be more of a pain. They no longer accept MFA removal requests from people on the Free plan. So I wonder if they will accept me moving to a subscription so I _can_ then request the MFA removal. I think it's the better option anyway, since I'm hitting the 400-minute CI limit pretty regularly; the 2,000-minute CI limit would be better. At least until I can get my own GitLab install working.

As for the run: yes, it was a run -- well, more of a jog, anyway. I still did the 3km lap, doing it in 20 minutes rather than the 30 minutes it normally takes me when I walk it.

Google ACE Certification

#

I completely forgot to post this, but yes, I did pass the certification :)

Google Cloud Certification

#

My next certification is complete: Google Cloud Certified Associate Cloud Engineer.

I did the exam on Saturday, but I'm only posting now because the results require verification by Google.

I passed (provisionally, at least)...

Google to buy FitBit

#

Well, this is a bit of a surprise, but not too much of one.

Regular readers will know I'm a FitBit user and have been for a few years.

You'll also know that I'm an Android user, and Linux user.

So I just read this article about Google acquiring FitBit. I'm curious to see how they incorporate FitBit, and whether they improve it or destroy it...

[www.engadget.com/2019/11/0...](https://www.engadget.com/2019/11/01/google-buys-fitbit/)

And a Press Release has just been found in my inbox:

[investor.fitbit.com/press/pre...](https://investor.fitbit.com/press/press-releases/press-release-details/2019/Fitbit-to-Be-Acquired-by-Google/default.aspx)

Google's Catch-22

#

It's not often I post about problems at Google, but this is actually an interesting situation.

https://arstechnica.com/?p=1518703

Google had an outage the other week, and it knocked out several websites -- GitLab and Shopify among them -- and impacted others. G Suite, Gmail and YouTube were affected, but not down.

There are some interesting lines in this article:

for an entire afternoon and into the night, the Internet was stuck in a crippling ouroboros: Google couldn’t fix its cloud, because Google’s cloud was broken.

Google says its engineers were aware of the problem within two minutes. And yet! “Debugging the problem was significantly hampered by failure of tools competing over use of the now-congested network,”

In short: Google Cloud broke due to congestion, and Google couldn't fix the problem because their tools required the very network that was now congested.

LPIC-1 Expiry and Google+

#

Well, it was bound to happen eventually: I got an email saying my LPIC-1 certification is going to expire in 9 months, and I never got to finish LPIC-2.

Well, maybe I’ll redo it after I got my Kubernetes certifications

Finally, while writing this post, I noticed that WordPress is removing Google+ support because Google is shutting the service down. A pity, really, since I did like Google+, and while it didn't take off, a lot of its features went into general use, like Hangouts.

Google/HTC deal is official, Google to acquire part of HTC’s smartphone team | Ars Technica

#

So Google has officially hooked up with HTC. How do I feel about this? Rather ambivalent, actually. On one side, Google is already using HTC's phones (Pixel), but HTC rolled over to Apple a long time ago without standing up to their bullying tactics -- something that made me ditch HTC in favour of Samsung (and, tbh, I'm glad I did). However, this link-up means Google gets a dedicated team to work on its phones. Whether this means they'll become a decent competitor to the other devices remains to be seen.

Source: Google/HTC deal is official, Google to acquire part of HTC’s smartphone team | Ars Technica

Google Chrome : Hatsune Miku (初音ミク) - YouTube

#
This is an old advert by Google Chrome featuring Hatsune Miku, the Vocaloid virtual singer, following the same line as Honda's series of adverts featuring her. It sells the idea that Miku is a virtual singer, but you can be anything else -- musician, producer, composer, etc.

(I only found out about it through some Tweets.)

Hatsune Miku, Virtual Singer
Everyone, Creator

www.youtube.com/watch

Remains of the Day: Google Chrome Drops Support for Windows XP

#

With the roll out of a new version of Chrome, Google is saying goodbye to a few old favorites. Maybe “favorites” isn’t the right word. The browser will no longer be updated to support Windows XP, Vista, and OS X 10.8. Goodnight, sweet Vista, and your glossy menus.

RIP XP. Finally. Although I say finally, I'm pretty sure some places are still using XP because they can't/won't recode applications to support anything newer.

Source: Remains of the Day: Google Chrome Drops Support for Windows XP

Why Spatial Audio Is Such a Big Deal for Google Cardboard | WIRED

#

As someone who’s tried Google Cardboard, I am pretty keen to see this happening.

FOR ALL THERE is to love about Google Cardboard, it’s still a bare-bones experience. It’s barely even VR, really. But the cheap, smartphone-based viewer offers the VR-curious an easy window into 360-degree video. Pricier headsets like the Oculus Rift and Sony PlayStation VR, designed for gaming, deliver more than a cool stereoscopic viewing experience. In addition to the immersive visual eye-candy—users can explore virtual spaces, peek around corners, and, using hand-held controllers, interact with digital objects—these sophisticated VR rigs offer truly lifelike audio.

When a monster sneaks up on your left in a VR game, you’ll hear its slobbering tongue lashing at your left ear. When a shot comes at you from above you and slightly to the right, you know exactly where to return fire. When The Edge tears through the opening riff of “Mysterious Ways,” it reverberates around the stadium.

The high-priced headsets from Oculus, Sony, and HTC pack the processing punch to deliver “spatially oriented” audio experiences that consider direction, distance, and environmental factors when creating the soundtrack. Cardboard, powered by your smartphone, can’t do that yet. But earlier this week, the Cardboard team made it a little easier to give the audio in these apps a bit more realism. As this blog post from Google Cardboard product manager Nathan Martz outlines, the Cardboard software development kit for Android and the Unity game engine now support spatial audio. This platform update paves the way for Google Cardboard to become something more than a gateway drug to true VR.

Source: Why Spatial Audio Is Such a Big Deal for Google Cardboard | WIRED

RPHM App

#

Just got an email from the RPHM organisers. There’s an app in the Google Play store that allows you to track the runners (including me) during the race, and also provides split times as they run over the timing mats. For runners, it also shows you where you are on the course at that time. Although I wouldn’t recommend it – you need to make sure you’re not running into the barriers :D

Google Play: royalparksfoundationemail.org.uk/1L97-3PL8…

Google Drive FUSE

#

Found this tool, which exposes your Google Drive as a FUSE mount, allowing you to copy to and from your Drive as if it were a directory on your desktop. It is slow, though.

github.com/astrada/g…

Installation instructions for Debian (Wheezy):

xmodulo.com/mount-goo…

Google Music Syncing

#

Found this link whilst hunting for suitable methods for syncing my Google Music collection. Seems to work (so far), but it hasn’t finished syncing my music yet.

www.sjnewbs.net

Docker Builds - Update

#

As I relied on grive to sync between my local machine and Google Drive, where the builds were stored, I found out (at work, ironically, since we use some Google APIs) that Google shut off some of their APIs on 20th April. This killed some of our functionality, and also killed grive's functionality, with some really cryptic messages in the console window. Nonetheless, I found that an alternative, "drive", works, although a hell of a lot slower.

Google Music

#

A while ago, I posted about my frustration with Google Music when it refused to download my tracks. Well, I did some digging around and found that someone had written an API exposing the Google Music backend. The link is at

github.com/simon-web…

and has spawned several other tools including

github.com/thebigmun…

Which is a set of scripts designed to sync, upload and/or download from the Google Music collection.

I wrote my own Python script using the Gmusic API to bulk-delete albums from my Gmusic account (it's easy to bulk upload using Google's MusicManager, but not to bulk delete), and the gmusicapi-scripts enable me to download most of my tracks.

Google is voted best place to work in the UK

#

It might be the best place to work, but getting into Google might be another thing. And staying there is a whole new ball game.

Google Music

#


I am getting pretty peeved with Google recently. I have a huge amount of music in my Google Music library -- so much, in fact, that I hit Google's track limit for uploads. Now I'm trying to download my purchased music back to my machine, but their MusicManager is winding me up no end. It downloads for a while, then stops, thinking it has finished, with several tracks not downloaded. I restart the download, and it goes on a bit more, then stops again.

Google suggested a few things, eventually ending up blaming my ISP. But there isn't much alternative for me. Other than my current ISP, I can only use my corporate connection, which requires a proxy -- something Google do not support in MusicManager -- or Tor, which also doesn't work properly. They suggested using the Google Music app, but that only works (if it ever does) on a single album.

I even tried using AWS and Google Cloud, but the app ties itself to the MAC address and refuses to identify my machine (which is a virtual machine). I also tried using an LXC container, and that worked for a bit longer, but also died. So now I'm trying a Docker image. Slightly different concept, but let's see if it works.

If that doesn’t work, I’m going to try using TAILS.

EDIT: The Docker image didn't work. So anything with a "true" virtual environment, such as AWS, GC, and Docker, doesn't seem to work at all (VirtualBox will probably be in this list too), while anything else (LXC, e.g.) will work for a bit, but fail later.

Game (And Software) Release Stages

#

This is a Google Developers episode detailing how best to avoid releasing broken or defective games on Google Play. It covers three release stages: Alpha, Beta, and Canary/Staged Rollout. Alpha and Beta almost everyone is aware of, but Canary/Staged Rollout was a new term for me, and it makes a lot of sense.

If you develop and/or release software, this is probably worth a watch.

www.youtube.com/watch

What if Google was a guy?

#

Google overtakes Apple as world's most valuable brand - Telegraph

#

 

Google has been named ahead of Apple as the world's most valuable brand, according to a new survey.

Google overtakes Apple as world’s most valuable brand - Telegraph.

Seeds

#

www.youtube.com/watch