Blender Fox


Training in Quarantine - Day 302

Did my walk earlier today -- I've developed a headache (one of the common symptoms of the jab) and it's been making it difficult to concentrate.

The walk helped, but my jabbed arm is aching again.

Still have a bit of nausea, but no diarrhoea or vomiting.

Plus it looks like it's about to rain...

Covid Jab

I have now finally had my first covid jab. It took a while queuing outside the centre before I could get in and go through all the checks and stuff.

I got the AZ vaccine.

I finally got my jab in my right arm (I'm left handed) and it immediately started to sting. I did not need to wait 15 minutes as that was only required if you were driving. I took the bus in.

Training in Quarantine - Day 301

Moderately cold today, but comfortable to walk in. Did some shopping for more dairy milk on the way home.

Also, I finally got the notification for my covid jab, and I managed to book in for tomorrow and the second jab in July.

In other news, my Curve card had some fraudulent transactions against it, which I have been talking to them about today. The transaction has now been reversed, and also on Monzo (which the transaction went through), so we are now even, but to be safe, I blocked both cards and requested new ones.

Training in Quarantine - Day 300

Colder today than yesterday, so had my hands in my pockets most of the way, but when I walked in the sun, it felt good.

Walked past the house whose fence caught fire recently and they've now put up a new fence. The charred remains of the trees that got burnt have been trimmed down to the trunks.

Training in Quarantine - Day 299

Today was cooler than yesterday, but still okay to walk in my running jacket. Did some shopping for dairy milk during my walk and headed back afterwards.

Training in Quarantine - Day 298

Another nice warm day, but roadworks are bloody EVERYWHERE. The past week there have been roadworks up the road from me, and this weekend there are more roadworks, jamming up the area around my local shops.

Training in Quarantine - Day 297

A nice warm day. Did my walk with my running jacket on, but possibly didn't need it.

Training in Quarantine - Day 296

More comfortable walk today -- it wasn't as hot, so back to my normal coat. Sneezing a little, but it eased up as the morning progressed.

Training in Quarantine - Day 294 & 295

Yesterday was a nice hot day, but my hay fever came on with a vengeance, and I mean vengeance -- my eyes were streaming, my nose would not stop running and I was sneezing almost the entire day, well into the evening. I spent a large part of the day with tissues jammed up my nose in an attempt at relief.

Surprisingly, going out for a walk actually felt better, and there wasn't much sneezing then.

Today, the weather was hot again, so my dad and I decided to vac out the car, then I did my walk.

Training in Quarantine - Day 293

Things have started changing now. Pubs have reopened (although you can only eat outside), barbers are now open again, and some cafes and shops have decided to reopen.

Walking today, it seemed like life has started to return to the area.

Training in Quarantine - Day 292

Late posting of this as I forgot to post it earlier today.

Did my afternoon walk. The skies were cloudy and there was scattered rain earlier, but it was safe to walk. Temperature was cold -- hat and gloves weather.

Training in Quarantine - Day 291

I've missed a few days of logging, but here's a rough update.

Temperature has dropped back down to woolly hat and gloves weather, and I've been checking that torched fence a few times. Saw a few alcohol bottle fragments in the hedge. Don't know if they were there before the fire, so arson is a possibility, which would make sense since a hedge doesn't just suddenly catch fire...

Training in Quarantine - Day 290

Did two sets of walking today. Yesterday afternoon, there was a large cloud of smoke nearby and I figured something was on fire. It was too much smoke for a BBQ and too much for a bonfire. My parents, who were out walking after me, said they saw something but couldn't get near, as everyone was being turned away from a house, presumably where the problem was.

This morning, as I went out shopping, I stopped by where they told me the fire was, but couldn't find any torched houses.

When I came back after shopping and the light was better, I decided to walk the area to try to find it. After 45 minutes of walking, I still couldn't find any torched houses.

My dad went out walking after me, and did find it. Turns out the house wasn't the one that caught fire, but the hedge outside it.

Looks like someone must have tossed a lit cigarette into the hedge and it went up in flames. The fence inside the hedge also caught light, but luckily it didn't spread too far. It was contained enough that I totally missed it even walking past it.

Training in Quarantine - Day 289

Cooler day in contrast to the other day, but hay fever seems to be back with a vengeance and I've been having itchy eyes, sneezing and a runny nose all day...

Training in Quarantine - Day 288

Another sunny day, but the sun went in towards the afternoon and it's a bit cooler now. Did my walk and it wasn't too hot, so nice and comfortable.

Training in Quarantine - Day 287

Today was super hot, touching 24°C -- no real need for any coats, hats or gloves, but I had to do shopping for milk and needed a pocket for my face mask, so wore my running jacket. Even without zipping it up, the walk to Sainsbury's and back, stopping at the Co-op for the shopping, was pretty sweltering.

Training in Quarantine - Day 286

Temperature touched 20°C today, which makes a change. Lockdown eased slightly today, so there are more people back on the streets, some shops are open again and, unsurprisingly, social distancing is still not being followed.

Training in Quarantine - Day 285

Been raining most of the day, but managed to get my walk done at the usual time in between the showers.

Training in Quarantine - Day 284

The past few days have been getting warmer -- no need for hat and gloves, but I have gone back to using my running gloves to protect against the wind.

Training in Quarantine - Day 283

Weekend walk seemed fairly quiet, and there are some grey clouds, so it looks like it's going to rain again soon...

Binding GCP Accounts to GKE Service Accounts with Terraform

Kubernetes uses Service Accounts to control who can access what within the cluster, but once a request leaves the cluster, it will use a default account. In GKE this is normally the default Compute Engine service account, which has extremely high-level access and could result in a lot of damage if your cluster is compromised.

In this article, I will be setting up a GKE cluster using a minimal-access service account and enabling Workload Identity.

(This post is now also available on Medium)

Workload Identity enables you to bind a Kubernetes service account to a service account in GCP. You can then control that account's GCP permissions from within GCP -- no RBAC/ABAC messing about needed (although you will still need to mess with RBAC/ABAC if you want to restrict that service account within Kubernetes, but that's a separate article).

What you will need for this tutorial: a GCP project you can create resources in, a service account key saved as credentials.json with enough access for Terraform to create GKE clusters, service accounts and IAM bindings, Terraform itself, and the gcloud and kubectl command-line tools.

We will start by setting up our Terraform provider:

variable "project" {
  default = "REPLACE_ME"
}

variable "region" {
  default = "europe-west2"
}

variable "zone" {
  default = "europe-west2-a"
}

provider "google" {
  project     = var.project
  region      = var.region
  zone        = var.zone
  credentials = file("credentials.json")
}

We define three variables here that we can reuse later -- the project, region and zone. You can adjust these to match your own setup.

The provider block (provider "google" {..}) references those variables and also refers to the credentials.json file that will be used to create the resources in your project.
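
If you don't already have a credentials.json, one way to produce it is to create a dedicated Terraform service account and download a key for it. This is only a sketch -- the terraform-tutorial name and the roles granted below are assumptions, not part of the original setup, so adjust them to whatever your project actually needs (for example, you may also need roles/iam.serviceAccountUser so Terraform can attach the node service account to the node pool):

# Hypothetical service account for Terraform to run as (name is an assumption)
gcloud iam service-accounts create terraform-tutorial \
  --display-name "Terraform tutorial account" --project REPLACE_ME

# Grant it enough access to manage GKE clusters, service accounts and IAM bindings
gcloud projects add-iam-policy-binding REPLACE_ME \
  --member "serviceAccount:terraform-tutorial@REPLACE_ME.iam.gserviceaccount.com" \
  --role roles/container.admin
gcloud projects add-iam-policy-binding REPLACE_ME \
  --member "serviceAccount:terraform-tutorial@REPLACE_ME.iam.gserviceaccount.com" \
  --role roles/iam.serviceAccountAdmin
gcloud projects add-iam-policy-binding REPLACE_ME \
  --member "serviceAccount:terraform-tutorial@REPLACE_ME.iam.gserviceaccount.com" \
  --role roles/resourcemanager.projectIamAdmin

# Download the key file that the provider block reads
gcloud iam service-accounts keys create credentials.json \
  --iam-account terraform-tutorial@REPLACE_ME.iam.gserviceaccount.com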

Next we create the service account that we will bind to the cluster. This service account should contain minimal permissions as it will be the default account used by requests leaving the cluster. Only give it what is essential. You will notice I do not bind it to any roles.

resource "google_service_account" "cluster-serviceaccount" {
  account_id   = "cluster-serviceaccount"
  display_name = "Service Account For Terraform To Make GKE Cluster"
}

Now let's define our cluster and node pool. This block can vary wildly depending on your circumstances, but I'll use a Kubernetes 1.16 single-zone cluster with an e2-medium node size and autoscaling enabled.

variable "cluster_version" {
  default = "1.16"
}

resource "google_container_cluster" "cluster" {
  name               = "tutorial"
  location           = var.zone
  min_master_version = var.cluster_version
  project            = var.project

  lifecycle {
    ignore_changes = [
      # Ignore changes to min-master-version as that gets changed
      # after deployment to minimum precise version Google has
      min_master_version,
    ]
  }

  # We can't create a cluster with no node pool defined, but
  # we want to only use separately managed node pools. So we
  # create the smallest possible default node pool and
  # immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  workload_identity_config {
    identity_namespace = "${var.project}.svc.id.goog"
  }
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "tutorial-cluster-node-pool"
  location   = var.zone
  project    = var.project
  cluster    = google_container_cluster.cluster.name
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  version = var.cluster_version

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope and
    # permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }

}

Let's go through a few things on the above block:

variable "cluster_version" {
  default = "1.16"
}

Defines a variable we will use to describe the version of Kubernetes we want on the master and worker nodes.

resource "google_container_cluster" "cluster" {
  ...
  min_master_version = var.cluster_version
  ...
  lifecycle {
    ignore_changes = [
      min_master_version,
    ]
  }
  ...
}

The ignore_changes block here tells terraform not to pay attention to changes in the min_master_version field. This is because even though we declare we wanted 1.16 as the version, GKE will put a Kubernetes variant of 1.16 onto the cluster. For example, the cluster might be created with version 1.16.9-gke.999 -- which is different to what Terraform expects, so if you were to run Terraform again, it would attempt to change the cluster version from 1.16.9-gke.999 to 1.16, cycling through the nodes again.
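
If you want to see which exact patch versions GKE will resolve 1.16 to in your zone, you can ask gcloud. This check is an aside, not part of the original tutorial:

gcloud container get-server-config --zone europe-west2-a --project REPLACE_ME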

Next block to discuss:

resource "google_container_cluster" "cluster" {
  ...
  remove_default_node_pool = true
  initial_node_count       = 1
  ...
}

A GKE cluster must be created with a node pool. However, it is easier to manage node pools separately, so this block tells Terraform to delete the default node pool once the cluster is created.

Final part of this block:

resource "google_container_cluster" "cluster" {
  ...
  workload_identity_config {
    identity_namespace = "${var.project}.svc.id.goog"
  }
}

This enables Workload Identity; the namespace must be of the format {project}.svc.id.goog.
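
Once the cluster exists, you can confirm Workload Identity actually got enabled by describing the cluster -- a quick check, assuming the cluster name and zone used in this tutorial:

gcloud container clusters describe tutorial --zone europe-west2-a \
  --project REPLACE_ME --format "yaml(workloadIdentityConfig)"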

Now let's move onto the Node Pool definition:

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "tutorial-cluster-node-pool"
  location   = var.zone
  project    = var.project
  cluster    = google_container_cluster.cluster.name
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  version = var.cluster_version

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope and 
    # permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }

}

Let's go over a couple of blocks again:

resource "google_container_node_pool" "primary_preemptible_nodes" {
  ...
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }
 ...
}

This sets up autoscaling with a starting node count of 1 and a maximum node count of 5. Unlike with EKS, you don't need to deploy the autoscaler into the cluster; enabling this natively allows GKE to scale nodes up or down. The downside is you don't see as many messages compared to the deployed version, so it's sometimes harder to debug why a pod isn't triggering a scale-up.
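
If you do need to dig into why a pod isn't triggering a scale-up, the scheduler and autoscaler events are usually the best clue. A couple of commands that can help -- the pod name and namespace below are just placeholders:

# Events for a specific Pending pod -- look for scheduling and scale-up messages
kubectl describe pod my-pending-pod -n my-namespace

# Recent events across the cluster, oldest first
kubectl get events --all-namespaces --sort-by=.lastTimestamp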

resource "google_container_node_pool" "primary_preemptible_nodes" {
  ...
  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope and
    # permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
  ...
}

Here we define the node config. We've set this up as a pool of preemptible nodes of type e2-medium. We tie the nodes to the service account defined earlier and give them only the cloud-platform scope.

The metadata block is needed because if you don't specify it, the value disable-legacy-endpoints = "true" is applied anyway, and the resulting drift will cause the node pool to be respun each time you run Terraform, as it thinks it needs to apply the updated config to the pool.

resource "google_container_node_pool" "primary_preemptible_nodes" {
  ...
  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }
}

Similar to the version field on the master node, we tell Terraform to ignore some fields if they have changed.

version we ignore for the same reason as on the master node -- the version deployed will be slightly different to the one we declared.
initial_node_count we ignore because if the node pool has scaled up, not ignoring this will cause Terraform to attempt to scale the nodes back down to the initial_node_count value, sending pods into Pending.
node_count we ignore for pretty much the same reason -- it will likely never be at the initial value on a production system due to scale-up.


With the basic skeleton set up, we can run Terraform to build the stack. We haven't actually bound anything to service accounts yet, but that will come later.

Let's Terraform the infrastructure:

terraform init
terraform plan -out tfplan
terraform apply tfplan

Creation of the cluster can take between 5 and 15 minutes.
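
If you want to keep an eye on it from another terminal while it builds, something like this should show the cluster status (project placeholder assumed):

gcloud container clusters list --project REPLACE_ME \
  --filter "name=tutorial" --format "table(name,status,currentNodeCount)"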

Next, we need to get credentials and connect to the cluster:

gcloud beta container clusters get-credentials tutorial --zone {cluster-zone} --project {project}

or

gcloud beta container clusters get-credentials tutorial --region {cluster-region} --project {project}

You should get some output like this:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for tutorial.

Now you should be able to run kubectl get pods --all-namespaces to see what's in your cluster (there should be nothing other than the default system pods):

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE
kube-system   event-exporter-gke-666b7ffbf7-lw79x                              2/2     Running   0          13m
kube-system   fluentd-gke-scaler-54796dcbf7-6xnsg                              1/1     Running   0          13m
kube-system   fluentd-gke-skmsq                                                2/2     Running   0          4m23s
kube-system   gke-metadata-server-fsxj6                                        1/1     Running   0          9m29s
kube-system   gke-metrics-agent-pfdbp                                          1/1     Running   0          9m29s
kube-system   kube-dns-66d6b7c877-wk2nt                                        4/4     Running   0          13m
kube-system   kube-dns-autoscaler-645f7d66cf-spz4c                             1/1     Running   0          13m
kube-system   kube-proxy-gke-tutorial-tutorial-cluster-node-po-b531f1ee-8kpj   1/1     Running   0          9m29s
kube-system   l7-default-backend-678889f899-q6gsl                              1/1     Running   0          13m
kube-system   metrics-server-v0.3.6-64655c969-2lz6v                            2/2     Running   3          13m
kube-system   netd-7xttc                                                       1/1     Running   0          9m29s
kube-system   prometheus-to-sd-w9cwr                                           1/1     Running   0          9m29s
kube-system   stackdriver-metadata-agent-cluster-level-566c4b7cf9-7wmhr        2/2     Running   0          4m23s

Now let's do our first test. We'll use gsutil to list the GCS buckets in our project.

kubectl run --rm -it test --image gcr.io/cloud-builders/gsutil ls

This will run a Docker image with gsutil in it and then remove the container when the command finishes.

The output should be something like this:

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
AccessDeniedException: 403 Caller does not have storage.buckets.list access to the Google Cloud project.
Session ended, resume using 'kubectl attach test-68bb69b777-5nzgt -c test -i -t' command when the pod is running
deployment.apps "test" deleted

As you can see, we get a 403. The default service account doesn't have permissions to access Google Storage.

Now let's set up the service account we will use for the binding:

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin"
  # role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

resource "google_project_iam_member" "workload_identity-role" {
  role   = "roles/iam.workloadIdentityUser"
  member = "serviceAccount:${var.project}.svc.id.goog[workload-identity-test/workload-identity-user]"
}

Again, let's go through the blocks:

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

This block defines the service account in GCP that we will be binding to.

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin"
  # role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

This block assigns the Storage Admin role to the service account we just created. Think of it as adding the account to a group rather than assigning a permission directly to the account.
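
If you want to confirm the role actually landed on the account, you can inspect the project's IAM policy -- a sketch, with the project as a placeholder:

gcloud projects get-iam-policy REPLACE_ME \
  --flatten "bindings[].members" \
  --filter "bindings.members:workload-identity-tutorial" \
  --format "table(bindings.role)"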

resource "google_project_iam_member" "workload_identity-role" {
  role   = "roles/iam.workloadIdentityUser"
  member = "serviceAccount:${var.project}.svc.id.goog[workload-identity-test/workload-identity-user]"
}

This block adds the service account as a Workload Identity User. You'll notice that the member field is a bit confusing. The ${var.project}.svc.id.goog part indicates that it is a Workload Identity namespace, and the part in [...] is the name of the Kubernetes service account we want to allow to be bound to it. This membership, together with an annotation on the Kubernetes service account (described below), will allow the service account in Kubernetes to essentially impersonate the service account in GCP, as you will see in the example.
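
For reference, a roughly equivalent one-off gcloud command would look like this (project placeholder assumed). Note that this form grants the role on the service account itself rather than project-wide as the Terraform block above does:

gcloud iam service-accounts add-iam-policy-binding \
  workload-identity-tutorial@REPLACE_ME.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:REPLACE_ME.svc.id.goog[workload-identity-test/workload-identity-user]"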


With the service account set up in Terraform, let's run the Terraform apply steps again:

terraform plan -out tfplan
terraform apply tfplan

Assuming it didn't error, we now have one half of the binding -- the GCP service account. We now need to create the service account inside Kubernetes.

You'll recall that we had a piece of data in the [...]: workload-identity-test/workload-identity-user. This is the namespace and service account we need to create. Below is the YAML for creating both. Save this into the file workload-identity-test.yaml:

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: workload-identity-test
spec: {}
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    iam.gke.io/gcp-service-account: workload-identity-tutorial@{project}.iam.gserviceaccount.com
  name: workload-identity-user
  namespace: workload-identity-test

The important thing to note is the annotation on the service account:

  annotations:
    iam.gke.io/gcp-service-account: workload-identity-tutorial@{project}.iam.gserviceaccount.com

The annotation references the service account created by the Terraform block:

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

So the Kubernetes service account references the GCP service account, and the GCP service account references the Kubernetes service account.
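
If you prefer, the annotation can also be applied imperatively once the namespace and service account exist -- a sketch, with the project as a placeholder:

kubectl annotate serviceaccount workload-identity-user \
  -n workload-identity-test \
  iam.gke.io/gcp-service-account=workload-identity-tutorial@REPLACE_ME.iam.gserviceaccount.com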

Important Note: If you do not do the double referencing -- for example, if you forget to include the annotation on the service account or forget to put the referenced Kubernetes service account in the Workload Identity member block, then GKE will use the default service account specified on the node.
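
Later on, once the Kubernetes service account exists, one way to check which GCP identity a pod has actually ended up with is to open a shell in a pod that uses it and ask gcloud -- a sketch, not part of the original tutorial:

kubectl run -n workload-identity-test --rm -it identity-check \
  --serviceaccount=workload-identity-user --image google/cloud-sdk:slim

# then, inside the pod:
gcloud auth list
# With the binding in place this should list the workload-identity-tutorial
# account; without it, you should see the node's default service account.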


Now it's time to put it to the test. If everything is set up correctly, run the previous test again:

kubectl run --rm -it test --image gcr.io/cloud-builders/gsutil ls

You should still get a 403, but with a different error message:

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
AccessDeniedException: 403 Primary: /namespaces/{project}.svc.id.goog with additional claims does not have storage.buckets.list access to the Google Cloud project.
Session ended, resume using 'kubectl attach test-68bb69b777-8ltvc -c test -i -t' command when the pod is running
deployment.apps "test" deleted

Let's now create the namespace and service account. You should have created this file in the earlier step:

$ kubectl apply -f workload-identity-test.yaml
namespace/workload-identity-test created
serviceaccount/workload-identity-user created


So now let's run the test again, but this time we specify the service account and also the namespace, as a service account is tied to the namespace it resides in -- in this case, the namespace of our service account is workload-identity-test.

kubectl run -n workload-identity-test --rm --serviceaccount=workload-identity-user -it test --image gcr.io/cloud-builders/gsutil ls

The output will show the buckets you have:

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
gs://backups/
gs://snapshots/
Session ended, resume using 'kubectl attach test-66754998f-sp79b -c test -i -t' command when the pod is running
deployment.apps "test" deleted

NOTE: If you're running a later version of Kubernetes or kubectl, you may get the following error:

Flag --serviceaccount has been deprecated, has no effect and will be removed in 1.24.

In that case, you need to instead use the --overrides switch:

kubectl run -it --rm -n workload-identity-test test --overrides='{ "apiVersion": "v1", "spec": { "serviceAccount": "workload-identity-user" } }' --image gcr.io/cloud-builders/gsutil ls

Let's now change the permissions on the GCP service account to prove it's the one being used. Change this block:

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin"
  # role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

And change the active role like so:

resource "google_project_iam_member" "storage-role" {
  # role = "roles/storage.admin"        ## <-- comment this out
  role   = "roles/storage.objectAdmin"  ## <-- uncomment this
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

Run the terraform actions again:

terraform plan -out tfplan
terraform apply tfplan

Allow a few minutes for the change to propagate then run the test again:

kubectl run -n workload-identity-test --rm --serviceaccount=workload-identity-user -it test --image gcr.io/cloud-builders/gsutil ls

(See earlier if you get an error regarding the serviceaccount switch)

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
AccessDeniedException: 403 workload-identity-tutorial@{project}.iam.gserviceaccount.com does not have storage.buckets.list access to the Google Cloud project.
Session ended, resume using 'kubectl attach test-66754998f-k5dm5 -c test -i -t' command when the pod is running
deployment.apps "test" deleted

And there you have it: the service account in the cluster, workload-identity-test/workload-identity-user, is bound to the service account workload-identity-tutorial@{project}.iam.gserviceaccount.com on GCP, and carries the permissions granted to that GCP account.

If the service account on Kubernetes is compromised in some way, you just need to revoke the permissions on the GCP service account and the Kubernetes service account no longer has any permissions to do anything in GCP.
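
For example, removing the storage role binding (shown here with gcloud rather than Terraform, and with the project as a placeholder) would be enough to cut off the bucket access straight away:

gcloud projects remove-iam-policy-binding REPLACE_ME \
  --member "serviceAccount:workload-identity-tutorial@REPLACE_ME.iam.gserviceaccount.com" \
  --role roles/storage.objectAdmin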


For reference, here's the full Terraform used for this tutorial. Replace what you need -- you can move things around and split it into separate Terraform files if you wish; I kept it in one file for simplicity.

variable "project" {
  default = "REPLACE_ME"
}

variable "region" {
  default = "europe-west2"
}

variable "zone" {
  default = "europe-west2-a"
}

provider "google" {
  project     = var.project
  region      = var.region
  zone        = var.zone
  credentials = file("credentials.json")
}

resource "google_service_account" "cluster-serviceaccount" {
  account_id   = "cluster-serviceaccount"
  display_name = "Service Account For Terraform To Make GKE Cluster"
}

variable "cluster_version" {
  default = "1.16"
}

resource "google_container_cluster" "cluster" {
  name               = "tutorial"
  location           = var.zone
  min_master_version = var.cluster_version
  project            = var.project

  lifecycle {
    ignore_changes = [
      # Ignore changes to min-master-version as that gets changed
      # after deployment to minimum precise version Google has
      min_master_version,
    ]
  }

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  workload_identity_config {
    identity_namespace = "${var.project}.svc.id.goog"
  }
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "tutorial-cluster-node-pool"
  location   = var.zone
  project    = var.project
  cluster    = google_container_cluster.cluster.name
  node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  version = var.cluster_version

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope
    # and permissions granted via IAM Roles.
    service_account = google_service_account.cluster-serviceaccount.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }

  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, initial_node_count and version
      # otherwise node pool will be recreated if there is drift between what 
      # terraform expects and what it sees
      initial_node_count,
      node_count,
      version
    ]
  }

}

resource "google_service_account" "workload-identity-user-sa" {
  account_id   = "workload-identity-tutorial"
  display_name = "Service Account For Workload Identity"
}

resource "google_project_iam_member" "storage-role" {
  role = "roles/storage.admin" 
  # role   = "roles/storage.objectAdmin" 
  member = "serviceAccount:${google_service_account.workload-identity-user-sa.email}"
}

resource "google_project_iam_member" "workload_identity-role" {
  role   = "roles/iam.workloadIdentityUser"
  member = "serviceAccount:${var.project}.svc.id.goog[workload-identity-test/workload-identity-user]"
}

Training in Quarantine - Day 282

Did my walk at the normal time, and there seem to be more schoolkids out today. Perhaps more of them are back at school now?

Training in Quarantine - Day 281

Nice walking weather, but my phone decided to reboot on me while I was walking, messing up my train of thought and pace.

Training in Quarantine - Day 280

Late walk as I had a meeting that lasted an hour past my normal end time, and it looks like it's about to pour with rain...

Training in Quarantine - Day 279

Okay walking day, no need for hat and gloves, but still rainy.