First, we create a service account for the cluster. The service account needs to have the following roles:
| Roles |
| --- |
| roles/storage.admin |
| roles/logging.logWriter |
| roles/monitoring.metricWriter |
| roles/container.admin |
Run the following commands to create the service account:
```shell
GKE_SA=gitpod-gke
GKE_SA_EMAIL="${GKE_SA}@${PROJECT_NAME}.iam.gserviceaccount.com"

gcloud iam service-accounts create "${GKE_SA}" --display-name="${GKE_SA}"

gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
  --member="serviceAccount:${GKE_SA_EMAIL}" --role="roles/storage.admin"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
  --member="serviceAccount:${GKE_SA_EMAIL}" --role="roles/logging.logWriter"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
  --member="serviceAccount:${GKE_SA_EMAIL}" --role="roles/monitoring.metricWriter"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
  --member="serviceAccount:${GKE_SA_EMAIL}" --role="roles/container.admin"
```
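If you want to confirm the bindings were applied, the following optional check (not part of the original steps) lists the roles granted to the new service account. It assumes `PROJECT_NAME` and `GKE_SA_EMAIL` are still set and is guarded so it does nothing otherwise:

```shell
# Optional sanity check: list the roles bound to the new service account.
# Guarded: skips when gcloud is unavailable or the variables are unset.
if command -v gcloud >/dev/null 2>&1 && [ -n "${PROJECT_NAME:-}" ]; then
  gcloud projects get-iam-policy "${PROJECT_NAME}" \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:${GKE_SA_EMAIL}" \
    --format="table(bindings.role)"
fi
```

You should see the four roles from the table above in the output.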
After that, we create a Kubernetes cluster with the following settings:

| Setting | Value |
| --- | --- |
| Image Type | UBUNTU_CONTAINERD |
| Machine Type | e2-standard-2 |
| Cluster Version | Choose the latest from the regular channel |
| Enable | Autoscaling, Autorepair, IP Alias, Network Policy |
| Disable | Autoupgrade |
| Metadata | disable-legacy-endpoints=true |
| Create Subnetwork | gitpod-${CLUSTER_NAME} |
| Max Pods per Node | 110 |
| Default Max Pods per Node | 110 |
| Min Nodes | 0 |
| Max Nodes | 1 |
| Addons | HorizontalPodAutoscaling, NodeLocalDNS, NetworkPolicy |
| Region | Choose your region and zones |
```shell
CLUSTER_NAME=gitpod
REGION=us-central1
GKE_VERSION=1.22.12-gke.1200

gcloud container clusters create "${CLUSTER_NAME}" \
  --disk-type="pd-ssd" --disk-size="50GB" \
  --image-type="UBUNTU_CONTAINERD" \
  --machine-type="e2-standard-2" \
  --cluster-version="${GKE_VERSION}" \
  --region="${REGION}" \
  --service-account="${GKE_SA_EMAIL}" \
  --num-nodes=1 \
  --no-enable-basic-auth \
  --enable-autoscaling \
  --enable-autorepair \
  --no-enable-autoupgrade \
  --enable-ip-alias \
  --enable-network-policy \
  --create-subnetwork name="gitpod-${CLUSTER_NAME}" \
  --metadata=disable-legacy-endpoints=true \
  --max-pods-per-node=110 \
  --default-max-pods-per-node=110 \
  --min-nodes=0 \
  --max-nodes=1 \
  --addons=HorizontalPodAutoscaling,NodeLocalDNS,NetworkPolicy
```
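Cluster creation takes several minutes. As an optional extra (not part of the original steps), you can confirm the cluster reached the `RUNNING` state before continuing; the snippet is guarded so it is a no-op where `gcloud` or the variables are missing:

```shell
# Optional: print the cluster status; expect "RUNNING" once provisioning finishes.
if command -v gcloud >/dev/null 2>&1 && [ -n "${CLUSTER_NAME:-}" ]; then
  gcloud container clusters describe "${CLUSTER_NAME}" \
    --region="${REGION}" \
    --format="value(status)"
fi
```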
Unfortunately, you cannot create a cluster without the default node pool. Since we need custom node pools, remove the default one:
```shell
gcloud --quiet container node-pools delete default-pool \
  --cluster="${CLUSTER_NAME}" --region="${REGION}"
```
Now, we create a node pool for the Gitpod services.

| Setting | Value |
| --- | --- |
| Image Type | UBUNTU_CONTAINERD |
| Machine Type | n2d-standard-4 |
| Enable | Autoscaling, Autorepair, IP Alias, Network Policy |
| Disable | Autoupgrade |
| Metadata | disable-legacy-endpoints=true |
| Number of nodes | 1 |
| Min Nodes | 1 |
| Max Nodes | 4 |
| Max Pods per Node | 110 |
| Scopes | gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite |
| Region | Choose your region and zones |
| Node Labels | gitpod.io/workload_meta=true, gitpod.io/workload_ide=true, gitpod.io/workload_workspace_services=true |
```shell
gcloud container node-pools create "workload-services" \
  --cluster="${CLUSTER_NAME}" \
  --disk-type="pd-ssd" \
  --disk-size="100GB" \
  --image-type="UBUNTU_CONTAINERD" \
  --machine-type="n2d-standard-4" \
  --num-nodes=1 \
  --no-enable-autoupgrade \
  --enable-autorepair \
  --enable-autoscaling \
  --metadata disable-legacy-endpoints=true \
  --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
  --node-labels="gitpod.io/workload_meta=true,gitpod.io/workload_ide=true,gitpod.io/workload_workspace_services=true" \
  --max-pods-per-node=110 \
  --min-nodes=1 \
  --max-nodes=4 \
  --region="${REGION}"
```
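Optionally (this check is an addition, not part of the original steps), you can inspect the node labels configured on the new pool, since Gitpod schedules its components by these labels. The snippet is guarded so it only runs where `gcloud` and the variables are available:

```shell
# Optional: show the node labels on the services pool.
if command -v gcloud >/dev/null 2>&1 && [ -n "${CLUSTER_NAME:-}" ]; then
  gcloud container node-pools describe workload-services \
    --cluster="${CLUSTER_NAME}" --region="${REGION}" \
    --format="value(config.labels)"
fi
```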
We are also creating a node pool for the Gitpod regular workspaces.
| Setting | Value |
| --- | --- |
| Image Type | UBUNTU_CONTAINERD |
| Machine Type | n2d-standard-16 |
| Enable | Autoscaling, Autorepair, IP Alias, Network Policy |
| Disable | Autoupgrade |
| Metadata | disable-legacy-endpoints=true |
| Number of nodes | 1 |
| Min Nodes | 1 |
| Max Nodes | 50 |
| Max Pods per Node | 110 |
| Scopes | gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite |
| Region | Choose your region and zones |
| Node Labels | gitpod.io/workload_workspace_regular=true |
```shell
gcloud container node-pools create "workload-regular-workspaces" \
  --cluster="${CLUSTER_NAME}" \
  --disk-type="pd-ssd" \
  --disk-size="512GB" \
  --image-type="UBUNTU_CONTAINERD" \
  --machine-type="n2d-standard-16" \
  --num-nodes=1 \
  --no-enable-autoupgrade \
  --enable-autorepair \
  --enable-autoscaling \
  --metadata disable-legacy-endpoints=true \
  --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
  --node-labels="gitpod.io/workload_workspace_regular=true" \
  --max-pods-per-node=110 \
  --min-nodes=1 \
  --max-nodes=50 \
  --region="${REGION}"
```
We are also creating a node pool for the Gitpod headless workspaces.
| Setting | Value |
| --- | --- |
| Image Type | UBUNTU_CONTAINERD |
| Machine Type | n2d-standard-16 |
| Enable | Autoscaling, Autorepair, IP Alias, Network Policy |
| Disable | Autoupgrade |
| Metadata | disable-legacy-endpoints=true |
| Number of nodes | 1 |
| Min Nodes | 1 |
| Max Nodes | 50 |
| Max Pods per Node | 110 |
| Scopes | gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite |
| Region | Choose your region and zones |
| Node Labels | gitpod.io/workload_workspace_headless=true |
```shell
gcloud container node-pools create "workload-headless-workspaces" \
  --cluster="${CLUSTER_NAME}" \
  --disk-type="pd-ssd" \
  --disk-size="512GB" \
  --image-type="UBUNTU_CONTAINERD" \
  --machine-type="n2d-standard-16" \
  --num-nodes=1 \
  --no-enable-autoupgrade \
  --enable-autorepair \
  --enable-autoscaling \
  --metadata disable-legacy-endpoints=true \
  --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
  --node-labels="gitpod.io/workload_workspace_headless=true" \
  --max-pods-per-node=110 \
  --min-nodes=1 \
  --max-nodes=50 \
  --region="${REGION}"
```
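With all three pools created, an optional overview (an addition, not part of the original steps) shows each pool's machine type and autoscaling bounds at a glance; the snippet is guarded so it only runs where `gcloud` and the variables are available:

```shell
# Optional: summarize all node pools with machine type and autoscaling bounds.
if command -v gcloud >/dev/null 2>&1 && [ -n "${CLUSTER_NAME:-}" ]; then
  gcloud container node-pools list \
    --cluster="${CLUSTER_NAME}" --region="${REGION}" \
    --format="table(name,config.machineType,autoscaling.minNodeCount,autoscaling.maxNodeCount)"
fi
```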
Now, you can connect `kubectl` to your newly created cluster.

```shell
gcloud container clusters get-credentials --region="${REGION}" "${CLUSTER_NAME}"
```
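To confirm the connection works and the Gitpod node labels are in place, an optional check (an addition, not part of the original steps) lists the nodes with two of the label columns; it is guarded so it only runs where `kubectl` can actually reach a cluster:

```shell
# Optional: list nodes with the Gitpod workload labels as extra columns.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl get nodes \
    -L gitpod.io/workload_meta,gitpod.io/workload_workspace_regular
fi
```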
After that, you need to create a cluster role binding to allow the current user to create new RBAC rules.

```shell
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value core/account)"
```
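As a final optional check (an addition, not part of the original steps), you can ask the API server whether your account now has cluster-admin rights; the snippet is guarded so it only runs where `kubectl` can reach a cluster:

```shell
# Optional: "yes" means the current user can perform any action cluster-wide.
# `kubectl auth can-i` exits non-zero when the answer is "no".
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl auth can-i '*' '*' --all-namespaces \
    || echo "current user lacks cluster-admin rights"
fi
```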