The heart of this reference architecture is a Kubernetes cluster to which all Gitpod components are deployed. This cluster consists of three node pools:

  1. Services Node Pool: The Gitpod “app” with all its services is deployed to these nodes. These services provide the users with the dashboard and manage the provisioning of workspaces.
  2. Regular Workspaces Node Pool: Gitpod deploys the actual workspaces (where the development work happens) to these nodes.
  3. Headless Workspace Node Pool: Gitpod deploys the image-build and prebuild workspaces (whose build work generally demands more CPU and disk) to these nodes.

Gitpod services, headless, and regular workspaces have vastly differing resource and isolation requirements. These workloads are separated onto different node pools to provide a better quality of service and security guarantees.

You need to assign the following labels to the node pools to enforce that the Gitpod components are scheduled to the proper node pools:

Node Pool                       Labels
Services Node Pool              gitpod.io/workload_meta=true, gitpod.io/workload_ide=true, gitpod.io/workload_workspace_services=true
Regular Workspace Node Pool     gitpod.io/workload_workspace_regular=true
Headless Workspace Node Pool    gitpod.io/workload_workspace_headless=true
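For illustration, here is how a workload selects one of these pools via its label. This is a minimal sketch with a hypothetical pod name; Gitpod's installer configures the scheduling of its own components, so you would only write something like this for auxiliary workloads you want pinned to a particular pool:

```yaml
# Hypothetical debug pod pinned to the regular workspace node pool
# via the gitpod.io/workload_workspace_regular label.
apiVersion: v1
kind: Pod
metadata:
  name: debug-regular-workspace-node   # hypothetical name
spec:
  nodeSelector:
    gitpod.io/workload_workspace_regular: "true"
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sleep", "3600"]
```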

The following table gives an overview of the node types this reference architecture uses for each cloud provider.

Node Pool                       GCP               AWS           Azure
Services Node Pool              n2d-standard-4    m6i.xlarge    Standard_D4_v4
Regular Workspace Node Pool     n2d-standard-16   m6i.4xlarge   Standard_D16_v4
Headless Workspace Node Pool    n2d-standard-16   m6i.4xlarge   Standard_D16_v4
Cloud provider specific instructions (GCP)

First, we create a service account for the cluster. The service account needs to have the following roles:

Roles
roles/storage.admin
roles/logging.logWriter
roles/monitoring.metricWriter
roles/container.admin

Run the following commands to create the service account:

GKE_SA=gitpod-gke
GKE_SA_EMAIL="${GKE_SA}"@"${PROJECT_NAME}".iam.gserviceaccount.com
gcloud iam service-accounts create "${GKE_SA}" --display-name "${GKE_SA}"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/storage.admin"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/logging.logWriter"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/monitoring.metricWriter"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/container.admin"
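Since the four policy bindings above differ only in the role, they can also be applied in a loop. A sketch (as a dry run: the `echo` prints each command instead of executing it, so drop the `echo` and set `PROJECT_NAME` to your real GCP project ID to actually apply the bindings; `my-gcp-project` below is a placeholder):

```shell
# Grant the four required roles to the GKE service account in a loop.
GKE_SA=gitpod-gke
PROJECT_NAME="${PROJECT_NAME:-my-gcp-project}"   # assumption: placeholder project ID
GKE_SA_EMAIL="${GKE_SA}@${PROJECT_NAME}.iam.gserviceaccount.com"

ROLES=(roles/storage.admin roles/logging.logWriter
       roles/monitoring.metricWriter roles/container.admin)

for role in "${ROLES[@]}"; do
  # Dry run: remove "echo" to execute the gcloud command.
  echo gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
    --member "serviceAccount:${GKE_SA_EMAIL}" --role="${role}"
done
```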

After that, we create a Kubernetes cluster.

Image Type                   UBUNTU_CONTAINERD
Machine Type                 e2-standard-2
Cluster Version              Choose latest from regular channel
Enable                       Autoscaling, Autorepair, IP Alias, Network Policy
Disable                      Autoupgrade
Metadata                     disable-legacy-endpoints=true
Create Subnetwork            gitpod-${CLUSTER_NAME}
Max Pods per Node            110
Default Max Pods per Node    110
Min Nodes                    0
Max Nodes                    1
Addons                       HorizontalPodAutoscaling, NodeLocalDNS, NetworkPolicy
Region                       Choose your region and zones
CLUSTER_NAME=gitpod
REGION=us-central1
GKE_VERSION=1.22.12-gke.1200

gcloud container clusters \
    create "${CLUSTER_NAME}" \
    --disk-type="pd-ssd" --disk-size="50GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="e2-standard-2" \
    --cluster-version="${GKE_VERSION}" \
    --region="${REGION}" \
    --service-account "${GKE_SA_EMAIL}" \
    --num-nodes=1 \
    --no-enable-basic-auth \
    --enable-autoscaling \
    --enable-autorepair \
    --no-enable-autoupgrade \
    --enable-ip-alias \
    --enable-network-policy \
    --create-subnetwork name="gitpod-${CLUSTER_NAME}" \
    --metadata=disable-legacy-endpoints=true \
    --max-pods-per-node=110 \
    --default-max-pods-per-node=110 \
    --min-nodes=0 \
    --max-nodes=1 \
    --addons=HorizontalPodAutoscaling,NodeLocalDNS,NetworkPolicy
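If you want to confirm the cluster came up before continuing, you can describe it; `--format="value(status)"` prints just the status field (it should read `RUNNING` once provisioning finishes). This requires the cluster created above, so it cannot run standalone:

```shell
# Print only the cluster status (e.g. RUNNING) for a quick health check.
gcloud container clusters describe "${CLUSTER_NAME}" \
    --region="${REGION}" --format="value(status)"
```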

Unfortunately, you cannot create a cluster without the default node pool. Since we need a custom node pool, you need to remove the default one.

gcloud --quiet container node-pools delete default-pool \
    --cluster="${CLUSTER_NAME}" --region="${REGION}"
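To confirm the default pool is gone (and, after the following steps, that the three custom pools exist), you can list the cluster's node pools. This check requires the live cluster:

```shell
# List the cluster's node pools; right after the deletion above,
# this should show no pools yet.
gcloud container node-pools list \
    --cluster="${CLUSTER_NAME}" --region="${REGION}"
```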

Now, we are creating a node pool for the Gitpod services.

Image Type           UBUNTU_CONTAINERD
Machine Type         n2d-standard-4
Enable               Autoscaling, Autorepair, IP Alias, Network Policy
Disable              Autoupgrade
Metadata             disable-legacy-endpoints=true
Create Subnetwork    gitpod-${CLUSTER_NAME}
Number of Nodes      1
Min Nodes            1
Max Nodes            4
Max Pods per Node    110
Scopes               gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite
Region               Choose your region and zones
Node Labels          gitpod.io/workload_meta=true, gitpod.io/workload_ide=true, gitpod.io/workload_workspace_services=true
gcloud container node-pools \
    create "workload-services" \
    --cluster="${CLUSTER_NAME}" \
    --disk-type="pd-ssd" \
    --disk-size="100GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="n2d-standard-4" \
    --num-nodes=1 \
    --no-enable-autoupgrade \
    --enable-autorepair \
    --enable-autoscaling \
    --metadata disable-legacy-endpoints=true \
    --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
    --node-labels="gitpod.io/workload_meta=true,gitpod.io/workload_ide=true,gitpod.io/workload_workspace_services=true" \
    --max-pods-per-node=110 \
    --min-nodes=1 \
    --max-nodes=4 \
    --region="${REGION}"

We are also creating a node pool for the Gitpod regular workspaces.

Image Type           UBUNTU_CONTAINERD
Machine Type         n2d-standard-16
Enable               Autoscaling, Autorepair, IP Alias, Network Policy
Disable              Autoupgrade
Metadata             disable-legacy-endpoints=true
Create Subnetwork    gitpod-${CLUSTER_NAME}
Number of Nodes      1
Min Nodes            1
Max Nodes            50
Max Pods per Node    110
Scopes               gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite
Region               Choose your region and zones
Node Labels          gitpod.io/workload_workspace_regular=true
gcloud container node-pools \
    create "workload-regular-workspaces" \
    --cluster="${CLUSTER_NAME}" \
    --disk-type="pd-ssd" \
    --disk-size="512GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="n2d-standard-16" \
    --num-nodes=1 \
    --no-enable-autoupgrade \
    --enable-autorepair \
    --enable-autoscaling \
    --metadata disable-legacy-endpoints=true \
    --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
    --node-labels="gitpod.io/workload_workspace_regular=true" \
    --max-pods-per-node=110 \
    --min-nodes=1 \
    --max-nodes=50 \
    --region="${REGION}"

We are also creating a node pool for the Gitpod headless workspaces.

Image Type           UBUNTU_CONTAINERD
Machine Type         n2d-standard-16
Enable               Autoscaling, Autorepair, IP Alias, Network Policy
Disable              Autoupgrade
Metadata             disable-legacy-endpoints=true
Create Subnetwork    gitpod-${CLUSTER_NAME}
Number of Nodes      1
Min Nodes            1
Max Nodes            50
Max Pods per Node    110
Scopes               gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite
Region               Choose your region and zones
Node Labels          gitpod.io/workload_workspace_headless=true
gcloud container node-pools \
    create "workload-headless-workspaces" \
    --cluster="${CLUSTER_NAME}" \
    --disk-type="pd-ssd" \
    --disk-size="512GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="n2d-standard-16" \
    --num-nodes=1 \
    --no-enable-autoupgrade \
    --enable-autorepair \
    --enable-autoscaling \
    --metadata disable-legacy-endpoints=true \
    --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
    --node-labels="gitpod.io/workload_workspace_headless=true" \
    --max-pods-per-node=110 \
    --min-nodes=1 \
    --max-nodes=50 \
    --region="${REGION}"

Now, you can connect kubectl to your newly created cluster.

gcloud container clusters get-credentials --region="${REGION}" "${CLUSTER_NAME}"
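With kubectl connected, you can verify that the workload labels landed on the right nodes. The `-L` flag adds a column per label, so each node's pool membership is visible at a glance (requires the live cluster):

```shell
# Show which nodes carry the Gitpod workload labels, one column per label.
kubectl get nodes \
    -L gitpod.io/workload_meta \
    -L gitpod.io/workload_workspace_regular \
    -L gitpod.io/workload_workspace_headless
```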

After that, you need to create cluster role bindings to allow the current user to create new RBAC rules.

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user="$(gcloud config get-value core/account)"
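You can check that the binding took effect with `kubectl auth can-i`; once the current user is bound to cluster-admin, this should print `yes` (requires the live cluster):

```shell
# Verify the current user may now create cluster-wide RBAC objects.
kubectl auth can-i create clusterroles
```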
