Manage Cluster Nodes

⚠️ Self-hosted as a product is no longer supported

The last official update of this product is the November 2022 self-hosted release. We no longer sell commercial self-hosted licenses. If you want to self-host Gitpod, you can still request our free community license. However, we no longer offer support or updates for it. If you are interested in an isolated, private installation of Gitpod, take a look at Gitpod Dedicated. Read our blog on Gitpod Dedicated to learn why we made the decision to discontinue self-hosted.

Sometimes nodes become unhealthy, or you need to prevent the autoscaler from removing the node from your cluster.

Avoiding Node Scale-down

If you wish to cordon a node that has terminating workspaces, or keep a node around so you have time to manually back up user data, annotate it so the cluster autoscaler will not remove it:

```bash
# reference: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-prevent-cluster-autoscaler-from-scaling-down-a-particular-node
$ kubectl annotate node <nodename> cluster-autoscaler.kubernetes.io/scale-down-disabled=true
```
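Once the node no longer needs protecting (for example, after the backup is done), the annotation can be reverted. A minimal sketch, using the same `<nodename>` placeholder as above:

```bash
# Set the annotation to false; --overwrite is required to change an existing value
$ kubectl annotate node <nodename> cluster-autoscaler.kubernetes.io/scale-down-disabled=false --overwrite

# Alternatively, remove the annotation entirely (a trailing "-" deletes it)
$ kubectl annotate node <nodename> cluster-autoscaler.kubernetes.io/scale-down-disabled-
```

Either form allows the cluster autoscaler to consider the node for scale-down again.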

Handling Unhealthy Nodes

Prevent new workspaces from being scheduled onto a node that has become unhealthy:

```bash
$ kubectl cordon <nodename>
```
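Cordoning only marks the node unschedulable; pods already running on it keep running. A sketch of the typical follow-up steps, assuming you have already backed up any workspace data you need from the node:

```bash
# Evict the remaining pods from the node.
# --ignore-daemonsets is usually required because DaemonSet pods cannot be evicted;
# --delete-emptydir-data acknowledges that emptyDir volumes on the node will be lost.
$ kubectl drain <nodename> --ignore-daemonsets --delete-emptydir-data

# If the node recovers and should accept workloads again, make it schedulable
$ kubectl uncordon <nodename>
```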
