
Reserving AWS Instances to Save Cost

To reduce infrastructure cost, it is best practice to reserve AWS capacity for known, long-running, and predictable workloads. This also applies to Gitpod. This guide gives recommendations on which reservations to make initially.

The machine-type requirements for Gitpod Enterprise in different scenarios are as follows:

Instance requirements without load (0 workspaces):

Enterprise instances can be configured to scale workspace nodes up and down on an hourly schedule. This speeds up workspace startup times because workspaces do not need to wait for a node to start. However, it may also mean workspace nodes (EC2 instances) are running even when no workspaces are present. Please contact your Gitpod Account Manager for more information.

  • For supporting services (dashboard, etc.), Gitpod requires:
    • 4 m6i.xlarge
    • 1 m6i.2xlarge
  • When no workspaces are running, Gitpod requires:
    • 0 c6id.8xlarge / c5d.9xlarge (the node group is scaled to 0; type depends on region)

Instance requirements with load:

  • For supporting services (dashboard, etc.), Gitpod requires:

    • 4 m6i.xlarge
    • 1 m6i.2xlarge
  • For workspaces:

    The main driver of Gitpod infrastructure cost is the machines used to run workspaces. However, these machines are scaled to 0 when no workspaces are running. Before making reservations here, it is best to observe the real-world usage of this machine type over the first few weeks and only then reserve capacity if it is cost-effective. Reserved Instances are billed for every hour of the reservation period (e.g. 1 year), whether or not the instances are running, so the savings during work hours need to outweigh the cost of paying for reserved capacity while instances are idle (i.e. likely outside of work hours). See the AWS docs on Reserved Instances.

    • x c6id.8xlarge / c5d.9xlarge (depends on region), where x is estimated as follows:

      • Example calculation for x: 20 developers, each running on average one Large workspace during work hours: (20 devs × 1 workspace) / 4 workspaces per instance → 5 instances during working hours (see the sketch after this list).
      • Maximum number of workspaces per node (subject to change):
        • 1 workspace using class XXLarge (30 cores / 54 GiB RAM)
        • 2 workspaces using class XLarge (14 cores / 30 GiB RAM)
        • 4 workspaces using class Large (7 cores / 16 GiB RAM)
        • 7 workspaces using class Standard (4 cores / 8 GiB RAM)
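
The sizing arithmetic above can be sketched in a few lines of code. The following Python snippet is illustrative only (the function names and example prices are assumptions, not part of Gitpod or AWS): it estimates how many workspace nodes are needed during work hours and whether reserving them would break even at a given weekly utilization.

```python
import math

# Maximum workspaces per node by workspace class (from the table above; subject to change).
WORKSPACES_PER_NODE = {"XXLarge": 1, "XLarge": 2, "Large": 4, "Standard": 7}

def nodes_needed(developers: int, workspaces_per_dev: float, workspace_class: str) -> int:
    """Estimate the number of workspace nodes needed concurrently during work hours."""
    concurrent_workspaces = developers * workspaces_per_dev
    return math.ceil(concurrent_workspaces / WORKSPACES_PER_NODE[workspace_class])

def reservation_breaks_even(on_demand_hourly: float, reserved_hourly: float,
                            utilized_hours_per_week: float) -> bool:
    """Reserved Instances are billed for all 168 hours per week; on-demand only for utilized hours."""
    return reserved_hourly * 168 < on_demand_hourly * utilized_hours_per_week

# Example from this guide: 20 developers, one Large workspace each during work hours.
print(nodes_needed(20, 1, "Large"))  # -> 5

# Hypothetical rates -- look up the real c6id.8xlarge / c5d.9xlarge pricing for your region.
print(reservation_breaks_even(on_demand_hourly=1.60, reserved_hourly=1.00,
                              utilized_hours_per_week=45))  # -> False
```

With these made-up rates, a workspace node used only 45 hours per week would not justify a reservation, which is why the guidance is to observe real usage first.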

Given the above, the initial recommended reservations are as follows:

  • 4 m6i.xlarge
  • 1 m6i.2xlarge
  • 0 c6id.8xlarge / c5d.9xlarge until usage data is available to make an informed reservation (see above).
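
If you prefer to script the purchase, a hedged boto3 sketch is shown below. It assumes standard, no-upfront, one-year Reserved Instances for Linux and uses DryRun so nothing is actually bought; the region, counts, and offering filters are assumptions to verify in the AWS console before purchasing.

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="eu-central-1")  # replace with your Gitpod region

# Initial recommended reservations from this guide.
RECOMMENDED = {"m6i.xlarge": 4, "m6i.2xlarge": 1}
ONE_YEAR_SECONDS = 31_536_000

for instance_type, count in RECOMMENDED.items():
    # Look up a 1-year, standard, no-upfront offering for Linux in this region.
    offerings = ec2.describe_reserved_instances_offerings(
        InstanceType=instance_type,
        ProductDescription="Linux/UNIX (Amazon VPC)",
        OfferingClass="standard",
        OfferingType="No Upfront",
        MinDuration=ONE_YEAR_SECONDS,
        MaxDuration=ONE_YEAR_SECONDS,
    )["ReservedInstancesOfferings"]
    if not offerings:
        print(f"No matching offering found for {instance_type}")
        continue

    offering_id = offerings[0]["ReservedInstancesOfferingId"]
    try:
        # DryRun validates permissions and parameters without purchasing anything.
        ec2.purchase_reserved_instances_offering(
            ReservedInstancesOfferingId=offering_id,
            InstanceCount=count,
            DryRun=True,
        )
    except ClientError as err:
        # A successful dry run is reported as a DryRunOperation error.
        if err.response["Error"]["Code"] != "DryRunOperation":
            raise
    print(f"Validated purchase of {count}x {instance_type} (offering {offering_id})")
```

Remove DryRun=True only once you are certain of the term, offering type, and instance counts.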
