Self Hosted Kubernetes Agent

Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.

  • Execution is contained on your own servers/infrastructure.
  • The agent requires an internet connection but no inbound network access.
  • Secrets can be stored on your own infrastructure.

Self-hosted agents are a business tier feature.


Requirements

  • A Kubernetes cluster running version 1.16 or later.
  • The agent is installed using a Helm chart.
  • Per-cloud-provider examples of these requirements are available on demand, as is Terraform code for a full agent installation, including a Kubernetes cluster running on AWS EKS with all requirements in place.

Node Requirements

  • Node autoscaler - the env0 agent scales pods up and down according to usage, so the customer must provide the ability to scale cluster nodes up and down accordingly.
  • A pod running a single deployment requests cpu: 460m and memory: 1500Mi, and the cluster nodes must be able to satisfy this resource request. Limits can be adjusted by providing custom configuration during chart installation.
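For illustration, a custom resource override in values.customer.yml might look like the sketch below. The key names are assumptions, not the chart's documented schema; confirm them against the values file supplied by env0.

```yaml
# Hypothetical values.customer.yml fragment for a custom deployment pod size.
# Key names are placeholders - verify against the env0-provided values file.
deploymentPodSettings:
  resources:
    requests:
      cpu: 460m        # default single-deployment request, per the docs above
      memory: 1500Mi
    limits:
      cpu: 1000m       # example custom limit
      memory: 3000Mi
```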

Persistent Volume / Storage Class

  • env0 stores local Terraform state and deployment artifacts on a persistent volume in the cluster.
  • The cluster must include a StorageClass named env0-state-sc.
  • The StorageClass must support Dynamic Provisioning and the ReadWriteMany access mode.
  • The requested storage space is 300Gi.
  • The StorageClass should be set up with reclaimPolicy: Retain, so that data won't be lost if the agent needs to be replaced or uninstalled.
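As one concrete example, a StorageClass meeting these requirements on AWS EKS could use the EFS CSI driver, which supports dynamic provisioning and ReadWriteMany. The file system ID below is a placeholder; substitute your own.

```yaml
# Example StorageClass for an AWS EKS cluster using the EFS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: env0-state-sc        # name required by the env0 agent
provisioner: efs.csi.aws.com # supports Dynamic Provisioning and ReadWriteMany
reclaimPolicy: Retain        # keep state if the agent is replaced or uninstalled
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678  # placeholder - use your EFS file system ID
  directoryPerms: "700"
```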

We recommend the current implementations for the major cloud providers, for example:

  • Azure: Azure Files

Sensitive Secrets

  • Because self-hosted agents allow you to store secrets on your own infrastructure, secrets stored in the env0 platform cannot be used with a self-hosted agent.
  • If you are migrating from the SaaS to a self-hosted agent, deployments that attempt to use such secrets will fail.
  • This includes sensitive configuration variables, SSH keys, and Cloud Deployment credentials. The values of these secrets should be replaced with references to your secret store, as detailed in the table below.
  • To use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed below.
  • Storing secrets is supported using these secret stores:

Secret store          | Secret reference format
AWS Secrets Manager   |
GCP Secrets Manager   |
Azure Key Vault       |

Custom/Optional Configuration

env0 will provide a Helm values file, values.env0.yml, containing the configuration env0 supplies.
To enable specific features, the customer can provide a values.customer.yml with the following (optional) values:
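As an illustration, a values.customer.yml enabling custom tolerations might look like the sketch below. The toleration objects follow the standard Kubernetes schema, but the top-level key name is an assumption; confirm it against the chart documentation from env0.

```yaml
# Hypothetical values.customer.yml sketch for custom tolerations.
# The top-level key is a placeholder - verify against the env0 chart.
tolerations:
  - key: "dedicated"       # example taint key on your dedicated agent nodes
    operator: "Equal"
    value: "env0-agent"
    effect: "NoSchedule"
```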

Value to provide, by feature:

  • Customer Docker image: Custom Docker image URI and .dockerconfigjson contents.
  • Cost Estimation: Base64 encoded Infracost API key.
  • AWS Assume role for deploy credentials: Base64 encoded AWS Access Key ID & Secret.
  • Custom deployment pod size: Container resource limits.
  • Custom tolerations: An array of toleration objects to apply to the deployment containers.
  • Using AWS Secrets Manager to store secrets for the agent: Base64 encoded AWS Access Key ID & Secret. Requires the secretsmanager:GetSecretValue permission.
  • Using GCP Secret Manager to store secrets for the agent: Base64 encoded GCP project name and JSON service-key contents. Requires the Secret Manager Secret Access role.
  • Using Azure Key Vault Secrets to store secrets for the agent: Base64 encoded Azure credentials.
  • On-premise Bitbucket Server installation: Base64 encoded Bitbucket Server credentials in the format username:token (using a Personal Access Token).
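Several of the values above must be supplied Base64 encoded. Assuming a standard base64 utility is available, they can be produced as follows; the credentials shown are placeholders, not real values.

```shell
# Base64 encode placeholder secrets for values.customer.yml.
# echo -n avoids encoding a trailing newline into the value.
INFRACOST_KEY_B64=$(echo -n "ico-placeholder-key" | base64)
BITBUCKET_CREDS_B64=$(echo -n "myuser:my-token" | base64)  # username:token
echo "$INFRACOST_KEY_B64"
echo "$BITBUCKET_CREDS_B64"
```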

  1. Add our Helm Repo

    helm repo add env0 <env0 Helm repository URL>
  2. Update Helm Repo

    helm repo update
  3. Install the Helm Charts

    helm install env0-keda  env0/keda       --namespace env0-keda --create-namespace
    helm install env0-agent env0/env0-agent --namespace env0-keda -f values.yml -f values.customer.yml
    # values.yml will be provided by env0, per installation
    # values.customer.yml should contain any optional configuration options as detailed above
