Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.
- Execution is contained on your own servers/infrastructure.
- The agent requires an internet connection but no inbound network access.
- Secrets can be stored on your own infrastructure.
Self-hosted agents are a business tier feature.
- Kubernetes cluster at version >= 1.16
- The agent will be installed using a Helm chart.
Use our k8s-modules repo, which contains Terraform code for an easier cluster installation. You can use the main provider folder for a "full-blown" installation, or a specific module to satisfy an individual requirement.
- The env0 agent scales pods up and down according to deployment load. Note that the cluster itself must also be able to scale its nodes up and down.
- A pod running a single deployment requires memory: 1500Mi, so the cluster nodes must be able to satisfy this resource request. Limits can be adjusted by providing custom configuration during chart installation.
- Minimum node requirements: an instance with at least 2 CPU and 8GiB memory
For the EKS cluster, you can use this TF example
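As a sketch of the custom configuration mentioned above, the deployment pod's resources could be overridden in a custom values file passed at chart installation. The key names and the raised limit below are illustrative assumptions, not the chart's actual schema — consult the chart's documented parameters for the real names:

```yaml
# values.customer.yml — illustrative structure, not the chart's actual schema
deploymentPod:
  resources:
    requests:
      memory: 1500Mi   # the documented per-deployment memory request
    limits:
      memory: 3000Mi   # assumption: a raised limit for larger workspaces
      cpu: "1"
```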
- env0 will store local Terraform state and deployment artifacts on a persistent volume in the cluster.
- Must support Dynamic Provisioning and ReadWriteMany access mode.
- The requested storage space is
- The cluster must include a
- The StorageClass should be set up with reclaimPolicy: Retain, so that data won't be lost if the agent needs to be replaced or uninstalled.
We recommend the following implementations for the major cloud providers:
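For example, on AWS a StorageClass backed by EFS (via the AWS EFS CSI driver) satisfies Dynamic Provisioning, ReadWriteMany, and the Retain policy. This is a sketch, not env0's prescribed configuration; the `fileSystemId` is a placeholder you would replace with your own:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: env0-state
provisioner: efs.csi.aws.com   # AWS EFS CSI driver supports ReadWriteMany
reclaimPolicy: Retain          # keep data if the agent is replaced or uninstalled
parameters:
  provisioningMode: efs-ap     # dynamic provisioning via EFS access points
  fileSystemId: fs-xxxxxxxx    # placeholder: your EFS file system ID
  directoryPerms: "700"
```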
- Since self-hosted agents allow you to store secrets on your own infrastructure, using secrets stored in the env0 platform is not allowed for self-hosted agents.
- If you are migrating from the SaaS to a self-hosted agent, deployments attempting to use these secrets will fail.
- This includes sensitive configuration variables, SSH keys, and cloud deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed in the table below.
- In order to use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed below.
- Storing secrets is supported using these secret stores:
| Secret store | Region | Secret reference format |
| --- | --- | --- |
| AWS Secrets Manager (us-east-1) | Set by the … | … |
| GCP Secrets Manager | Your GCP project's default region | … |
| Azure Key Vault | Your Azure subscription's default region | … |

AWS regions in the U.S., Europe, Singapore, and Australia
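As an illustration, authentication for AWS Secrets Manager might be wired into the custom Helm values file roughly like this. The key names here are hypothetical placeholders, not the chart's actual parameters — use the exact names from the values table below:

```yaml
# values.customer.yml — hypothetical key names, for illustration only
awsSecretsManagerCredentials:
  accessKeyId: <base64-encoded AWS Access Key ID>
  secretAccessKey: <base64-encoded AWS Secret Access Key>
```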
env0 provides a values.yml file containing the required base configuration.
The customer can optionally provide a
values.customer.yml file with the following values, to enable specific features:
| Value | Required for feature |
| --- | --- |
| Custom Docker image URI and … | Custom Docker image. See Using a custom image in an agent. |
| Base64 encoded Infracost API key | … |
| Base64 encoded AWS Access Key ID & Secret | AWS assume role for deploy credentials. Also see Authenticating the agent on AWS EKS. |
| Container resource limits | Custom deployment pod size |
| Container resource requests | Custom deployment container resources |
| An array of … that allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node | Custom node affinity |
| Base64 encoded AWS Access Key ID & Secret. Requires the … | Using AWS Secrets Manager to store secrets for the agent |
| Base64 encoded GCP project name and JSON service-key contents. Requires the … | Using GCP Secret Manager to store secrets for the agent |
| Base64 encoded Azure credentials | Using Azure Key Vault secrets to store secrets for the agent |
| Base64 encoded HCP Vault token, and the cluster's URL (also Base64 encoded) | Using HCP Vault to store secrets for the agent |
| Base64 encoded Bitbucket Server credentials in the format … | On-premises Bitbucket Server installation |
| Base64 encoded GitLab Enterprise credentials in the form of a Personal Access Token | On-premises GitLab Enterprise installation |
| GitHub Enterprise integration (see step 3) | On-premises GitHub Enterprise installation |
| When set, cloning a git repository is only permitted if the git URL matches the configured regular expression | VCS URL Whitelisting |
| Changes the default PVC storage class name for the env0 self-hosted agent (the default is …). Note: if you change this, you must also change your storage class name to match | |
| Customizes the Kubernetes service account used by the deployment pod, primarily for pod-level IAM permissions (the default is …) | |
| How many successful and failed deployment jobs are kept in the Kubernetes cluster history (the default is 50 for each) | |
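Several of the values above must be Base64 encoded. On Linux or macOS this can be done with the standard base64 utility; using printf '%s' (rather than echo) avoids encoding a trailing newline into the secret:

```shell
# Encode a value without a trailing newline ('my-api-key' is a dummy example)
printf '%s' 'my-api-key' | base64
# → bXktYXBpLWtleQ==
```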
We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-release configuration is required.
Add our Helm Repo
helm repo add env0 https://env0.github.io/self-hosted
Update Helm Repo
helm repo update
Download the configuration file:
<your_agent_key>_values.yaml from Organization Settings -> Agents tab
- Install the Helm Charts
helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent -f <your_agent_key>_values.yaml -f values.customer.yaml # values.customer.yaml should contain any optional configuration options as detailed above
Example for helm upgrade
helm upgrade env0-agent env0/env0-agent --namespace env0-agent
The values.yaml file you were previously asked to download is no longer required for an upgrade.
Custom Agent Docker Image
If you extended the agent's Docker image, you should also update the agent version in your custom image.
After installing a new version of the env0 agent Helm chart, it is highly recommended to verify the installation by running:
helm test env0-agent --namespace env0-agent --logs --timeout 1m
The agent needs outbound access to the following domains:
- env0 SaaS platform — the agent must communicate with the SaaS platform.
- GitHub Docker registry, which holds the agent's Docker container.
- Terraform binary downloads.
- Public modules from the Terraform Registry.
- github.com, gitlab.com, bitbucket.org — Git VCS providers (ports 22, 9418, 80, 443).
- Cost estimation by Infracost.
- Make sure to allow access to your cloud providers and VCS domains, as well as to any other tool that makes outbound requests.
Note that if the cluster is behind a managed firewall, you might need to whitelist the cluster API server's FQDN and its corresponding public IP.
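If you enforce egress policies inside the cluster, the ports listed above could be opened with a Kubernetes NetworkPolicy along these lines. This is only a sketch — NetworkPolicies filter by IP/port, not domain, so domain-level allowlisting still has to happen at your firewall or egress proxy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: env0-agent-egress
  namespace: env0-agent
spec:
  podSelector: {}            # all pods in the agent namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0  # restrict further to your firewall's allowlist
      ports:
        - { protocol: TCP, port: 443 }   # HTTPS (SaaS, registries, VCS)
        - { protocol: TCP, port: 80 }    # HTTP
        - { protocol: TCP, port: 22 }    # Git over SSH
        - { protocol: TCP, port: 9418 }  # Git protocol
    - to:                    # allow in-cluster DNS resolution
        - namespaceSelector: {}
      ports:
        - { protocol: UDP, port: 53 }
```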
For more advanced use cases for the self-hosted agent, see: