Self Hosted Kubernetes Agent

Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.

  • Execution is contained within your own servers/infrastructure
  • The agent requires an internet connection but no inbound network access.
  • Secrets can be stored on your own infrastructure.
🚧

Feature Availability

Self-hosted agents are only available to Enterprise-level customers. Click here for more details.

Requirements

📘

Cluster Installation

The Agent can be run on an existing Kubernetes cluster in a dedicated namespace, or you can create a cluster just for the agent.

Use our k8s-modules repository, which contains Terraform code for easier cluster installation. You can use the main provider folder for a complete installation, or a specific module to fulfill only certain requirements.

Autoscaler (recommended, but optional)

  • While optional, configuring horizontal auto-scaling allows your cluster to adapt to the concurrency and deployment requirements of your env0 usage. Otherwise, your deployment concurrency will be limited by the cluster's capacity. See also Job Limits if you wish to control the maximum number of concurrent deployments.
  • The env0 agent will create a new pod for each deployment you run on env0. Pods are ephemeral and will be destroyed after a single deployment.
  • A pod running a single deployment requires at least cpu: 460m and memory: 1500Mi, so the cluster nodes must be able to provide this resource request. Limits can be adjusted by providing custom configuration during chart installation.
  • Minimum node requirements: an instance with at least 2 CPU and 8GiB memory.
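The resource settings above can be overridden during chart installation. A minimal sketch of a values.customer.yml override, using the limits/requests keys from the Custom/Optional Configuration table below (values shown are the recommendations from that table):

```yaml
# values.customer.yml - deployment pod sizing (recommended values)
limits:
  cpu: "1.5"
  memory: 3Gi
requests:
  cpu: "1.5"
  memory: 3Gi
```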

For the EKS cluster, you can use this TF example.

Persistent Volume/Storage Class (optional)

  • env0 will store the deployment state and working directory on a persistent volume in the cluster.
  • Must support Dynamic Provisioning and ReadWriteMany access mode.
  • The requested storage space is 300Gi.
  • The cluster must include a StorageClass named env0-state-sc.
  • The Storage Class should be set up with reclaimPolicy: Retain, to prevent data loss in case the agent needs to be replaced or uninstalled.
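The requirements above can be sketched as a StorageClass manifest. This example assumes the AWS EFS CSI driver (the recommended AWS solution below); the provisioner, parameters, and fileSystemId are placeholders to adapt to your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: env0-state-sc          # the name the agent expects by default
provisioner: efs.csi.aws.com   # assumption: AWS EFS CSI driver
reclaimPolicy: Retain          # prevents data loss if the agent is replaced
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # placeholder EFS filesystem ID
  directoryPerms: "700"
```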

We recommend the following implementations for the major cloud providers:

| Cloud | Solution |
| --- | --- |
| AWS | EFS CSI. For the EKS cluster, you can use this TF example - EFS CSI Driver/StorageClass |
| GCP | Filestore, open-source NFS |
| Azure | Azure Files |

📘

PVC Alternative

By default, the deployment state and working directory are stored in a PV (Persistent Volume) configured on your Kubernetes cluster. When PV creation or management is difficult, or not required, you can use env0-Hosted Encrypted State with env0StateEncryptionKey.
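A minimal sketch of opting into this alternative in your values.customer.yml (the env0StateEncryptionKey key is listed in the configuration table below; the value here is a placeholder):

```yaml
# Base64 encoded password, e.g. produced with: echo -n "$PASSWORD" | base64
env0StateEncryptionKey: "bXktc3Ryb25nLXBhc3N3b3Jk"   # placeholder value
```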

Sensitive Secrets

  • Using secrets stored on the env0 platform is not allowed for self-hosted agents, since self-hosted agents allow you to store secrets on your own infrastructure.
  • Customers using self-hosted agents may use their own Kubernetes Secret to store sensitive values - see env0ConfigSecretName below.
  • If you are migrating from SaaS to a self-hosted agent, deployments attempting to use these secrets will fail.
  • This includes sensitive configuration variables, SSH keys, and Cloud Deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed in the table below.
  • In order to use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed below.
  • Storing secrets is supported using these secret stores:

| Secret store | Secret reference format | Secret region & permissions |
| --- | --- | --- |
| AWS Secrets Manager | ${ssm:<secret-name>} | Set by the awsSecretsRegion Helm value; defaults to us-east-1. The role must have the secretsmanager:GetSecretValue permission. |
| GCP Secrets Manager | ${gcp:<secret-id>} | Your GCP project's default region. Access to the secret must be possible using the customerGoogleCredentials configuration or GKE workload identity. The customerGoogleProject configuration must be supplied and is used to access secrets in that project only. The secrets.versions.access permission is required. |
| Azure Key Vault | ${azure:<secret-name>@<vault-name>} | Your Azure subscription's default region. |
| HashiCorp Vault | ${vault:<path>.<key>@<namespace>}, where @<namespace> is optional | |
| OCI Vault Secrets | ${oci:<secret-id>} | The region defined in the credentials provided in the agent configuration. |

📘

Allow storing secrets in env0

Alternatively, you could explicitly allow env0 to store secrets on its platform, by opting-in in your organization's policy - For more info read here

Internal Values

The following secrets are required for the agent components to communicate with env0's backend; they are generated and supplied in your values file.

  • awsAccessKeyIdEncoded
  • awsSecretAccessKeyEncoded
  • env0ApiGwKeyEncoded

Custom/Optional Configuration

env0 provides a Helm values.yml with the required configuration. To enable specific features, you can provide a values.customer.yml with any of the following optional values:

Keys

Description

Required for feature

Can be provided via a Kubernetes Secret?

dockerImage agentImagePullSecret

Custom Docker image URI and Base64 encoded .dockerconfigjson contents

Custom Docker image. See Using a custom image in an agent

No

agentImagePullSecretRef

A reference to a k8s secret name that holds a Docker pull image token

Custom Docker image that is hosted on a private Docker registry

No

infracostApiKeyEncoded

Base64 encoded Infracost API key

Cost Estimation

Yes: INFRACOST_API_KEY

assumerKeyIdEncoded assumerSecretEncoded

Base64 encoded AWS Access Key ID & Secret

AWS Assume role for deploy credentials. Also, see Authenticating the agent on AWS EKS

Yes: ASSUMER_ACCESS_KEY_ID ASSUMER_SECRET_ACCESS_KEY

limits.cpu limits.memory

Container resource limits. Read more about resource allocation. Recommended: cpu: 1.5 and memory: 3Gi

Custom deployment pod size

No

requests.cpu requests.memory

Container resource requests. Recommended cpu: 1.5 and memory: 3Gi

Custom deployment container resources

No

tolerations

An array of toleration objects to apply to all pods. see docs

Custom tolerations

No

deploymentTolerations

An array of toleration objects targeting the deployment pods. These override the default tolerations for deployment pods.

Custom tolerations

No

affinity

Allows you to constrain which nodes env0 pods are eligible to be scheduled on. see docs

Custom node affinity

No

deploymentAffinity

Affinity for deployment pods. This will override the default affinity for deployment pods.

Custom node affinity

No

customerAwsAccessKeyIdEncoded customerAwsSecretAccessKeyEncoded awsSecretsRegion

Base64 encoded AWS Access Key ID & Secret. Requires the secretsmanager:GetSecretValue permission

Using AWS Secrets Manager to store secrets for the agent

Yes: CUSTOMER_AWS_ACCESS_KEY_ID CUSTOMER_AWS_SECRET_ACCESS_KEY

customerGoogleProject customerGoogleCredentials

Base64 encoded GCP project name and JSON service-key contents. Requires the Secret Manager Secret Access role.

Using GCP Secret Manager to store secrets for the agent. These credentials are not used for the deployment itself. If deploymentJobServiceAccountName is set, workload identity will override any supplied credentials.

Yes: CUSTOMER_GOOGLE_PROJECT CUSTOMER_GOOGLE_CREDENTIALS

customerAzureClientId customerAzureClientSecret customerAzureTenantId

Base64 encoded Azure Credentials.

Using Azure Key Vault Secrets to store secrets for the agent

Yes: CUSTOMER_AZURE_CLIENT_ID CUSTOMER_AZURE_CLIENT_SECRET CUSTOMER_AZURE_TENANT_ID

customerOracleCredentials.tenancyOCIDEncoded customerOracleCredentials.userOCIDEncoded customerOracleCredentials.apiKeyFingerprintEncoded customerOracleCredentials.apiKeyPrivateKeyEncoded customerOracleCredentials.secretsRegion

OCI credentials. All should be defined separately within the same object - customerOracleCredentials.

Any field that ends with Encoded should be Base64 encoded.

Using OCI Vault to store secrets for the agent. These credentials are not used for the deployment itself.

Yes: CUSTOMER_ORACLE_TENANCY_OCID CUSTOMER_ORACLE_USER_OCID CUSTOMER_ORACLE_API_KEY_FINGERPRINT CUSTOMER_ORACLE_API_KEY_PRIVATE_KEY ORACLE_SECRETS_REGION

customerVaultTokenEncoded customerVaultUrl

(Deprecated) Base64 encoded HCP Vault token, and the Vault's URL (also Base64 encoded)

Using HCP Vault to store secrets for the agent

Yes: CUSTOMER_VAULT_TOKEN CUSTOMER_VAULT_ADDRESS

vault

Set HCP Vault authentication. First, set the cluster's URL:

  • address - the Vault cluster URL (equivalent to VAULT_ADDR)
  • loginPath: "<LoginCustomPath>" (optional)

Then choose one of the following login methods:

  • Vault token: method: "token", encodedToken: "<vault-token>"

  • Username & password: method: "password", username: "<username>", encodedPassword: "<Base64-encoded password>"

  • Certificate: method: "certificate", role: "<certificate name>", clientCertificateSecretName: "<secret-name>"

      • The secret must be created in the same namespace with the keys client-cert and client-key, for example: kubectl create secret generic <secret-name> -n env0-agent --from-file=client-key=client.key --from-file=client-cert=client.crt
      • passphraseSecretName: "<secret-name>" (optional) - the secret must be created in the same namespace with the key passphrase, for example: kubectl create secret generic <secret-name> -n env0-agent --from-literal=passphrase=my-password
      • For the Certification Authority (CA), choose one of: caDisable: true, caCertificateSecretName: "<secret-name>" (with the key ca-cert), or using a CA certificate with customCertificates

  • Role & service account token (JWT): method: "service_account", role: "<vault role name>"

      • The JWT is supplied by Kubernetes at the path /var/run/secrets/kubernetes.io/serviceaccount/token. You can read more about service accounts here. Communication is based on the v1 HTTP API.

Using HCP Vault to store secrets for the agent

No

bitbucketServerCredentialsEncoded

Base64 encoded Bitbucket Server credentials in the format username:token (using a Personal Access Token)

On-premise Bitbucket Server installation

Yes: BITBUCKET_SERVER_CREDENTIALS

gitlabEnterpriseCredentialsEncoded

Base64 encoded GitLab Enterprise credentials in the form of a Personal Access Token

On-premise Gitlab Enterprise installation

Yes: GITLAB_ENTERPRISE_CREDENTIALS

gitlabEnterpriseBaseUrlSuffix

In cases where your GitLab instance base URL is not at the root of the domain but on a separate path, e.g., https://gitlab.acme.com/prod, define the added suffix in this value: gitlabEnterpriseBaseUrlSuffix=prod

On-premise Gitlab Enterprise installation

No

githubEnterpriseAppId githubEnterpriseAppInstallationId githubEnterpriseAppPrivateKeyEncoded

GitHub Enterprise Integration (see step 3)

On-premise GitHub Enterprise installation

Yes: GITHUB_ENTERPRISE_APP_PRIVATE_KEY

allowedVcsUrlRegex

When set, cloning a git repository will only be permitted if the git url matches the regular expression set.

VCS URL Whitelisting

No

customCertificates

An array of strings, each the name of a Kubernetes secret that contains custom CA certificates. These certificates will be available during deployments.

Custom CA Certificates. More details here

No

gitSslNoVerify

When set to true, cloning a git repo will not verify SSL/TLS certs

Ignoring SSL/TLS certs for on-premise git servers

No

storageClassName

Ability to change the default PVC storage class name for the env0 self-hosted agent

The default is env0-state-sc

Please note: when changing this, you should also change your StorageClass name to match this configuration

No

deploymentJobServiceAccountName

Customize the Kubernetes service account used by the deployment pod. Primarily for pod-level IAM permissions

the default is default

No

jobHistoryLimitFailure jobHistoryLimitSuccess

The number of successful and failed deployment jobs that should be kept in the Kubernetes cluster history

The default is 10 for each value

No

strictSecurityContext

When set to true, the pod runs as the node user instead of root

Increased agent pod security

No

env0StateEncryptionKey

A Base64 encoded string (password). When set, the deployment state and working directory will be encrypted and persisted on env0's end

Env0-Hosted Encrypted State

Yes: ENV0_STATE_ENCRYPTION_KEY

environmentOutputEncryptionKey

A base64 encoded string (password). Used to enable the "Environment Outputs" feature

See notes in Environment Outputs

No

logger

Logger config: level - debug/info/warn/error; format - json/cli

No

agentImagePullPolicy

Set imagePullPolicy attribute - Always/Never/IfNotPresent

No

agentProxy

Agent proxy pod config: install - true/false; replicas - how many replicas of the agent proxy to use (default 1); maxConcurrentRequests - how many concurrent requests each pod should handle (default 500); limits - k8s (cpu and memory) limits (defaults 250m and 500Mi)

No

deploymentPodWarmPoolSize

The number of deployment pods that should be kept "warm" (running and idle), ready for new deployments

No

podAdditionalEnvVars

Additional environment variables to be passed to the deployment pods, which will also be passed to the deployment process. These are set as a plain YAML object, e.g.:

podAdditionalEnvVars:
  MY_SECRET: "akeyless:/K8s/my_k8s_secret"

No

podAdditionalLabels

Additional labels to be set on deployment pods. These are set as a plain YAML object, e.g.:

podAdditionalLabels:
  mykey: "myvalue"

No

podAdditionalAnnotations

Additional annotations to be set on deployment pods. These are set as a plain YAML object, e.g.:

podAdditionalAnnotations:
  mykey: "myvalue"

No

agentAdditionalEnvVars

Additional environment variables to be passed to the agent pods, which will also be passed to the agent proxy/trigger. These are set as a plain YAML object, e.g.:

agentAdditionalEnvVars:
  MY_ENV: "MY_VALUE"

No

agentAdditionalLabels

Additional labels to be set on agent (trigger/proxy) pods. These are set as a plain YAML object, e.g.:

agentAdditionalLabels:
  mykey: "myvalue"

No

agentAdditionalAnnotations

Additional annotations to be set on agent (trigger/proxy) pods. These are set as a plain YAML object, e.g.:

agentAdditionalAnnotations:
  mykey: "myvalue"

No

customSecrets

customSecrets:
  - envVarName: MY_SECRET
    secretName: my-secrets-1
    key: db_password

Mount custom secrets

No

customSecretMounts

customSecretMounts:
  - volumeName: my-secrets-1
    secretName: my-secrets-1
    mountPath: /opt/secret1

Mount secret files to a given mountPath

No

env0ConfigSecretName

A Kubernetes Secret name. Can be used to provide sensitive values listed in this table, instead of providing them in the Helm values files.

No

customRoleForOidcAwsSsm.duration customRoleForOidcAwsSsm.arn

Custom role for AWS SSM secret fetching. Note: only used when useOidcForAwsSsm=true

No

useOidcForAwsSsm

When set to true, the agent will be instructed to authenticate to AWS SSM using env0 OIDC.

No

httpProxy httpsProxy noProxy

Use an internal proxy for communication with env0 or your self-hosted VCS. To enable a proxy for http/https requests, use:

httpProxy: "http://my-http-proxy-address"
httpsProxy: "https://my-https-proxy-address"

To exclude URLs from using the proxy, use noProxy: "https://env0.com,https://my-other-vcs"

No

env0BlockDestroyAndTaskCommands

If set to "true", this agent will throw an error when any user tries to destroy any environment or run a custom "task" in it

No

additionalPodConfig

Additional configurations to be merged with the spec of deployment pods. Can be used to add hostAliases, dnsConfig, etc.

Example:

additionalPodConfig:
  hostAliases:
    - ip: "192.168.1.100"
      hostnames:
        - "example.internal"
        - "example.test"
  dnsConfig:
    nameservers:
      - 1.1.1.1
      - 8.8.8.8

additionalAgentConfig

Additional configurations to be merged with the spec of agent (trigger/proxy) pods. Can be used to add hostAliases, dnsConfig, etc.

Example:

additionalAgentConfig:
  hostAliases:
    - ip: "192.168.1.100"
      hostnames:
        - "example.internal"
        - "example.test"
  dnsConfig:
    nameservers:
      - 1.1.1.1
      - 8.8.8.8

📘

Base64 Encoding Values

To ensure no additional newline characters are encoded, use the following command in your terminal: echo -n "$VALUE" | base64

📘

Storing Secret Values as Kubernetes Secret

Some of the configuration values listed above are sensitive. As an alternative to setting them in your values.customer.yml, you can provide sensitive keys and values from your own Kubernetes Secret prior to the agent's installation/upgrade.

Setting the env0ConfigSecretName will instruct the agent to extract the needed values from the given Kubernetes Secret and will override values from the Helm value files.

The Kubernetes Secret must be accessible from the agent, and sensitive values must be Base64 encoded.
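As a sketch, such a Secret might carry sensitive values under the key names from the "Yes:" entries in the configuration table above. The Secret name, namespace, and data values here are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: env0-agent-config      # pass this name via env0ConfigSecretName
  namespace: env0-agent
type: Opaque
data:
  # values must be Base64 encoded, e.g. echo -n "$VALUE" | base64
  INFRACOST_API_KEY: aWNvLXBsYWNlaG9sZGVy             # placeholder
  BITBUCKET_SERVER_CREDENTIALS: dXNlcm5hbWU6dG9rZW4=  # "username:token"
```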

Further Configuration

The env0 agent externalizes a wide array of values that may be set to configure the agent.

We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-release configuration is required.

For such advanced cases, see this reference example of utilizing Kustomize alongside Helm Post Rendering to further customize our chart.

Job Limits

You may wish to limit the number of concurrent runs. To do so, add a ResourceQuota to the agent namespace with a count/jobs.batch parameter.

See here for more details.
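As a sketch, a ResourceQuota capping the agent namespace at five concurrent deployment jobs (the quota name and the limit are placeholders to adjust):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: env0-job-limit      # placeholder name
  namespace: env0-agent
spec:
  hard:
    count/jobs.batch: "5"   # at most 5 Jobs may exist in the namespace
```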

Installation

  1. Add our Helm Repo

    helm repo add env0 https://env0.github.io/self-hosted
  2. Update Helm Repo

    helm repo update
  3. Download the configuration file: <your_agent_key>_values.yaml from Organization Settings -> Agents tab

  4. Install the Helm chart
    helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent -f <your_agent_key>_values.yaml -f values.customer.yaml
    # values.customer.yaml should contain any optional configuration options as detailed above
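As a sketch, a values.customer.yaml enabling a few of the optional features from the configuration table above (all values are placeholders):

```yaml
infracostApiKeyEncoded: "aWNvLXBsYWNlaG9sZGVy"   # Cost Estimation (placeholder)
strictSecurityContext: true                      # run pods as non-root
tolerations:                                     # example toleration (placeholder)
  - key: "dedicated"
    operator: "Equal"
    value: "env0-agent"
    effect: "NoSchedule"
```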
📘

TF example

Example for helm install

The Helm provider version must be >= 2.5.0

📘

Installing from source

If you decide not to install the Helm chart from our Helm repo and want to install from the source code (for example, by using git clone), you might need to run: helm dependency build <path-to-the-source-code>

Upgrade

helm upgrade env0-agent env0/env0-agent --namespace env0-agent
📘

Upgrade Changes

Previously, you had to download the values.yaml file; this is no longer required for an upgrade. However, we recommend keeping the version of the values.yaml file you used to install the agent, in case a rollback is required during the upgrade process.

🚧

Custom Agent Docker Image

If you extended the agent's Docker image, you should update the agent version in your custom image as well.

Verify Installation/Upgrade

After installing a new version of the env0 agent Helm chart, we highly recommend verifying the installation by running:

helm test env0-agent --namespace env0-agent --logs --timeout 1m

Using the helm template command

As an alternative to using Helm to install the agent directly, you can use helm template to generate the K8s YAML files. You can then apply those files with a different K8s pipeline, such as kubectl apply or ArgoCD.

To generate the YAML files using helm template, first add the env0 Helm chart repo:

helm repo add env0 https://env0.github.io/self-hosted
helm repo update

Then, run the following command.

If your Kubernetes cluster is version 1.21 and up:

helm template env0-agent env0/env0-agent --kube-version=<KUBERNETES_VERSION> --api-version=batch/v1/CronJob -n <MY_NAMESPACE> -f values.yaml

If your Kubernetes cluster version is less than 1.21:

helm template env0-agent env0/env0-agent --kube-version=<KUBERNETES_VERSION> -n <MY_NAMESPACE> -f values.yaml
  • <KUBERNETES_VERSION> is the version of your kubernetes cluster
  • <MY_NAMESPACE> is the k8s namespace in which the agent will be installed
  • values.yaml is the values file downloaded from env0's Organization Settings -> Agents tab. You can also add your own custom values into said file.
🚧

Using env0ConfigSecretName with the helm template command

If using helm template, the feature that checks the Kubernetes secret defined by the env0ConfigSecretName Helm value (to determine whether the PVC should be created) will not function, as it relies on an active connection to the cluster.

Outbound Domains

The agent needs outbound access to the following domains:

| Wildcard | Used by |
| --- | --- |
| *.env0.com, *.amazonaws.com | env0 SaaS platform; the agent needs to communicate with the SaaS platform |
| ghcr.io | GitHub Docker registry, which holds the agent's Docker container |
| *.hashicorp.com | Downloading Terraform binaries |
| registry.terraform.io | Downloading public modules from the Terraform Registry |
| registry.opentofu.org | Downloading public modules from the OpenTofu Registry |
| github.com, gitlab.com, bitbucket.org | Git VCS providers (ports 22, 9418, 80, 443) |
| api.github.com | Terragrunt installation |
| *.infracost.io | Cost estimation by Infracost |
| openpolicyagent.org | Installing Open Policy Agent, used for Approval Policies |
  • Make sure to allow access to your cloud providers, VCS domains, and any other tool that creates an outbound request.
🚧

Firewall Rules

Note that if your cluster is behind a managed firewall, you might need to whitelist the cluster API server's FQDN and corresponding public IP.