Self Hosted Kubernetes Agent

Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.

  • Execution is contained within your own servers/infrastructure
  • The agent requires an internet connection but no inbound network access.
  • Secrets can be stored on your own infrastructure.

🚧

Feature Availability

Self-hosted agents are only available to Business and Enterprise tier customers. Click here for more details.

Requirements

📘

Cluster Installation

The Agent can be run on an existing Kubernetes cluster in a dedicated namespace, or you can create a cluster just for the agent.

Use our k8s-modules repository, which contains Terraform code for easier cluster installation. You can use the main provider folder for a complete installation, or a specific module to fulfill only certain requirements.

Autoscaler (recommended, but optional)

  • While optional, configuring horizontal auto-scaling allows your cluster to adapt to your concurrency and deployment requirements based on your env0 usage. Otherwise, your deployment concurrency will be limited by the cluster's capacity. Please also see Job Limits if you wish to control the maximum number of concurrent deployments.
  • The env0 agent will create a new pod for each deployment you run on env0.
    Pods are ephemeral and will be destroyed after a single deployment.
  • A pod running a single deployment requires at least cpu: 460m and memory: 1500Mi, so the cluster nodes must be able to satisfy these resource requests. Limits can be adjusted by providing custom configuration during chart installation (see the example below).
  • Minimum node requirements: an instance with at least 2 CPU and 8GiB of memory.

For the EKS cluster, you can use this TF example.
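
As a sketch, the resource requests and limits of the deployment pods can be overridden in your values.customer.yml using the requests and limits keys described under Custom/Optional Configuration below; the values shown here are only the recommended sizes, not a requirement:

    # values.customer.yml – illustrative resource settings for deployment pods
    requests:
      cpu: "1.5"
      memory: 3Gi
    limits:
      cpu: "1.5"
      memory: 3Gi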

Persistent Volume/Storage Class (optional)

  • env0 will store the deployment state and working directory on a persistent volume in the cluster.
  • Must support Dynamic Provisioning and ReadWriteMany access mode.
  • The requested storage space is 300Gi.
  • The cluster must include a StorageClass named env0-state-sc.
  • The Storage Class should be set up with reclaimPolicy: Retain, to prevent data loss in case the agent needs to be replaced or uninstalled.

We recommend the following implementations for the major cloud providers:

| Cloud | Solution |
| --- | --- |
| AWS | EFS CSI. For the EKS cluster, you can use this TF example - EFS CSI-Driver/StorageClass |
| GCP | Filestore, OpenSource NFS |
| Azure | Azure Files |
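
For illustration, a StorageClass that meets the requirements above on AWS with the EFS CSI driver might look like the following sketch; the file system ID is a placeholder and the parameters assume the EFS CSI driver is already installed in the cluster:

    # Illustrative only – StorageClass for the env0 agent backed by AWS EFS
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: env0-state-sc                  # the name the agent expects by default
    provisioner: efs.csi.aws.com           # assumes the EFS CSI driver is installed
    reclaimPolicy: Retain                  # prevents data loss if the agent is replaced
    parameters:
      provisioningMode: efs-ap             # dynamic provisioning via EFS access points
      fileSystemId: fs-0123456789abcdef0   # placeholder – your EFS file system ID
      directoryPerms: "700"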

📘

PVC Alternative

By default, the deployment state and working directory are stored in a PV (Persistent Volume) configured on your Kubernetes cluster. If PV creation or management is difficult, or simply not required, you can use env0-Hosted Encrypted State with env0StateEncryptionKey instead.
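
A minimal sketch of enabling this alternative in values.customer.yml, assuming you have generated your own key (the value below is a Base64-encoded placeholder; see "Base64 Encoding Values" further down):

    # values.customer.yml – illustrative snippet only
    env0StateEncryptionKey: "bXktdmVyeS1zZWNyZXQtcGFzc3dvcmQ="   # placeholder key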

Sensitive Secrets

  • Secrets stored on the env0 platform cannot be used with self-hosted agents, since self-hosted agents are designed to keep secrets on your own infrastructure.
  • Customers using self-hosted agents may use their own Kubernetes Secret to store sensitive values - see env0ConfigSecretName below.
  • If you are migrating from SaaS to a self-hosted agent, deployments that still reference secrets stored on the env0 platform will fail.
  • This includes sensitive configuration variables, SSH keys, and Cloud Deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed in the table below.
  • In order to use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed below.
  • Storing secrets is supported using these secret stores:
| Secret store | Secret reference format | Secret Region & Permissions |
| --- | --- | --- |
| AWS Secrets Manager | ${ssm:<secret-name>} | Set by the awsSecretsRegion Helm value; defaults to us-east-1. The role must have the secretsmanager:GetSecretValue permission. |
| GCP Secret Manager | ${gcp:<secret-id>} | Your GCP project's default region. Access to the secret must be possible using the customerGoogleCredentials configuration or using GKE Workload Identity. The customerGoogleProject configuration must be supplied and will be used to access secrets in that project only. The secrets.versions.access permission is required. |
| Azure Key Vault | ${azure:<secret-name>@<vault-name>} | Your Azure subscription's default region |
| HashiCorp Vault | ${vault:<path>.<key>@<namespace>}, where @<namespace> is optional | |
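
For example, the value of a sensitive configuration variable would be replaced with a reference in one of the formats above; the secret names, vault, path, and key below are purely illustrative:

    DB_PASSWORD: ${ssm:prod/db-password}                 # AWS Secrets Manager
    API_TOKEN: ${gcp:api-token}                          # GCP Secret Manager
    TLS_CERT_PASSWORD: ${azure:tls-cert-pass@my-vault}   # Azure Key Vault
    APP_SECRET: ${vault:secret/data/my-app.password}     # HashiCorp Vault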

Custom/Optional Configuration

env0 provides a Helm values.yml file containing its own base configuration.
To enable specific features, you can provide a values.customer.yml with any of the following (optional) values:

| Keys | Description | Required for feature | Can be provided via a Kubernetes Secret? |
| --- | --- | --- | --- |
| dockerImage, agentImagePullSecret | Custom Docker image URI and Base64-encoded .dockerconfigjson contents | Custom Docker image. See Using a custom image in an agent | No |
| agentImagePullSecretRef | A reference to a k8s secret name that holds a Docker pull image token | Custom Docker image hosted on a private Docker registry | No |
| infracostApiKeyEncoded | Base64-encoded Infracost API key | Cost Estimation | Yes: INFRACOST_API_KEY |
| assumerKeyIdEncoded, assumerSecretEncoded | Base64-encoded AWS Access Key ID & Secret | AWS Assume Role for deploy credentials. Also see Authenticating the agent on AWS EKS | Yes: ASSUMER_ACCESS_KEY_ID, ASSUMER_SECRET_ACCESS_KEY |
| limits.cpu, limits.memory | Container resource limits. Read more about resource allocation. Recommended cpu: 1.5 and memory: 3Gi | Custom deployment pod size | No |
| requests.cpu, requests.memory | Container resource requests. Recommended cpu: 1.5 and memory: 3Gi | Custom deployment container resources | No |
| tolerations | An array of toleration objects to apply to all pods. See docs | Custom tolerations | No |
| deploymentTolerations | An array of toleration objects targeting the deployment pods. This overrides the default tolerations for deployment pods. | Custom tolerations | No |
| affinity | Constrains which nodes env0 pods are eligible to be scheduled on. See docs | Custom node affinity | No |
| deploymentAffinity | Affinity for deployment pods. This overrides the default affinity for deployment pods. | Custom node affinity | No |
| customerAwsAccessKeyIdEncoded, customerAwsSecretAccessKeyEncoded, awsSecretsRegion | Base64-encoded AWS Access Key ID & Secret. Requires the secretsmanager:GetSecretValue permission | Using AWS Secrets Manager to store secrets for the agent | Yes: CUSTOMER_AWS_ACCESS_KEY_ID, CUSTOMER_AWS_SECRET_ACCESS_KEY |
| customerGoogleProject, customerGoogleCredentials | Base64-encoded GCP project name and JSON service-key contents. Requires the Secret Manager Secret Accessor role. These credentials are not used for the deployment itself. If deploymentJobServiceAccountName is set, Workload Identity overrides any supplied credentials. | Using GCP Secret Manager to store secrets for the agent | Yes: CUSTOMER_GOOGLE_PROJECT, CUSTOMER_GOOGLE_CREDENTIALS |
| customerAzureClientId, customerAzureClientSecret, customerAzureTenantId | Base64-encoded Azure credentials | Using Azure Key Vault secrets to store secrets for the agent | Yes: CUSTOMER_AZURE_CLIENT_ID, CUSTOMER_AZURE_CLIENT_SECRET, CUSTOMER_AZURE_TENANT_ID |
| customerVaultTokenEncoded, customerVaultUrl | (Deprecated) Base64-encoded HCP Vault token, and the Vault's URL (also Base64-encoded) | Using HCP Vault to store secrets for the agent | Yes: CUSTOMER_VAULT_TOKEN, CUSTOMER_VAULT_ADDRESS |
| vault | HCP Vault authentication settings. First, set the cluster's URL with address (equivalent to VAULT_ADDR). Then choose one of the following authentication types: by VAULT_TOKEN (encodedToken); by username & password (username and encodedPassword, Base64-encoded); or by role & service account token (JWT) (role and loginPath, which defaults to kubernetes). The JWT is expected to be supplied by Kubernetes at /var/run/secrets/kubernetes.io/serviceaccount/token. You can read more about service accounts here. Communication is based on the v1 HTTP API. | Using HCP Vault to store secrets for the agent | No |
| bitbucketServerCredentialsEncoded | Base64-encoded Bitbucket Server credentials in the format username:token (using a Personal Access Token) | On-premise Bitbucket Server installation | Yes: BITBUCKET_SERVER_CREDENTIALS |
| gitlabEnterpriseCredentialsEncoded | Base64-encoded GitLab Enterprise credentials in the form of a Personal Access Token | On-premise GitLab Enterprise installation | Yes: GITLAB_ENTERPRISE_CREDENTIALS |
| gitlabEnterpriseBaseUrlSuffix | If your GitLab instance base URL is not at the root of the URL but on a separate path, e.g. https://gitlab.acme.com/prod, set that added suffix in this value: gitlabEnterpriseBaseUrlSuffix=prod | On-premise GitLab Enterprise installation | No |
| githubEnterpriseAppId, githubEnterpriseAppClientId, githubEnterpriseAppInstallationId, githubEnterpriseAppClientSecretEncoded, githubEnterpriseAppPrivateKeyEncoded | GitHub Enterprise Integration (see step 3) | On-premise GitHub Enterprise installation | Yes: GITHUB_ENTERPRISE_APP_CLIENT_SECRET, GITHUB_ENTERPRISE_APP_PRIVATE_KEY |
| allowedVcsUrlRegex | When set, cloning a git repository will only be permitted if the git URL matches the regular expression set | VCS URL Whitelisting | No |
| customCertificates | An array of strings, each the name of a Kubernetes secret that contains custom CA certificates. Those certificates will be available during deployments. | Custom CA Certificates. More details here | No |
| gitSslNoVerify | When set to true, cloning a git repo will not verify SSL/TLS certs | Ignoring SSL/TLS certs for on-premise git servers | No |
| storageClassName | Ability to change the default PVC storage class name for the env0 self-hosted agent | The default is env0-state-sc. Please note: when changing this, you should also change your storage class name to match this configuration. | No |
| deploymentJobServiceAccountName | Customize the Kubernetes service account used by the deployment pod. Primarily for pod-level IAM permissions | The default is default | No |
| jobHistoryLimitFailure, jobHistoryLimitSuccess | The number of successful and failed deployment jobs that should be kept in the Kubernetes cluster history | The default is 10 for each value | No |
| strictSecurityContext | When set to true, the pod runs as the non-root node user instead of root | Increased agent pod security | No |
| env0StateEncryptionKey | A Base64-encoded string (password). When set, the deployment state and working directory will be encrypted and persisted on env0's end | env0-Hosted Encrypted State | Yes: ENV0_STATE_ENCRYPTION_KEY |
| environmentOutputEncryptionKey | A Base64-encoded string (password), used to encrypt and decrypt Environment Outputs and enable the "Environment Outputs" feature | Environment Outputs | No |
| logger | Logger config: level - debug/info/warn/error; format - json/cli | | No |
| agentImagePullPolicy | Sets the imagePullPolicy attribute - Always/Never/IfNotPresent | | No |
| agentProxy | Agent proxy pod config: install - true/false; replicas - how many replicas of the agent proxy to use (default is 1); maxConcurrentRequests - how many concurrent requests each pod should handle (default is 500); limits - k8s (cpu and memory) limits (defaults are 250m and 500Mi) | | No |
| deploymentPodWarmPoolSize | The number of deployment pods that should be left "warm" (running & idle) and ready for new deployments | | No |
| podAdditionalEnvVars | Additional environment variables to be passed to the agent pods, which will also be passed to the deployment process. Set as a plain YAML object, e.g. podAdditionalEnvVars: { "MY_SECRET": "akeyless:/K8s/my_k8s_secret" } | | No |
| podAdditionalLabels | Additional labels to be set on deployment pods. Set as a plain YAML object, e.g. podAdditionalLabels: { "mykey": "myvalue" } | | No |
| podAdditionalAnnotations | Additional annotations to be set on deployment pods. Set as a plain YAML object, e.g. podAdditionalAnnotations: { "mykey": "myvalue" } | | No |
| agentAdditionalLabels | Additional labels to be set on agent (trigger/proxy) pods. Set as a plain YAML object, e.g. agentAdditionalLabels: { "mykey": "myvalue" } | | No |
| agentAdditionalAnnotations | Additional annotations to be set on agent (trigger/proxy) pods. Set as a plain YAML object, e.g. agentAdditionalAnnotations: { "mykey": "myvalue" } | | No |
| customSecrets | Mount custom secrets as environment variables, e.g. customSecrets: [{ envVarName: MY_SECRET, secretName: my-secrets-1, key: db_password }] | Mount custom secrets | No |
| customSecretMounts | Mount secret files from a secret at a given mountPath, e.g. customSecretMounts: [{ volumeName: my-secrets-1, secretName: my-secrets-1, mountPath: /opt/secret1 }] | Mount secret files at a given mountPath | No |
| env0ConfigSecretName | A Kubernetes Secret name. Can be used to provide the sensitive values listed in this table, instead of providing them in the Helm values files. | | No |
| customRoleForOidcAwsSsm.duration, customRoleForOidcAwsSsm.arn | Custom role for AWS SSM secret fetching. Note: only used when useOidcForAwsSsm=true | | No |
| useOidcForAwsSsm | When set to true, the agent will be instructed to authenticate to AWS SSM using env0 OIDC | | No |
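
To illustrate how these keys fit together, a small values.customer.yml combining a few of the options above might look like the following sketch; every value shown is a placeholder:

    # values.customer.yml – illustrative combination of optional settings
    infracostApiKeyEncoded: "aWNvLWV4YW1wbGUta2V5"   # Base64 of a hypothetical Infracost API key
    deploymentJobServiceAccountName: env0-deployer   # hypothetical service account name
    jobHistoryLimitSuccess: 5
    jobHistoryLimitFailure: 5
    strictSecurityContext: true
    podAdditionalLabels:
      "team": "platform"
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "env0"
        effect: "NoSchedule"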

📘

Base64 Encoding Values

To ensure no additional new line characters are being encoded, please use the following command in your terminal:
echo -n $VALUE | base64

📘

Storing Secret Values as Kubernetes Secret

Some of the configuration values listed above are sensitive.
As an alternative to setting them in your values.customer.yml, you can provide sensitive keys and values from your own Kubernetes Secret prior to the agent's installation/upgrade.

Setting the env0ConfigSecretName will instruct the agent to extract the needed values from the given Kubernetes Secret and will override values from the Helm value files.

The Kubernetes Secret must be accessible from the agent, and sensitive values must be Base64 encoded.
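
As a sketch, assuming you want to supply the Infracost API key and the AWS assumer credentials this way, the Kubernetes Secret and the corresponding Helm value might look like the following; the secret name and all data values are placeholders:

    # Illustrative Kubernetes Secret in the agent's namespace
    apiVersion: v1
    kind: Secret
    metadata:
      name: env0-agent-config        # hypothetical secret name
      namespace: env0-agent
    data:                            # values are Base64-encoded placeholders
      INFRACOST_API_KEY: aWNvLWV4YW1wbGUta2V5
      ASSUMER_ACCESS_KEY_ID: QUtJQUVYQU1QTEU=
      ASSUMER_SECRET_ACCESS_KEY: ZXhhbXBsZS1zZWNyZXQ=

values.customer.yml would then point the agent at that Secret:

    env0ConfigSecretName: env0-agent-config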

Further Configuration

The env0 agent externalizes a wide array of values that may be set to configure the agent.

We do our best to support all common configuration case scenarios, but sometimes a more exotic or pre-released configuration is required.

For such advanced cases, see this reference example of utilizing Kustomize alongside Helm Post Rendering to further customize our chart.
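
As a rough sketch of that pattern, a post-renderer pipes the rendered chart manifests through Kustomize, so a kustomization.yaml similar to the one below (the resource name and patch are hypothetical) can adjust the output without modifying the chart itself:

    # Hypothetical kustomization.yaml consumed by a Helm post-renderer script
    # that writes the rendered chart output to all.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - all.yaml
    patches:
      - target:
          kind: Deployment
          name: env0-agent-trigger        # placeholder – a resource rendered by the chart
        patch: |-
          - op: add
            path: /spec/template/metadata/annotations/example.com~1patched-by
            value: "kustomize"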

Job Limits

You may wish to limit the number of concurrent runs. To do so, add a ResourceQuota to the agent namespace with a count/jobs.batch parameter, as sketched below. See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.
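
A minimal sketch, assuming the agent runs in the env0-agent namespace and you want to cap it at 5 concurrent deployment jobs (the quota name is arbitrary):

    # Illustrative ResourceQuota limiting the number of Job objects in the agent namespace
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: env0-job-limit           # hypothetical name
      namespace: env0-agent
    spec:
      hard:
        count/jobs.batch: "5"        # at most 5 Jobs may exist in the namespace at once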

Installation

  1. Add our Helm Repo

    helm repo add env0 https://env0.github.io/self-hosted
    
  2. Update Helm Repo

    helm repo update
    
  3. Download the configuration file: <your_agent_key>_values.yaml from Organization Settings -> Agents tab

  4. Install the Helm chart

    helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent -f <your_agent_key>_values.yaml -f values.customer.yaml
    # values.customer.yaml should contain any optional configuration options as detailed above
    
    

🚧

Usage of helm template command

Please note that if you choose to render the Helm template first and apply it using kubectl (a common practice with ArgoCD), the feature that checks the Kubernetes secret defined by the env0ConfigSecretName Helm value to determine whether the PVC should be created will not function, since that check relies on an active connection to the cluster.

📘

TF example

Example for helm install

Upgrade

helm upgrade env0-agent env0/env0-agent --namespace env0-agent

📘

Upgrade Changes

Previously, you would have had to download the values.yaml file. This is no longer required for an upgrade. However, we do recommend keeping the version of the values.yaml file you used to install the agent, in case a rollback is required during the upgrade process.

🚧

Custom Agent Docker Image

If you extended the agent's Docker image, you should also update the agent version in your custom image.

Verify Installation/Upgrade

After installing a new version of the env0 agent helm chart, it is highly recommended to verify the installation by running:

helm test env0-agent --namespace env0-agent --logs --timeout 1m

Outbound Domains

The agent needs outbound access to the following domains:

| Wildcard | Used by |
| --- | --- |
| *.env0.com, *.amazonaws.com | env0 SaaS platform; the agent needs to communicate with the SaaS platform |
| ghcr.io | GitHub Docker registry, which holds the Docker container of the agent |
| *.hashicorp.com | Downloading Terraform binaries |
| registry.terraform.io | Downloading public modules from the Terraform Registry |
| registry.opentofu.org | Downloading public modules from the OpenTofu Registry |
| github.com, gitlab.com, bitbucket.org | Git VCS providers (ports 22, 9418, 80, 443) |
| api.github.com | Terragrunt installation |
| *.infracost.io | Cost estimation by Infracost |
| openpolicyagent.org | Installing Open Policy Agent, used for Approval Policies |
  • Make sure to allow access to your cloud providers, VCS domains, and any other tool that creates an outbound request.

🚧

Firewall Rules

Note that if your cluster is behind a managed firewall, you might need to whitelist the Cluster's API server's FQDN and corresponding Public IP.