Self Hosted Kubernetes Agent

Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.

  • Execution is contained on your own servers/infrastructure.
  • The agent requires an internet connection but no inbound network access.
  • Secrets can be stored on your own infrastructure.

🚧

Feature Availability

Self-hosted agents are only available to Business and Enterprise level customers. Click here for more details

Requirements

📘

Installation Tip

Use our k8s-modules repo, which contains Terraform code for an easier cluster installation. You can use the main provider folder for a "full-blown" installation, or a specific module to satisfy individual requirements.

Autoscaler (Recommended, but optional)

  • While optional, configuring horizontal auto-scaling allows your cluster to adapt to the deployment concurrency your env0 usage requires. Otherwise, your deployment concurrency will be limited by the cluster's capacity. Please also see Job Limits if you wish to control the maximum number of concurrent deployments.
  • The env0 agent will scale pods up and down according to deployment usage.
  • A pod running a single deployment requires at least cpu: 460m and memory: 1500Mi, so the cluster nodes must be able to provide this resource request. Limits can be adjusted by providing custom configuration during chart installation (see the example below).
  • Minimum node requirements: an instance with at least 2 CPU and 8GiB memory

For the EKS cluster, you can use this TF example
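For illustration, adjusting the deployment pod resources through the customer Helm values file might look like the following sketch. The limits/requests keys are listed in the configuration table further below; the nested structure shown here follows those key names, and the values are the recommended sizes, not a requirement:

    # values.customer.yml - illustrative sketch only
    # Adjust deployment pod resources; key names are taken from the configuration list below.
    limits:
      cpu: "1.5"
      memory: 3Gi
    requests:
      cpu: "1.5"
      memory: 3Gi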

Persistent Volume / Storage Class (optional)

  • env0 will store the deployment state and working directory on a persistent volume in the cluster.
  • Must support Dynamic Provisioning and ReadWriteMany access mode.
  • The requested storage space is 300Gi.
  • The cluster must include a StorageClass named env0-state-sc.
  • The StorageClass should be set up with reclaimPolicy: Retain, so that data is not lost if the agent needs to be replaced or uninstalled. A sketch of such a StorageClass is shown below.
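For example, on AWS with the EFS CSI driver, the required StorageClass might look like the following sketch. Only the name env0-state-sc and reclaimPolicy: Retain are actual requirements; the provisioner and its parameters are EFS-specific assumptions, and the file system ID is a placeholder:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: env0-state-sc            # required name
    provisioner: efs.csi.aws.com     # example: AWS EFS CSI driver (supports ReadWriteMany)
    reclaimPolicy: Retain            # keep data if the agent is replaced or uninstalled
    parameters:
      provisioningMode: efs-ap       # EFS access-point based dynamic provisioning
      fileSystemId: fs-xxxxxxxx      # placeholder EFS file system ID
      directoryPerms: "700"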

We recommend the following implementations for the major cloud providers:

  • AWS: EFS CSI. For the EKS cluster, you can use this TF example - EFS CSI-Driver/StorageClass.
  • GCP: Filestore, OpenSource NFS.
  • Azure: Azure Files.

📘

PVC Alternative

By default, the deployment state and working directory are stored in a PV (Persistent Volume) configured on your Kubernetes cluster. If PV creation or management is difficult, or not required, you can use env0-Hosted Encrypted State with env0StateEncryptionKey.
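For illustration, enabling this alternative in the customer Helm values file is a one-line sketch; the key is described in the configuration list below, and the value is a placeholder:

    # values.customer.yml - illustrative sketch only
    # Base64 encoded encryption key, e.g. produced with: echo -n $MY_KEY | base64
    env0StateEncryptionKey: "<base64-encoded-key>"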

Sensitive Secrets

  • As self-hosted agents allow you to store secrets on your own infrastructure, secrets stored in the env0 platform cannot be used with self-hosted agents.
  • Customers using self-hosted agents may use their own Kubernetes Secret to store sensitive values - see env0ConfigSecretName below
  • If you are migrating from the SaaS to a self-hosted agent, deployments attempting to use these secrets will fail.
  • This includes sensitive configuration variables, SSH keys, and cloud deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed below.
  • In order to use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed in the configuration list below; an example is shown after this list.
  • Storing secrets is supported using these secret stores:
      • AWS Secrets Manager: reference format ${ssm:<secret-name>}. The secret's region is set by the awsSecretsRegion Helm value (defaults to us-east-1). The role must have the secretsmanager:GetSecretValue permission.
      • GCP Secret Manager: reference format ${gcp:<secret-id>}. Uses your GCP project's default region. Access to the secret must be possible using the customerGoogleCredentials configuration or GKE Workload Identity. The customerGoogleProject configuration must be supplied and will be used to access secrets in that project only. The secretmanager.versions.access permission is required.
      • Azure Key Vault: reference format ${azure:<secret-name>@<vault-name>}. Uses your Azure subscription's default region.
      • HashiCorp Vault: reference format ${vault:<path>.<key>@<namespace>}, where @<namespace> is optional.
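For example, authenticating the agent to AWS Secrets Manager through the customer Helm values file might look like the following sketch. The keys are listed in the configuration list below; the values are placeholders. A sensitive value would then be referenced as ${ssm:<secret-name>}:

    # values.customer.yml - illustrative sketch for AWS Secrets Manager authentication
    # Credential values must be Base64 encoded, e.g.: echo -n $VALUE | base64
    customerAwsAccessKeyIdEncoded: "<base64-encoded-access-key-id>"
    customerAwsSecretAccessKeyEncoded: "<base64-encoded-secret-access-key>"
    awsSecretsRegion: us-east-1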

Custom/Optional Configuration

env0 provides a Helm values.yml file containing the env0-supplied configuration.
The customer will need to provide a values.customer.yml with any of the following optional values to enable specific features; an example file is shown after the list:

Each entry below lists the key(s), a short description, the feature the key is required for (where applicable), and whether the value can instead be provided via a Kubernetes Secret, and under which Secret key (see env0ConfigSecretName and the note on storing secret values further below).

  • dockerImage, agentImagePullSecret - Custom Docker image URI and Base64 encoded .dockerconfigjson contents. Feature: Custom Docker image; see Using a custom image in an agent. Kubernetes Secret: No.
  • infracostApiKeyEncoded - Base64 encoded Infracost API key. Feature: Cost Estimation. Kubernetes Secret: Yes, via INFRACOST_API_KEY.
  • assumerKeyIdEncoded, assumerSecretEncoded - Base64 encoded AWS Access Key ID & Secret. Feature: AWS Assume role for deploy credentials; also see Authenticating the agent on AWS EKS. Kubernetes Secret: Yes, via ASSUMER_ACCESS_KEY_ID and ASSUMER_SECRET_ACCESS_KEY.
  • limits.cpu, limits.memory - Container resource limits (read more about resource allocation). Recommended: cpu: 1.5 and memory: 3Gi. Feature: Custom deployment pod size. Kubernetes Secret: No.
  • requests.cpu, requests.memory - Container resource requests. Recommended: cpu: 1.5 and memory: 3Gi. Feature: Custom deployment container resources. Kubernetes Secret: No.
  • tolerations - An array of toleration objects to apply to the deployment containers. Feature: Custom tolerations. Kubernetes Secret: No.
  • affinity - Allows you to constrain which nodes env0 pods are eligible to be scheduled on (see docs). Feature: Custom node affinity. Kubernetes Secret: No.
  • deploymentAffinity - Affinity for deployment pods. This will override the default affinity for deployment pods. Feature: Custom node affinity. Kubernetes Secret: No.
  • customerAwsAccessKeyIdEncoded, customerAwsSecretAccessKeyEncoded, awsSecretsRegion - Base64 encoded AWS Access Key ID & Secret. Requires the secretsmanager:GetSecretValue permission. Feature: Using AWS Secrets Manager to store secrets for the agent. Kubernetes Secret: Yes, via CUSTOMER_AWS_ACCESS_KEY_ID and CUSTOMER_AWS_SECRET_ACCESS_KEY.
  • customerGoogleProject, customerGoogleCredentials - Base64 encoded GCP project name and JSON service-key contents. Requires the Secret Manager Secret Accessor role. These credentials are not used for the deployment itself. If deploymentJobServiceAccountName is set, Workload Identity will override any supplied credentials. Feature: Using GCP Secret Manager to store secrets for the agent. Kubernetes Secret: Yes, via CUSTOMER_GOOGLE_PROJECT and CUSTOMER_GOOGLE_CREDENTIALS.
  • customerAzureClientId, customerAzureClientSecret, customerAzureTenantId - Base64 encoded Azure credentials. Feature: Using Azure Key Vault secrets to store secrets for the agent. Kubernetes Secret: Yes, via CUSTOMER_AZURE_CLIENT_ID, CUSTOMER_AZURE_CLIENT_SECRET and CUSTOMER_AZURE_TENANT_ID.
  • customerVaultTokenEncoded, customerVaultUrl - (Deprecated) Base64 encoded HCP Vault token, and the cluster's URL (also Base64 encoded). Feature: Using HCP Vault to store secrets for the agent. Kubernetes Secret: Yes, via CUSTOMER_VAULT_TOKEN and CUSTOMER_VAULT_ADDRESS.
  • vault - Set HCP Vault authentication. First, set the cluster's URL with address (equivalent to VAULT_ADDR). Then choose one of the following authentication types:
      • By VAULT_TOKEN: encodedToken
      • By Username & Password: username and encodedPassword (Base64 encoded)
      • By Role & Service Account Token (JWT): role and loginPath (default is kubernetes). The JWT is expected to be supplied by Kubernetes at /var/run/secrets/kubernetes.io/serviceaccount/token. You can read more about service accounts here.
    Communication is based on the v1 HTTP API. Feature: Using HCP Vault to store secrets for the agent. Kubernetes Secret: No.
  • bitbucketServerCredentialsEncoded - Base64 encoded Bitbucket Server credentials in the format username:token (using a Personal Access Token). Feature: On-premise Bitbucket Server installation. Kubernetes Secret: Yes, via BITBUCKET_SERVER_CREDENTIALS.
  • gitlabEnterpriseCredentialsEncoded - Base64 encoded GitLab Enterprise credentials in the form of a Personal Access Token. Feature: On-premise GitLab Enterprise installation. Kubernetes Secret: Yes, via GITLAB_ENTERPRISE_CREDENTIALS.
  • gitlabEnterpriseBaseUrlSuffix - In cases where your GitLab instance base URL is not at the root of the URL but at a separate path, e.g. https://gitlab.acme.com/prod, set that added suffix in this value, e.g. gitlabEnterpriseBaseUrlSuffix=prod. Feature: On-premise GitLab Enterprise installation. Kubernetes Secret: No.
  • githubEnterpriseAppId, githubEnterpriseAppClientId, githubEnterpriseAppInstallationId, githubEnterpriseAppClientSecretEncoded, githubEnterpriseAppPrivateKeyEncoded - GitHub Enterprise integration (see step 3). Feature: On-premise GitHub Enterprise installation. Kubernetes Secret: Yes, via GITHUB_ENTERPRISE_APP_CLIENT_SECRET and GITHUB_ENTERPRISE_APP_PRIVATE_KEY.
  • allowedVcsUrlRegex - When set, cloning a git repository will only be permitted if the git URL matches the regular expression. Feature: VCS URL Whitelisting. Kubernetes Secret: No.
  • customCertificates - An array of strings, each the name of a Kubernetes secret that contains custom CA certificates. Those certificates will be available during deployments. Feature: Custom CA Certificates; more details here. Kubernetes Secret: No.
  • gitSslNoVerify - When set to true, cloning a git repo will not verify SSL/TLS certs. Feature: Ignoring SSL/TLS certs for on-premise git servers. Kubernetes Secret: No.
  • storageClassName - Changes the default PVC storage class name for the env0 self-hosted agent. The default is env0-state-sc. Note that when you change this, you should also change your storage class name to match this configuration. Kubernetes Secret: No.
  • deploymentJobServiceAccountName - Customize the Kubernetes service account used by the deployment pod. Primarily for pod-level IAM permissions. The default is default. Kubernetes Secret: No.
  • jobHistoryLimitFailure, jobHistoryLimitSuccess - How many successful and failed deployment jobs should be kept in the Kubernetes cluster history. The default is 10 for each. Kubernetes Secret: No.
  • strictSecurityContext - When set to true, the pod operates under a node user instead of root. Feature: Increased agent pod security. Kubernetes Secret: No.
  • env0StateEncryptionKey - A Base64 encoded string (password). When set, the deployment state and working directory will be encrypted and persisted on env0's end. Feature: env0-Hosted Encrypted State. Kubernetes Secret: Yes, via ENV0_STATE_ENCRYPTION_KEY.
  • environmentOutputEncryptionKey - A Base64 encoded string (password) used to encrypt and decrypt Environment Outputs; enables the "Environment Outputs" feature. Feature: Use environment outputs as inputs. Kubernetes Secret: No.
  • logger - Logger config: level (debug/info/warn/error) and format (json/cli). Kubernetes Secret: No.
  • agentImagePullPolicy - Sets the imagePullPolicy attribute: Always/Never/IfNotPresent. Kubernetes Secret: No.
  • agentProxy - Agent proxy pod config: install (true/false) and limits (k8s cpu & memory limits; the default is 250m and 500Mi). Kubernetes Secret: No.
  • deploymentPodWarmPoolSize - The number of deployment pods that should be left "warm" (running & idle) and ready for new deployments. Kubernetes Secret: No.
  • podAdditionalEnvVars - Additional environment variables to be passed to the agent pods, which will also be passed to the deployment process. These are set as a plain YAML object, e.g.:

        "podAdditionalEnvVars":
          "MY_SECRET": "akeyless:/K8s/my_k8s_secret"

    Kubernetes Secret: No.
  • podAdditionalLabels - Additional labels to be set on deployment pods. These are set as a plain YAML object, e.g.:

        "podAdditionalLabels":
          "mykey": "myvalue"

    Kubernetes Secret: No.
  • podAdditionalAnnotations - Additional annotations to be set on deployment pods. These are set as a plain YAML object, e.g.:

        "podAdditionalAnnotations":
          "mykey": "myvalue"

    Kubernetes Secret: No.
  • agentAdditionalLabels - Additional labels to be set on agent (trigger/proxy) pods. These are set as a plain YAML object, e.g.:

        "agentAdditionalLabels":
          "mykey": "myvalue"

    Kubernetes Secret: No.
  • agentAdditionalAnnotations - Additional annotations to be set on agent (trigger/proxy) pods. These are set as a plain YAML object, e.g.:

        "agentAdditionalAnnotations":
          "mykey": "myvalue"

    Kubernetes Secret: No.
  • customSecrets - Mount custom secrets, e.g.:

        customSecrets:
          - envVarName: MY_SECRET
            secretName: my-secrets-1
            key: db_password

    Feature: Mount custom secrets. Kubernetes Secret: No.
  • customSecretMounts - Mount secret files to a given mountPath, e.g.:

        customSecretMounts:
          - volumeName: my-secrets-1
            secretName: my-secrets-1
            mountPath: /opt/secret1

    Feature: Mount secret files to a given mountPath. Kubernetes Secret: No.
  • env0ConfigSecretName - A Kubernetes Secret name. Can be used to provide the sensitive values listed above, instead of providing them in the Helm values files. Kubernetes Secret: No.
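As an illustration, a values.customer.yml combining a few of the options above might look like the following sketch. All values are placeholders, include only the keys you need, and the nesting shown for logger follows the sub-keys described in the list:

    # values.customer.yml - illustrative sketch only; key names come from the list above
    logger:
      level: info
      format: json
    agentImagePullPolicy: IfNotPresent
    jobHistoryLimitSuccess: 10
    jobHistoryLimitFailure: 10
    deploymentPodWarmPoolSize: 1
    strictSecurityContext: true
    podAdditionalLabels:
      "team": "platform"   # hypothetical label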

📘

Base64 Encoding Values

To ensure no additional new line characters are being encoded, please use the following command in your terminal:
echo -n $VALUE | base64

📘

Storing Secret Values as Kubernetes Secret

Some of the configuration values listed above are sensitive.
As an alternative to setting them in your values.customer.yml, you can provide sensitive keys and values from your own Kubernetes Secret prior to the agent's installation/upgrade.

Setting env0ConfigSecretName instructs the agent to extract the needed values from the given Kubernetes Secret; these values override the ones in the Helm values files.

The Kubernetes Secret must be accessible from the agent and sensitive values must be Base64 encoded.
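For example, providing the Infracost API key through a Kubernetes Secret might look like the following sketch. The Secret name env0-sensitive-config is hypothetical; the INFRACOST_API_KEY key comes from the configuration list above. You would then set env0ConfigSecretName: env0-sensitive-config in your values.customer.yml:

    # Hypothetical Secret; create it in the agent's namespace before installing/upgrading
    apiVersion: v1
    kind: Secret
    metadata:
      name: env0-sensitive-config
      namespace: env0-agent        # assumes the agent is installed into the env0-agent namespace
    type: Opaque
    data:
      # Base64 encoded, e.g.: echo -n $INFRACOST_API_KEY | base64
      INFRACOST_API_KEY: <base64-encoded-api-key>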

Further Configuration

The env0 agent externalizes a wide array of values that may be set to configure the agent.

We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-released configuration is required.

For such advanced cases, see this reference example of utilizing Kustomize alongside Helm Post Rendering to further customize our chart.
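As a rough sketch only (the referenced example is the source of truth), a Kustomize post-renderer is typically a kustomization.yaml like the one below plus a small wrapper script that writes Helm's rendered manifests to a file (all.yaml here, a hypothetical name), runs kubectl kustomize, and is passed to helm install/upgrade via --post-renderer:

    # kustomization.yaml - illustrative sketch; file and patch names are hypothetical
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - all.yaml                 # the manifests rendered by Helm
    patches:
      - path: agent-patch.yaml   # your custom patch, e.g. extra env vars or annotations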

Job Limits

You may want to limit the number of concurrent runs. To do so, add a Resource Quota to the agent namespace with a count/jobs.batch parameter, as shown below. See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.
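For example, the following sketch caps the namespace at 5 Job objects; the quota name and the limit of 5 are arbitrary examples:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: env0-deployment-job-limit   # hypothetical name
      namespace: env0-agent
    spec:
      hard:
        count/jobs.batch: "5"   # at most 5 Job objects may exist in the namespace at once
                                # note: finished Jobs kept by jobHistoryLimit* also count toward this quota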

Installation

  1. Add our Helm Repo

    helm repo add env0 https://env0.github.io/self-hosted
    
  2. Update Helm Repo

    helm repo update
    
  3. Download the configuration file: <your_agent_key>_values.yaml from Organization Settings -> Agents tab

  4. Install the Helm Chart
    helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent -f <your_agent_key>_values.yaml -f values.customer.yaml
    # values.customer.yaml should contain any optional configuration options as detailed above
    

📘

TF example

Example for helm install

Upgrade

helm upgrade env0-agent env0/env0-agent --namespace env0-agent

📘

Upgrade Changes

You were previously required to download the values.yaml file; this is no longer needed for an upgrade.

🚧

Custom Agent Docker Image

If you extended the agent's Docker image, you should also update the agent version in your custom image.

Verify Installation/Upgrade

After installing a new version of the env0 agent Helm chart, it is highly recommended to verify the installation by running:

helm test env0-agent --namespace env0-agent --logs --timeout 1m

Outbound Domains

The agent needs outbound access to the following domains:

  • *.env0.com, *.amazonaws.com: the env0 SaaS platform; the agent needs to communicate with the SaaS platform.
  • ghcr.io: the GitHub Docker registry which holds the Docker container of the agent.
  • *.hashicorp.com: downloading Terraform binaries.
  • registry.terraform.io: downloading public modules from the Terraform Registry.
  • registry.opentofu.org: downloading public modules from the OpenTofu Registry.
  • github.com, gitlab.com, bitbucket.org: Git VCS providers (ports 22, 9418, 80, 443).
  • api.github.com: Terragrunt installation.
  • *.infracost.io: cost estimation by Infracost.
  • Make sure to allow access to your cloud providers, VCS domains, and any other tool that creates an outbound request.

🚧

Firewall Rules

Note that if your cluster is behind a managed firewall, you might need to whitelist the cluster API server's FQDN and its corresponding public IP.