Self-Hosted Kubernetes Agent
Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.
- Execution is contained within your own servers/infrastructure.
- The agent requires an internet connection but no inbound network access.
- Secrets can be stored on your own infrastructure.
Feature Availability
Self-hosted agents are only available to Business and Enterprise level customers. Click here for more details.
Requirements
- Kubernetes cluster at version >= 1.24
- Autoscaler
- Persistent Volume/Storage Class (optional)
- AMD64 or ARM64-based nodes. Note: for ARM-based architectures, you must extend from our Lean Image.
- The agent will be installed using a Helm chart.
Cluster Installation
The Agent can be run on an existing Kubernetes cluster in a dedicated namespace, or you can create a cluster just for the agent.
Use our k8s-modules repository, which contains Terraform code for easier cluster installation. You can use the main provider folder for a complete installation, or a specific module to fulfill only certain requirements.
Autoscaler (recommended, but optional)
- While optional, configuring horizontal auto-scaling will allow your cluster to adapt to the concurrency and deployment requirements of your env0 usage. Otherwise, your deployment concurrency will be limited by the cluster's capacity. Please also see Job Limits if you wish to control the maximum number of concurrent deployments.
- The env0 agent will create a new pod for each deployment you run on env0.
Pods are ephemeral and will be destroyed after a single deployment.
- A pod running a single deployment requires at least `cpu: 460m` and `memory: 1500Mi`, so the cluster nodes must be able to provide this resource request. Limits can be adjusted by providing custom configuration during chart installation (see the sketch below).
- Minimum node requirements: an instance with at least 2 CPU and 8GiB memory.
For the EKS cluster, you can use this TF example.
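As a rough illustration, pod sizing can be overridden in your `values.customer.yml`. The sketch below uses the `limits`/`requests` keys from the configuration table further down this page, filled in with the recommended values; the nested YAML layout is the usual Helm mapping of dotted keys and is an assumption here:

```yaml
# values.customer.yml - a minimal sketch of custom deployment pod sizing.
# Key names come from the configuration table below; values are the
# recommended ones, not mandates.
limits:
  cpu: "1.5"
  memory: 3Gi
requests:
  cpu: "1.5"
  memory: 3Gi
```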
Persistent Volume/Storage Class (optional)
- env0 will store the deployment state and working directory on a persistent volume in the cluster.
- Must support Dynamic Provisioning and ReadWriteMany access mode.
- The requested storage space is `300Gi`.
- The cluster must include a `StorageClass` named `env0-state-sc`.
- The Storage Class should be set up with `reclaimPolicy: Retain`, to prevent data loss in case the agent needs to be replaced or uninstalled.
We recommend the following implementations for the major cloud providers:
Cloud | Solution |
---|---|
AWS | EFS CSI. For the EKS cluster, you can use this TF example: EFS CSI-Driver/StorageClass |
GCP | Filestore, OpenSource NFS |
Azure | Azure Files |
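For example, on AWS a matching StorageClass might look roughly like the sketch below. It assumes the EFS CSI driver is installed and an EFS file system already exists; the file system ID is a placeholder:

```yaml
# A sketch of a StorageClass meeting the agent's requirements, assuming
# the AWS EFS CSI driver (efs.csi.aws.com) with access-point provisioning.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: env0-state-sc                  # the name the agent expects by default
provisioner: efs.csi.aws.com
reclaimPolicy: Retain                  # prevents data loss on replace/uninstall
parameters:
  provisioningMode: efs-ap             # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0   # placeholder for your EFS file system ID
  directoryPerms: "700"
```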
PVC Alternative
By default, the deployment state and working directory are stored in a PV (Persistent Volume) configured on your Kubernetes cluster. If PV creation or management is difficult, or simply not required, you can use env0-Hosted Encrypted State with `env0StateEncryptionKey`, as sketched below.
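As an illustration, one possible way to generate a random key and Base64-encode it (see Base64 Encoding Values below) before setting it as `env0StateEncryptionKey`; using openssl here is an assumption, and any strong random string works:

```bash
# Generate a random password, then Base64-encode it without a trailing newline.
key=$(openssl rand -hex 32)
echo -n "$key" | base64
```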
Sensitive Secrets
- Secrets stored on the env0 platform cannot be used with self-hosted agents, since self-hosted agents are designed to let you store secrets on your own infrastructure.
- Customers using self-hosted agents may use their own Kubernetes Secret to store sensitive values - see env0ConfigSecretName below.
- If you are migrating from SaaS to a self-hosted agent, deployments attempting to use platform-stored secrets will fail. This includes sensitive configuration variables, SSH keys, and Cloud Deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed in the table below.
- In order to use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed below.
- Storing secrets is supported using these secret stores:
Secret store | Secret reference format | Secret Region & Permissions |
---|---|---|
AWS Secrets Manager (us-east-1) | ${ssm:<secret-name>} | Set by the awsSecretsRegion helm value. Defaults to us-east-1. The role must have the secretsmanager:GetSecretValue permission |
GCP Secrets Manager | ${gcp:<secret-id>} | Your GCP project's default region. Access to the secret must be possible using the customerGoogleCredentials configuration or using GKE workload identity. The customerGoogleProject configuration must be supplied and will be used to access secrets in that project only. The 'secrets.versions.access' permission is required |
Azure Key Vault | ${azure:<secret-name>@<vault-name>} | Your Azure subscription's default region |
HashiCorp Vault | ${vault:<path>.<key>@<namespace>} where @<namespace> is optional | |
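To illustrate, if the agent authenticates to AWS Secrets Manager, a sensitive value can be referenced as, e.g., `${ssm:prod/db-password}` (a hypothetical secret name), with the agent-side authentication supplied in `values.customer.yml` roughly as follows. The key names come from the configuration table below; the encoded credentials shown are fake:

```yaml
# values.customer.yml: sketch of AWS Secrets Manager authentication.
customerAwsAccessKeyIdEncoded: "QUtJQUZBS0VLRVlJRA=="      # base64 of a fake key ID
customerAwsSecretAccessKeyEncoded: "ZmFrZS1zZWNyZXQta2V5"  # base64 of a fake secret
awsSecretsRegion: us-east-1                                # region where your secrets live
```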
Custom/Optional Configuration
A Helm `values.yml` file with the env0-provided configuration will be supplied by env0.
To enable specific features, the customer will need to provide a `values.customer.yml` with the following (optional) values:
Keys | Description | Required for feature | Can be provided via a Kubernetes Secret? |
---|---|---|---|
dockerImage agentImagePullSecret | Custom Docker image URI and Base64 encoded .dockerconfigjson contents | Custom Docker image. See Using a custom image in an agent | No |
agentImagePullSecretRef | A reference to a k8s secret name that holds a Docker pull image token | Custom Docker image that is hosted on a private Docker registry | No |
infracostApiKeyEncoded | Base64 encoded Infracost API key | Cost Estimation | Yes:INFRACOST_API_KEY |
assumerKeyIdEncoded assumerSecretEncoded | Base64 encoded AWS Access Key ID & Secret | AWS Assume role for deploy credentials. Also, see Authenticating the agent on AWS EKS | Yes:ASSUMER_ACCESS_KEY_ID ASSUMER_SECRET_ACCESS_KEY |
limits.cpu limits.memory | Container resource limits. Read more about resource allocation. Recommended cpu: 1.5 and memory: 3Gi | Custom deployment pod size | No |
requests.cpu requests.memory | Container resource requests. Recommended cpu: 1.5 and memory: 3Gi | Custom deployment container resources | No |
tolerations | An array of toleration objects to apply to all pods. see docs | Custom tolerations | No |
deploymentTolerations | An array of toleration objects - targeting the deployment pods. This will override the default tolerations for deployment pods. | Custom tolerations | No |
affinity | Allows you to constrain which nodes env0 pods are eligible to be scheduled on. see docs | Custom node affinity | No |
deploymentAffinity | Affinity for deployment pods. This will override the default affinity for deployment pods. | Custom node affinity | No |
customerAwsAccessKeyIdEncoded customerAwsSecretAccessKeyEncoded awsSecretsRegion | Base64 encoded AWS Access Key ID & Secret. Requires the secretsmanager:GetSecretValue permission | Using AWS Secrets Manager to store secrets for the agent | Yes:CUSTOMER_AWS_ACCESS_KEY_ID CUSTOMER_AWS_SECRET_ACCESS_KEY |
customerGoogleProject customerGoogleCredentials | Base64 encoded GCP project name and JSON service-key contents. Requires the Secret Manager Secret Access role. | Using GCP Secret Manager to store secrets for the agent. These credentials are not used for the deployment itself. If deploymentJobServiceAccountName is set - Workload identity will override any supplied credentials. | Yes:CUSTOMER_GOOGLE_PROJECT CUSTOMER_GOOGLE_CREDENTIALS |
customerAzureClientId customerAzureClientSecret customerAzureTenantId | Base64 encoded Azure Credentials. | Using Azure Key Vault Secrets to store secrets for the agent | Yes:CUSTOMER_AZURE_CLIENT_ID CUSTOMER_AZURE_CLIENT_SECRET CUSTOMER_AZURE_TENANT_ID |
customerVaultTokenEncoded customerVaultUrl | (Deprecated) Base64 encoded HCP Vault token, and the Vault's URL (also base64 encoded) | Using HCP Vault to store secrets for the agent | Yes:CUSTOMER_VAULT_TOKEN CUSTOMER_VAULT_ADDRESS |
vault | Set HCP Vault authentication. First, set the cluster's URL: address: (equivalent to VAULT_ADDR) and, optionally, loginPath: "<LoginCustomPath>". Then choose one of the following login methods: - By Vault token: method: "token", encodedToken: "<vault-token>" - By username & password: method: "password", username: "<username>", encodedPassword: "<base64Encoded password>" - By certificate: method: "certificate", role: "<certificate name>", clientCertificateSecretName: "<secret-name>". The secret must be created in the same namespace with the keys client-cert and client-key, for example: kubectl create secret generic <secret-name> -n env0-agent --from-file=client-key=client.key --from-file=client-cert=client.crt. For the Certificate Authority (CA) you can choose between: caDisable: true; caCertificateSecretName: "<secret-name>" with the key ca-cert; or using a CA certificate with customCertificates - By role & service account token (JWT): method: "service_account", role: "<vault role name>". The JWT is expected to be supplied by Kubernetes at /var/run/secrets/kubernetes.io/serviceaccount/token. You can read more about service accounts here. Communication is based on the v1 HTTP API | Using HCP Vault to store secrets for the agent | No |
bitbucketServerCredentialsEncoded | Base64 encoded Bitbucket Server credentials in the format `username:token` (using a Personal Access Token) | On-premise Bitbucket Server installation | Yes:BITBUCKET_SERVER_CREDENTIALS |
gitlabEnterpriseCredentialsEncoded | Base64 encoded GitLab Enterprise credentials in the form of a Personal Access Token | On-premise GitLab Enterprise installation | Yes:GITLAB_ENTERPRISE_CREDENTIALS |
gitlabEnterpriseBaseUrlSuffix | In cases where your GitLab instance base URL is not at the root of the domain but on a separate path, e.g., https://gitlab.acme.com/prod, you should define that added suffix in this value: gitlabEnterpriseBaseUrlSuffix=prod | On-premise GitLab Enterprise installation | No |
githubEnterpriseAppId githubEnterpriseAppClientId githubEnterpriseAppInstallationId githubEnterpriseAppClientSecretEncoded githubEnterpriseAppPrivateKeyEncoded | GitHub Enterprise Integration (see step 3) | On-premise GitHub Enterprise installation | Yes:GITHUB_ENTERPRISE_APP_CLIENT_SECRET GITHUB_ENTERPRISE_APP_PRIVATE_KEY |
allowedVcsUrlRegex | When set, cloning a git repository will only be permitted if the git URL matches the configured regular expression. | VCS URL Whitelisting | No |
customCertificates | An array of strings, each the name of a Kubernetes secret containing custom CA certificates. Those certificates will be available during deployments. | Custom CA Certificates. More details here | No |
gitSslNoVerify | When set to true, cloning a git repo will not verify SSL/TLS certs | Ignoring SSL/TLS certs for on-premise git servers | No |
storageClassName | Ability to change the default PVC storage class name for the env0 self-hosted agent | The default is env0-state-sc. Please note: when changing this, you should also change your storage class name to match this configuration | No |
deploymentJobServiceAccountName | Customize the Kubernetes service account used by the deployment pod. Primarily for pod-level IAM permissions | The default is default | No |
jobHistoryLimitFailure jobHistoryLimitSuccess | The number of successful and failed deployment jobs to keep in the Kubernetes cluster history | The default is 10 for each value | No |
strictSecurityContext | When set to true, the pod operates under the node user instead of root | Increased agent pod security | No |
env0StateEncryptionKey | A base64 encoded string (password). When set, deployment state and working directory will be encrypted and persisted on Env0's end | Env0-Hosted Encrypted State | Yes:ENV0_STATE_ENCRYPTION_KEY |
environmentOutputEncryptionKey | A base64 encoded string (password). Used to enable the "Environment Outputs" feature | See notes in Environment Outputs | No |
logger | Logger config: level - debug/info/warn/error; format - json/cli | No | |
agentImagePullPolicy | Set imagePullPolicy attribute - Always/Never/IfNotPresent | No | |
agentProxy | Agent's Proxy pod config: install - true/false; replicas - how many replicas of the agent proxy to use (default is 1); maxConcurrentRequests - how many concurrent requests each pod should handle (default is 500); limits - k8s (cpu and memory) limits (defaults are 250m and 500Mi) | No | |
deploymentPodWarmPoolSize | A number of deployment pods that should be left "warm" (running & idle) and ready for new deployments | No | |
podAdditionalEnvVars | Additional environment variables to be passed to the agent pods, which will also be passed to the deployment process. These are set as a plain yaml object, i.e.: podAdditionalEnvVars: { MY_SECRET: "akeyless:/K8s/my_k8s_secret" } | No | |
podAdditionalLabels | Additional labels to be set on deployment pods. These are set as a plain yaml object, i.e.: podAdditionalLabels: { mykey: "myvalue" } | No | |
podAdditionalAnnotations | Additional annotations to be set on deployment pods. These are set as a plain yaml object, i.e.: podAdditionalAnnotations: { mykey: "myvalue" } | No | |
agentAdditionalLabels | Additional labels to be set on agent (trigger/proxy) pods. These are set as a plain yaml object, i.e.: agentAdditionalLabels: { mykey: "myvalue" } | No | |
agentAdditionalAnnotations | Additional annotations to be set on agent (trigger/proxy) pods. These are set as a plain yaml object, i.e.: agentAdditionalAnnotations: { mykey: "myvalue" } | No | |
customSecrets | customSecrets: [ { envVarName: MY_SECRET, secretName: my-secrets-1, key: db_password } ] | Mount custom secrets | No |
customSecretMounts | customSecretMounts: [ { volumeName: my-secrets-1, secretName: my-secrets-1, mountPath: /opt/secret1 } ] | Mount secret files at a given mountPath | No |
env0ConfigSecretName | A Kubernetes Secret name. Can be used to provide sensitive values listed in this table, instead of providing them in the Helm values files. | No | |
customRoleForOidcAwsSsm.duration customRoleForOidcAwsSsm.arn | Custom role for AWS SSM secret fetching. Note: only used when useOidcForAwsSsm=true | No | |
useOidcForAwsSsm | When set to true, the agent will be instructed to authenticate to AWS SSM using env0 OIDC. | No |
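Putting a few of these together, a hypothetical `values.customer.yml` enabling custom labels and mounted secrets might look like this sketch; all names and values are placeholders, and the key names are taken from the table above:

```yaml
# values.customer.yml: an illustrative combination of optional keys.
podAdditionalLabels:
  team: platform              # placeholder label
customSecrets:
  - envVarName: MY_SECRET     # exposed to deployment pods as an env var
    secretName: my-secrets-1
    key: db_password
customSecretMounts:
  - volumeName: my-secrets-1
    secretName: my-secrets-1
    mountPath: /opt/secret1
```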
Base64 Encoding Values
To ensure no additional newline characters are encoded, use the following command in your terminal:
echo -n $VALUE | base64
Storing Secret Values as Kubernetes Secret
Some of the configuration values listed above are sensitive.
As an alternative to setting them in your `values.customer.yml`, you can provide sensitive keys and values from your own Kubernetes Secret prior to the agent's installation/upgrade.
Setting `env0ConfigSecretName` will instruct the agent to extract the needed values from the given Kubernetes Secret, overriding values from the Helm value files.
The Kubernetes Secret must be accessible from the agent, and sensitive values must be Base64 encoded.
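For illustration, a hypothetical Secret named env0-sensitive-config could be created as below. The key name comes from the "Yes:" entries in the configuration table, and each value is Base64 encoded first, per the section above:

```bash
# Create the Secret in the agent's namespace; keys match the table's
# "Yes:" names, and each value is Base64 encoded before being stored.
kubectl create secret generic env0-sensitive-config \
  --namespace env0-agent \
  --from-literal=INFRACOST_API_KEY="$(echo -n "$INFRACOST_API_KEY" | base64)"
```

You would then set `env0ConfigSecretName: env0-sensitive-config` in your `values.customer.yml`.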
Further Configuration
The env0 agent externalizes a wide array of values that may be set to configure the agent.
We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-release configuration is required.
For such advanced cases, see this reference example of utilizing Kustomize alongside Helm Post Rendering to further customize our chart.
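For illustration only, wiring a post-renderer into the install could look like the sketch below; the wrapper script name is hypothetical, and it would invoke kustomize on the rendered manifests:

```bash
# Hypothetical: pipe the rendered chart through a Kustomize wrapper script.
helm install env0-agent env0/env0-agent \
  --namespace env0-agent \
  -f <your_agent_key>_values.yaml \
  --post-renderer ./kustomize-post-renderer.sh
```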
Job Limits
You may wish to limit the number of concurrent runs. To do so, add a Resource Quota to the agent namespace with a `count/jobs.batch` parameter, as sketched below.
See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.
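A minimal sketch of such a quota, assuming the agent namespace is env0-agent and an arbitrary limit of 5 concurrent jobs:

```yaml
# ResourceQuota capping the number of Jobs (and therefore concurrent
# deployments) in the agent namespace at 5, an illustrative value.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: env0-deployment-job-limit   # hypothetical name
  namespace: env0-agent
spec:
  hard:
    count/jobs.batch: "5"
```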
Installation
- Add our Helm repo:
helm repo add env0 https://env0.github.io/self-hosted
- Update the Helm repo:
helm repo update
- Download the configuration file `<your_agent_key>_values.yaml` from Organization Settings -> Agents tab
- Install the Helm chart:
helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent -f <your_agent_key>_values.yaml -f values.customer.yaml # values.customer.yaml should contain any optional configuration options as detailed above
Usage of the helm template command
Please note that if you choose to render the Helm template first and apply it using kubectl (a common practice with ArgoCD), the feature that checks the Kubernetes Secret defined by the env0ConfigSecretName Helm value to determine whether the PVC should be created will not function. This feature relies on an active connection to the cluster.
TF example
Example for helm install
Upgrade
helm upgrade env0-agent env0/env0-agent --namespace env0-agent
Upgrade Changes
Previously, you would have had to download the values.yaml file; this is no longer required for an upgrade. However, we do recommend keeping the version of the values.yaml file you used to install the agent, in case a rollback is required during the upgrade process.
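Should a rollback be needed, Helm's standard rollback command applies; a sketch, assuming the release name and namespace used in the installation steps above:

```bash
# Roll back the env0-agent release to its previous revision.
helm rollback env0-agent --namespace env0-agent
```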
Custom Agent Docker Image
If you have extended the agent's Docker image, you should update the agent version in your custom image as well.
Verify Installation/Upgrade
After installing a new version of the env0 agent Helm chart, it is highly recommended to verify the installation by running:
helm test env0-agent --namespace env0-agent --logs --timeout 1m
Outbound Domains
The agent needs outbound access to the following domains:
Wildcard | Used by |
---|---|
*.env0.com, *.amazonaws.com | env0 SaaS platform; the agent needs to communicate with the SaaS platform |
ghcr.io | GitHub Docker registry which holds the Docker container of the agent |
*.hashicorp.com | Downloading Terraform binaries |
registry.terraform.io | Downloading public modules from the Terraform Registry |
registry.opentofu.org | Downloading public modules from the OpenTofu Registry |
github.com, gitlab.com, bitbucket.org | Git VCS providers (ports 22, 9418, 80, 443) |
api.github.com | Terragrunt installation |
*.infracost.io | Cost estimation by Infracost |
openpolicyagent.org | Installing Open Policy Agent, used for Approval Policies |
- Make sure to allow access to your cloud providers, VCS domains, and any other tool that creates an outbound request.
Firewall Rules
Note that if your cluster is behind a managed firewall, you might need to whitelist the Cluster's API server's FQDN and corresponding Public IP.
For more advanced use cases for the self-hosted agent, see: