Self-Hosted Kubernetes Agent
Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.
- Execution is contained on your own servers/infrastructure.
- The agent requires an internet connection but no inbound network access.
- Secrets can be stored on your own infrastructure.
Feature Availability
Self-hosted agents are only available to Business and Enterprise level customers. Click here for more details.
Requirements
- A Kubernetes cluster running version >= 1.16
- The agent is installed using a Helm chart.
Installation Tip
Use our k8s-modules repo, which contains Terraform code for easier cluster installation. You can use the main provider folder for a "full-blown" installation, or a specific module to satisfy individual requirements.
Autoscaler
- The env0 agent will scale pods up and down according to deployment usage. Note that the cluster itself must be able to scale its nodes up and down (e.g., via a cluster autoscaler).
- A pod running a single deployment requires cpu: 460m and memory: 1500Mi, so the cluster nodes must be able to satisfy this resource request. Limits can be adjusted by providing custom configuration during chart installation (see the sketch below).
- Minimum node requirements: an instance with at least 2 CPU and 8GiB memory
For the EKS cluster, you can use this TF example
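For illustration, here is a minimal sketch of a values.customer.yml override that adjusts the deployment pod's resources. The values shown are placeholders; the relevant keys are described under Custom/Optional Configuration below.

```yaml
# Hypothetical values.customer.yml snippet; all values are placeholders
requests:
  cpu: 460m       # the default request for a single-deployment pod
  memory: 1500Mi
limits:
  cpu: "1"        # raised limits for heavier deployments
  memory: 2Gi
```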
Persistent Volume / Storage Class
- env0 will store local Terraform state and deployment artifacts on a persistent volume in the cluster.
- The volume must support Dynamic Provisioning and the ReadWriteMany access mode.
- The requested storage space is 300Gi.
- The cluster must include a StorageClass named env0-state-sc.
- The StorageClass should be set up with reclaimPolicy: Retain, so that data won't be lost if the agent needs to be replaced or uninstalled.
We recommend the current implementations for the major cloud providers:
Cloud | Solution |
---|---|
AWS | EFS CSI. For the EKS cluster, you can use this TF example: EFS CSI-Driver/StorageClass |
GCP | Filestore |
Azure | Azure Files |
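As an illustration, here is a minimal sketch of an env0-state-sc StorageClass backed by the AWS EFS CSI driver. The fileSystemId is a placeholder; substitute your own EFS file system ID.

```yaml
# Hypothetical StorageClass for the env0 agent; fileSystemId is a placeholder
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: env0-state-sc          # the name the agent expects by default
provisioner: efs.csi.aws.com   # AWS EFS CSI driver (supports ReadWriteMany)
reclaimPolicy: Retain          # keep data if the agent is replaced or uninstalled
parameters:
  provisioningMode: efs-ap     # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
```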
Sensitive Secrets
- Because self-hosted agents let you store secrets on your own infrastructure, secrets stored in the env0 platform cannot be used with self-hosted agents.
- If you are migrating from the SaaS to a self-hosted agent, deployments attempting to use these secrets will fail.
- This includes sensitive configuration variables, SSH keys, and Cloud Deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed in the table below.
- In order to use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed below.
- Storing secrets is supported using these secret stores:
Secret store | Secret reference format | Region | Notes |
---|---|---|---|
AWS Secrets Manager | ${ssm:<secret-name>} | Set by the awsSecretsRegion Helm value; defaults to us-east-1 | |
GCP Secrets Manager | ${gcp:<secret-id>} | Your GCP project's default region | Access to the secret must be possible using the customerGoogleCredentials configuration or GKE workload identity. The customerGoogleProject configuration must be supplied and is used to access secrets in that project only. The secretmanager.versions.access permission is required. |
Azure Key Vault | ${azure:<secret-name>@<vault-name>} | Your Azure subscription's default region | |
HashiCorp Vault | ${vault:<path>.<key>@<namespace>} (the @<namespace> suffix is optional) | AWS regions in the U.S., Europe, Singapore, and Australia | |
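For example, a sensitive variable's value might reference a secret as follows. All secret names, vault names, paths, and namespaces below are placeholders.

```shell
DB_PASSWORD=${ssm:prod/db-password}                 # AWS Secrets Manager
API_KEY=${gcp:api-key}                              # GCP Secrets Manager
CLIENT_SECRET=${azure:client-secret@my-key-vault}   # Azure Key Vault
APP_TOKEN=${vault:secret/app.token@my-namespace}    # HashiCorp Vault (namespace optional)
```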
Custom/Optional Configuration
env0 provides a Helm values.yml file containing the env0-supplied configuration.
To enable specific features, provide a values.customer.yml file with any of the following optional values (a combined example appears after the table):
Keys | Description | Required for feature |
---|---|---|
dockerImage agentImagePullSecret | Custom Docker image URI and Base64 encoded .dockerconfigjson contents | Custom Docker image. See Using a custom image in an agent |
infracostApiKeyEncoded | Base64 encoded Infracost API key | Cost Estimation |
assumerKeyIdEncoded assumerSecretEncoded | Base64 encoded AWS Access Key ID & Secret | AWS Assume role for deploy credentials. Also, see Authenticating the agent on AWS EKS |
limits.cpu limits.memory | Container resource limits. Read more about resource allocation. | Custom deployment pod size |
requests.cpu requests.memory | Container resource requests | Custom deployment container resources |
tolerations | An array of toleration objects to apply to the deployment containers. | Custom tolerations |
affinity | Allows you to constrain which nodes env0 pods are eligible to be scheduled on (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) | Custom node affinity |
deploymentAffinity | Affinity for deployment pods. This will override the default affinity for deployment pods. | Custom node affinity |
customerAwsAccessKeyIdEncoded customerAwsSecretAccessKeyEncoded | Base64 encoded AWS Access Key ID & Secret. Requires the secretsmanager:GetSecretValue permission. | Using AWS Secrets Manager to store secrets for the agent |
customerGoogleProject customerGoogleCredentials | Base64 encoded GCP project name and JSON service-key contents. Requires the Secret Manager Secret Accessor role. | Using GCP Secret Manager to store secrets for the agent. These credentials are not used for the deployment itself. If deploymentJobServiceAccountName is set, workload identity will override any supplied credentials. |
customerAzureClientId customerAzureClientSecret customerAzureTenantId | Base64 encoded Azure Credentials. | Using Azure Key Vault Secrets to store secrets for the agent |
customerVaultTokenEncoded customerVaultUrl | Base64 encoded HCP Vault token, and the cluster's URL (also base64 encoded) | Using HCP Vault to store secrets for the agent |
bitbucketServerCredentialsEncoded | Base64 encoded Bitbucket Server credentials in the format username:token (using a Personal Access Token). | On-premise Bitbucket Server installation |
gitlabEnterpriseCredentialsEncoded | Base64 encoded GitLab Enterprise credentials in the form of a Personal Access Token. | On-premise GitLab Enterprise installation |
gitlabEnterpriseBaseUrlSuffix | If your GitLab instance is not served at the root of its URL but under an additional path, e.g. https://gitlab.acme.com/prod, set this value to that suffix: gitlabEnterpriseBaseUrlSuffix=prod | On-premise GitLab Enterprise installation |
githubEnterpriseAppId githubEnterpriseAppClientId githubEnterpriseAppInstallationId githubEnterpriseAppClientSecretEncoded githubEnterpriseAppPrivateKeyEncoded | Github Enterprise Integration (see step 3) | On-premise GitHub Enterprise installation |
allowedVcsUrlRegex | When set, cloning a git repository will only be permitted if the git url matches the regular expression set. | VCS URL Whitelisting |
customCertificates | An array of strings, each the name of a Kubernetes secret that contains custom CA certificates. Those certificates will be available during deployments. | Custom CA Certificates. More details here. |
gitSslNoVerify | When set to true, cloning a git repo will not verify SSL/TLS certificates. | Ignoring SSL/TLS certificates for on-premise git servers |
storageClassName | Changes the default PVC storage class name for the env0 self-hosted agent. | The default is env0-state-sc. Note that if you change this, your StorageClass name must be changed to match. |
deploymentJobServiceAccountName | Customizes the Kubernetes service account used by the deployment pod. Primarily for pod-level IAM permissions. | The default is default. |
jobHistoryLimitFailure jobHistoryLimitSuccess | How many successful and failed deployment jobs are kept in the Kubernetes cluster history. | The default is 10 for each. |
strictSecurityContext | When set to true, the pod runs as the node user instead of root. | Increased agent pod security |
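Putting a few of these together, a values.customer.yml might look like the following sketch. Every value is a placeholder; encode real values as described under Base64 Encoding Values below.

```yaml
# Hypothetical values.customer.yml; every value below is a placeholder
infracostApiKeyEncoded: aWNvLWFwaS1rZXk=                      # base64-encoded Infracost API key (Cost Estimation)
customerAwsAccessKeyIdEncoded: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=   # base64-encoded AWS access key ID
customerAwsSecretAccessKeyEncoded: c2VjcmV0                   # base64-encoded AWS secret access key
storageClassName: env0-state-sc     # must match the name of your StorageClass
tolerations:
  - key: dedicated
    operator: Equal
    value: env0
    effect: NoSchedule
strictSecurityContext: true         # run the agent pod as a non-root user
```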
Base64 Encoding Values
To ensure no additional newline characters are encoded, use the following command in your terminal:
echo -n "$VALUE" | base64
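For example, encoding AWS's documented example access key ID (a placeholder, not a real credential):

```shell
echo -n "AKIAIOSFODNN7EXAMPLE" | base64
# QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
```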
Further Configuration
The env0 agent externalizes a wide array of values that may be set to configure the agent.
We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-release configuration is required.
For such advanced cases, see this reference example of utilizing Kustomize alongside Helm Post Rendering to further customize our chart.
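As a sketch of the general approach: Helm's --post-renderer flag pipes the rendered manifests through a command you provide before they are applied. The kustomize-wrapper.sh below is a hypothetical script, not part of the env0 chart.

```shell
# Hypothetical: install the chart, post-processing the manifests with Kustomize
helm install env0-agent env0/env0-agent --namespace env0-agent \
  -f <your_agent_key>_values.yaml \
  --post-renderer ./kustomize-wrapper.sh   # your script that pipes stdin through `kustomize build`
```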
Installation
- Add our Helm repo:
helm repo add env0 https://env0.github.io/self-hosted
- Update the Helm repo:
helm repo update
- Download the configuration file <your_agent_key>_values.yaml from the Organization Settings -> Agents tab.
- Install the Helm chart:
helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent -f <your_agent_key>_values.yaml -f values.customer.yaml
# values.customer.yaml should contain any optional configuration options as detailed above
TF example
Example for helm install
Upgrade
helm upgrade env0-agent env0/env0-agent --namespace env0-agent
Upgrade Changes
Previously you were asked to download the values.yaml file; it is no longer required for an upgrade.
Custom Agent Docker Image
If you extended the agent's Docker image, you should update the agent version in your custom image as well.
Verify Installation/Upgrade
After installing a new version of the env0 agent Helm chart, it is highly recommended to verify the installation by running:
helm test env0-agent --namespace env0-agent --logs --timeout 1m
Outbound Domains
The agent needs outbound access to the following domains:
Wildcard | Used by |
---|---|
*.env0.com, *.amazonaws.com | env0 SaaS platform; the agent needs to communicate with the SaaS platform. |
ghcr.io | GitHub Docker registry which holds the Docker container of the agent. |
*.hashicorp.com | Downloading Terraform binaries |
registry.terraform.io | Downloading public modules from the Terraform Registry |
github.com, gitlab.com, bitbucket.org | Git VCS providers (ports 22, 9418, 80, 443) |
api.github.com | Terragrunt installation |
*.infracost.io | Cost estimation by Infracost |
- Make sure to also allow access to your cloud providers, VCS domains, and any other tools the agent makes outbound requests to.
Firewall Rules
Note that if your cluster sits behind a managed firewall, you might need to whitelist the cluster API server's FQDN and its corresponding public IP.
For more advanced use cases for the self-hosted agent, see: