Install GridOS Connect
This is an installation guide for GridOS Connect deployed on Foundation environments. This documentation is intended for customer deployments of GridOS.
1. Deployment
1.1. Prerequisites
1.1.2. Foundation
- GridOS Connect has been deployed and validated on Foundation version 25r10 with non-pdi-k8s-version 3.0.0 / v1.34.1. To ensure compatibility and prevent deployment issues, we strongly recommend using Foundation version 25r10. Deploying GridOS Connect on older Foundation versions may result in unexpected errors. For more details on potential issues, refer to the Troubleshooting section.
Connect 1.15.0 introduced a hard requirement for Foundation version 25r04 or higher. If you are using an older version of Foundation, upgrade to at least 25r04 before proceeding with the Connect installation.
Connect 1.20.0 aligns with APISIX secret changes in Foundation 25r09. If you deploy Connect 1.20.0 or later with an older Foundation version, or vice versa, you must apply the overrides described in Connect Identity Reconciler Fails to Start Due to Renamed APISIX Secret.
- Your kubectl current-context must be set to your Foundation environment.
1.1.3. Artifactory Access
You will need access to GridOS Connect Artifactory repositories hosted at GE Digital Grid Artifactory.
1.2. Configure Foundation
At the time of writing, a base Foundation installation requires additional configuration to work with GridOS Connect.
- GridOS Connect relies on Zitadel, provided by Foundation.
- Download the Foundation YAML configuration.
- The Foundation configuration needs to go into your Foundation Helm values. Depending on how you deploy Foundation, this could be in a local-overrides.yaml file or something similar.
- The Foundation data loader should specify auth configuration. See Auth Configuration for more details.
If you have problems or uncertainties regarding how to apply the Foundation configuration, contact Foundation support.
1.2.1. Enable CPU Throttling for Kubernetes
When running GridOS Connect, Kubernetes CPU throttling must be enabled so that our services have predictable performance and behavior. This can be done as part of the Foundation installation, or applied to an existing cluster.
Apply to Existing Cluster
SSH to the virtual machine that hosts the Kubernetes node.
# Edit this file on all nodes
vi /etc/rancher/rke2/config.yaml
# Change cpu-cfs-quota to true, it was set to false!
kubelet-arg:
- "cpu-cfs-quota=true"
# Restart rke2-server
sudo systemctl restart rke2-server
# Restart rke2-worker node if applicable
sudo systemctl restart rke2-agent
Apply during Foundation Installation using PDI
See example manifest in the Foundation docs.
Set the property cpu_limits_enforcement to true under the k8s param_group with id k8s_all_group_vars; see the example snippet below.
...
- id: k8s_all_group_vars
params:
cpu_limits_enforcement: true
...
Apply during Foundation Installation without using PDI
See the Foundation installation documentation for more information. You need to change the value of cpu_limits_enforcement to true.
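The exact file and nesting for this flag depend on how you install Foundation without PDI, so follow the Foundation installation documentation; purely as an illustration, the property itself is a simple boolean:
# Hypothetical placement - the surrounding structure comes from your Foundation installer values, not from Connect.
cpu_limits_enforcement: true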
1.2.2. Configure Local DNS
You will need to configure your Domain Name Service by mapping domains to your externally accessible IP address, referred to as: worker-ip.
- Find your worker-ip by running the command kubectl get nodes -o wide and selecting the node without the control-plane role. In a POSIX[2] shell you can do:
kubectl get nodes -o wide | grep -v control-plane | tail -n 1 | awk '{ print $6 }'
- You will need to configure your DNS for the following domains:
{worker-ip} a6dashboard.YOUR_CONNECT_DOMAIN
{worker-ip} admin.YOUR_CONNECT_DOMAIN
{worker-ip} api.YOUR_CONNECT_DOMAIN
{worker-ip} console.YOUR_CONNECT_DOMAIN
{worker-ip} service.YOUR_CONNECT_DOMAIN
{worker-ip} zitadel.YOUR_CONNECT_DOMAIN
If you are using a Connect Agent or mTLS service account, you must add a DNS entry matching the below pattern:
{worker-ip} {org-id}.mtls.YOUR_CONNECT_DOMAIN
Example:
10.227.49.xxx gridos.mtls.env-connect-mvp-ingress.local
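If these records are not managed in a corporate DNS server, one way to map them on a single workstation is the local hosts file. A minimal sketch, assuming a Unix-like machine and placeholder values for the worker-ip and domain:
# Append the Connect host names to /etc/hosts (requires root).
# Replace 10.227.49.xxx and YOUR_CONNECT_DOMAIN with your actual worker-ip and Connect domain.
sudo tee -a /etc/hosts <<'EOF'
10.227.49.xxx a6dashboard.YOUR_CONNECT_DOMAIN
10.227.49.xxx admin.YOUR_CONNECT_DOMAIN
10.227.49.xxx api.YOUR_CONNECT_DOMAIN
10.227.49.xxx console.YOUR_CONNECT_DOMAIN
10.227.49.xxx service.YOUR_CONNECT_DOMAIN
10.227.49.xxx zitadel.YOUR_CONNECT_DOMAIN
EOF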
1.2.3. TLS
Ensure that your Foundation environment has correctly configured TLS. If a private Certificate Authority (CA) is used for this environment, make sure you have configured the chain of trust on your system.
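How the chain of trust is configured depends on your operating system. As a hedged example for a RHEL-family host (the file name my-private-ca.crt is a placeholder for your CA certificate):
# Copy the private CA certificate into the system trust anchors and rebuild the trust store.
sudo cp my-private-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust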
1.2.4. Auth Configuration
Authorization and authentication (auth) in Foundation is managed by an Identity Provider (IDP). This is described in the Foundation documentation.
In order for a user to be granted access to GridOS Connect, you will need the following:
- A user account in your IDP - this is the user you will use to log in to the Connect Console or deploy flows
- A Role Manager Permission created specifically for the GridOS Connect role you need
- A Role Manager Role
- A Role Manager Usergroup
- An AD/LDAP user group of which your IDP account is a member
- A Role Manager Mapping between the Role Manager Usergroup and the AD/LDAP user group
All of these, except the permission, must be created following the Foundation documentation referenced above.
The permission is created by following the next section.
GridOS Connect Role as a Role Manager Permission
Familiarize yourself with the available GridOS Connect roles:
| Role | Description |
|---|---|
| Admin | A user with full integration permissions (read and write). |
| Agent | A user needed for the GridOS Connect Agent to be able to communicate with the GridOS Connect main cluster. |
| Monitor | A "read-only" user that can view flow traces and flow details but is not allowed to make edits. |
For each Connect role you want to utilize, you will need to create a new permission in the Role Manager.
In order for this Role Manager Permission to take effect, you will need to ensure the following:
The Connect Role Manager Permission is defined with a string value with a specific format:
connect.<ORG_ID>.<GRIDOS_CONNECT_ROLE>
- The prefix connect. is required.
- <ORG_ID> is a string value that makes sense to the specific tenant organization. E.g., if you want to add permissions to GridOS Connect for Acme Corp, the org id can be acme.
- <GRIDOS_CONNECT_ROLE> is a lowercase string value matching one of the above-mentioned GridOS Connect roles.
The <ORG_ID> is referenced as owner-id in the Deployer section.
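For example, to make all three Connect roles available to an organization with the org id acme (the example org id used above), you would create three Role Manager Permissions with the following string values:
connect.acme.admin
connect.acme.agent
connect.acme.monitor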
1.2.5. Create a UserGroup for the Connect Identity Reconciler
The Connect Identity Reconciler application requires a UserGroup mapping for connect-identity-reconciler to exist, and it needs to be associated with the following permissions:
- roleManager.userGroups.read.readAll
- roleManager.permissions.read.readAll
- roleManager.roles.read.readAll
You achieve this by doing the following in the security admin console:
- Add a Role called Connect Identity Reconciler and map the permissions listed above to it.
- Add a Usergroup called Connect Identity Reconciler and set the Mapped GroupName field to connect-identity-reconciler.
- Add the Connect Identity Reconciler Role to the Mapped Roles of the new Usergroup.
1.3. Deploy Connect
1.3.1. Prepare for Deployment
The Connect installation includes the main chart, connect, and three dependency charts: connect-postgresql, connect-victoria-metrics, and connect-openbao.
These charts, along with the required value override and auxiliary deployment files, are packaged into a ZIP artifact called the Helm Deployment Template.
- Download the Helm Deployment Template to your local machine.
- Unpack the ZIP file.
- Review the Charts’ values and identify the required customizations, with particular attention to resource allocations. For detailed guidance on applying and managing these customizations, see recommendations.
- Set the kube-context and namespace for the Connect installation.
# Unix
KUBE_CONTEXT=<KUBE_CONTEXT>
CONNECT_NS=<CONNECT_NAMESPACE>

# Windows - PowerShell
$KUBE_CONTEXT=<KUBE_CONTEXT>
$CONNECT_NS=<CONNECT_NAMESPACE>
- Continue with either Deploy with Helm or Deploy with Helmfile.
Ensure that you set (override) the value of the property global.clusterExternalUrl to the externally available service domain in the form of an HTTPS URL (e.g., https://YOUR_CONNECT_DOMAIN).
1.3.2. Deploy with Helm
To ensure correct installation, a file is provided in the Helm Deployment Template that specifies the required arguments for the helm upgrade command.
Install the dependency charts (connect-postgresql, connect-victoria-metrics, and connect-openbao) first, followed by the connect chart.
When installing a chart, the release name must match the chart name.
The following steps should be performed when installing each chart:
- Navigate to the unpacked ZIP folder.
- Set the release name.
# Unix
RELEASE_NAME="connect"

# Windows - PowerShell
$RELEASE_NAME="connect"
- Set the chart archive reference with the value provided in connect-charts.csv.
# Unix
CHART_ARCHIVE_PATH=$(grep "$RELEASE_NAME," connect-charts.csv | cut -d, -f3)

# Windows - PowerShell
$CHART_ARCHIVE_PATH=$(Get-Content connect-charts.csv | Select-String -Pattern "$RELEASE_NAME," | ForEach-Object { ($_ -split ',')[2].Trim() })
- Install the Helm chart.
helm upgrade -i \
  --kube-context $KUBE_CONTEXT \
  -n $CONNECT_NS \
  $RELEASE_NAME \
  $CHART_ARCHIVE_PATH \
  --values "$RELEASE_NAME/values.yaml" \
  --set "global.clusterExternalUrl=https://YOUR_CONNECT_DOMAIN" \
  --wait-for-jobs \
  --wait \
  --timeout=15m
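After the last chart has been installed, an optional sanity check (not part of the packaged instructions) is to confirm the release status and pod health:
# List the Connect releases and verify they are in the deployed state.
helm list --kube-context $KUBE_CONTEXT -n $CONNECT_NS

# Verify that all Connect pods reach the Running or Completed state.
kubectl --context $KUBE_CONTEXT get pods -n $CONNECT_NS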
1.3.3. Deploy with Helmfile
- Helmfile diffs should be run in dry-run mode, which can be set with the environment variable HELM_DIFF_USE_UPGRADE_DRY_RUN=true (see the example after these steps).
- Run helmfile apply:
helmfile --kube-context $KUBE_CONTEXT -n $CONNECT_NS apply --set "global.clusterExternalUrl=https://YOUR_CONNECT_DOMAIN"
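As an optional preview (this assumes the helm-diff plugin that Helmfile uses for diffs is installed), you can render the pending changes before applying them:
# Preview the changes Helmfile would apply, using upgrade dry-run mode as noted above.
HELM_DIFF_USE_UPGRADE_DRY_RUN=true helmfile --kube-context $KUBE_CONTEXT -n $CONNECT_NS diff --set "global.clusterExternalUrl=https://YOUR_CONNECT_DOMAIN"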
1.4. Post-Deployment: Vault to OpenBao Migration
In this release, Connect's secret management has moved from Vault to OpenBao. If Vault was installed as part of a previous Connect deployment, uninstall it after confirming that the OpenBao deployment is functioning correctly. For more instructions, see Uninstall Connect Vault.
1.5. Restart Fluentbit Pods
- Restart fluent-bit pods:
kubectl delete po -n foundation-cluster-monitoring -l app.kubernetes.io/name=fluent-bit
- Restart flowserver pods:
kubectl delete po -n foundation-env-default -l app.kubernetes.io/name=flowserver
For more information, see Flow-traces/Logs workaround.
1.6. Create a Zitadel Service User
If you intend to authenticate non-human (machine) users using OAuth2 with Client Credentials Grant, you will need to create one or more Zitadel Service Users by following the steps below.
An example use case for a non-human user is a deployer account used to deploy flows from automation, such as a CI/CD pipeline.
- Save the following YAML to a file, e.g. connect-deployer.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: connect-deployer
  namespace: foundation-cluster-zerotrust
  labels:
    zitadel.gevernova.com/track: "true"
data:
  name: connect-deployer
  org: foundation # Zitadel's Organization name, in which this machine user will be created
  secretName: connect-deployer # K8s Secret name in which the clientId and autogenerated clientSecret will be stored
  # List of mappedGroupName values from the roles-manager DB, configured through the roles-config file (Ref: https://github.build.ge.com/grid-foundation/foundation-reference-app/blob/master/release/foundation-data/config/roles-config-v2.yaml)
  groups: |-
    - <ROLE_MANAGER_USERGROUP>
- Note that the groups entries in the ConfigMap YAML refer to the Role Manager Usergroups described in the Role Manager section. This grants the service user all the permissions associated with that Role Manager Usergroup.
- Apply the ConfigMap.
kubectl apply -n foundation-cluster-zerotrust -f connect-deployer.yaml
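Once Foundation's Zitadel integration has processed the ConfigMap, the Kubernetes Secret named by secretName should appear. Checking for it is an optional verification step (timing depends on the Foundation reconciliation):
# The secret is created by Foundation after the machine user has been provisioned in Zitadel.
kubectl get secret -n foundation-cluster-zerotrust connect-deployer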
To use the deployer configuration secret, see OAuth deployer configuration for Connect on Foundation.
For more information, see the Foundation documentation.
Flow developers who want to deploy flows using a Zitadel service user will need three pieces of information, which can be derived with the following kubectl commands:
kubectl get secrets -n foundation-cluster-zerotrust connect-deployer -o template='{{.data.clientId | base64decode}}'
kubectl get secrets -n foundation-cluster-zerotrust connect-deployer -o template='{{.data.clientSecret | base64decode}}'
kubectl get configmap -n foundation-cluster-zerotrust zitadel-projects -o jsonpath='{.data.foundation\.foundation\.id}'
These secrets should be distributed to flow developers in a secure manner!
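How these values are consumed is described in the deployer configuration documentation referenced above; purely as an illustrative sketch of the OAuth2 Client Credentials Grant against Zitadel (the scope shown is Zitadel's generic project-audience scope and may differ from what your tooling actually requests):
# CLIENT_ID, CLIENT_SECRET and PROJECT_ID are the three values obtained with the kubectl commands above.
curl -s https://zitadel.YOUR_CONNECT_DOMAIN/oauth/v2/token \
  -d grant_type=client_credentials \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET" \
  -d scope="openid urn:zitadel:iam:org:project:id:$PROJECT_ID:aud"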
1.7. Conclusion
You have now reached the end of the installation process for Connect on Foundation. If you want to deploy a new integration flow on Connect, proceed to the next section.
2. Resource Allocations
The Helm Deployment Template includes a values.yaml file for each chart. Review and override the default resource allocations as needed for your use case.
The service resource values are set conservatively. Since the Connect team cannot anticipate customer-specific requirements, these resource allocations must be reviewed carefully. Misconfiguration can lead to broken clusters, poor performance, or excessive compute costs.
3. Chart Value Override Recommendations
After you extract the Helm Deployment Template, you can view the default chart values for each chart by running the following Helm command:
helm show values $CHART_ARCHIVE_PATH
See Deploy with Helm for details on how to resolve CHART_ARCHIVE_PATH.
It is recommended that you maintain custom value overrides in separate files stored in version control.
Defining value override files separately makes it easier to apply them while following the standard install or upgrade instructions.
For example, to override the flow server resource allocations for the connect chart:
- Create a separate values file values.resource.yaml.
# contents of values.resource.yaml
flowserver:
  replicas: 3
  resources:
    requests:
      cpu: 1.0
      memory: 2Gi
    limits:
      memory: 2Gi
- Install or upgrade the existing installation.
  - Using Helm:
helm upgrade -i \
  --kube-context $KUBE_CONTEXT \
  -n $CONNECT_NS \
  $RELEASE_NAME \
  $CHART_ARCHIVE_PATH \
  --values "$RELEASE_NAME/values.yaml" \
  --values values.resource.yaml \
  --wait-for-jobs \
  --wait \
  --timeout=15m
The additional --values values.resource.yaml argument applies the custom overrides. When applying custom value overrides using a values file (-f/--values) or a single property override (--set), the last (rightmost) argument specified takes precedence.
  - Using Helmfile:
Add the custom values file to the release entry in helmfile.yaml.
...
- name: connect
  chart: ./Charts/connect-xxx.tgz
  version: x.x.x
  values:
    - ./connect/values.yaml
    - values.resource.yaml # additional overrides
...
When specifying value override files in the releases[].values element of a Helmfile, the files are applied in order. The last file specified takes precedence. The -f/--values and --set flags can also be passed to the helmfile apply command. They are applied to each release item, which can be useful for setting global values. For non-global value overrides, it is recommended to define them in the helmfile.yaml file.
4. Maintain GridOS Connect
4.1. Upgrade GridOS Connect
Each release is tagged with the current released version of this documentation: {version}.
To upgrade GridOS Connect, you will need to do the following:
- Fetch the latest Helm Deployment Template bundle.
- Review the default chart values in the values.yaml files provided in the Helm Deployment Template.
- Review any custom value overrides.
- Run helmfile apply.
4.2. Resource Scaling
Assuming you still have the Helm Deployment Template bundle extracted on your local machine:
- Review the resource allocations in the values.yaml files provided for each chart.
- Define any overrides as described in the Chart Value Override Recommendations section.
- Run helmfile apply.
5. Deploy GridOS Connect Flows
Use the SDK Deployer guide to configure your deployment settings, and remember that your management-api-root will be: https://api.YOUR_CONNECT_DOMAIN.
6. Service Management
6.1. IDP Configured Credentials
The following services will require IDP configured credentials:
- Security Administration Tool: https://service.YOUR_CONNECT_DOMAIN/secadmin/
- Grafana: https://admin.YOUR_CONNECT_DOMAIN/monitoring/grafana/
- Kibana: https://admin.YOUR_CONNECT_DOMAIN/monitoring/kibana/
- GridOS Connect Console: https://console.YOUR_CONNECT_DOMAIN/
6.2. Auto Generated Credentials
The following services have auto-generated credentials:
- Zitadel Console
  - Username: console-iamadmin@zitadel-main.local
  - Password: kubectl get secrets zitadel-console-admin -n foundation-cluster-zerotrust --template='{{.data.password | base64decode}}'
- APISIX Dashboard
  - Username: user
  - Password: kubectl get secrets apisix-dashboard-cred -n foundation-cluster-zerotrust --template='{{.data.password | base64decode}}'
- MinIO Console
  - Username: kubectl get secrets minio1-secret -n foundation-cluster-zerotrust --template='{{.data.accesskey | base64decode}}'
  - Password: kubectl get secrets minio1-secret -n foundation-cluster-zerotrust --template='{{.data.secretkey | base64decode}}'
6.3. Connect Service Log Retention
Elasticsearch is used to store Connect service logs. These logs provide flow traces, an archive of integration executions. Log retention requires resources, and for some use cases it may be beneficial to control when logs are purged. This can be configured using the jobs.elasticsearch.index.delete.minAge property in the connect/values.yaml file located in the Helm Deployment Template.
For example, to decrease the retention time to 90 days, provide the following values override snippet:
jobs:
# ...
elasticsearch:
index:
delete:
minAge: 90d
# ...
Place the snippet directly in connect/values.yaml (from the Helm Deployment Template) or in a separate file, and apply it as described in Chart Value Override Recommendations.
Run helmfile apply.
Make sure you have read the Elasticsearch documentation on lifecycle policy updates.
7. Troubleshooting
7.1. Access
- If you have auth problems when accessing services, especially if your session has timed out: log in to Zitadel using the IDP-configured credentials, then force-refresh the service web page.
- If you have trouble accessing any services, ensure that all cookies and local storage have been deleted. Alternatively, use your web browser's incognito mode.
7.2. Load Balancers and Reverse Proxies
No special or extra configuration is required for load balancers used with Connect. As long as the load balancer points to port 443 on the VMs, preserves the Host header in HTTP requests (for application load balancers), and all required DNS names (zitadel, service, console, api) point to the load balancer, it will work.
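As an optional, hedged check (LB_IP below is a placeholder for your load balancer address), you can confirm that requests sent through the load balancer reach the cluster with the expected Host header:
# Force resolution of the console host name to the load balancer and inspect the response.
curl -vk --resolve console.YOUR_CONNECT_DOMAIN:443:LB_IP https://console.YOUR_CONNECT_DOMAIN/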
7.3. Flow-traces/Logs
This issue will produce the following error message in the GridOS Connect Console Overview:
co.elastic.clients.elasticsearch._types.ElasticsearchException: [es/search] failed: [index_not_found_exception] no such index [flowserver-logs]
This is a known issue with the Fluent Bit Operator - after adding new custom resources (ClusterInput, ClusterFilter and ClusterOutput), Fluent Bit is not able to fetch flowserver logs.
If flowserver is running as expected and flows have been deployed, but flow traces are not visible in the "Flow traces" tab of the GridOS Connect Console, it is safe to assume that Fluent Bit is not able to fetch logs for flowserver.
Workaround:
- Restart fluent-bit pods:
kubectl delete po -n foundation-cluster-monitoring -l app.kubernetes.io/name=fluent-bit
- Restart flowserver pods:
kubectl delete po -n foundation-env-default -l app.kubernetes.io/name=flowserver
7.4. Prevent Log Duplication of Connect Flow Server
The default Foundation Fluent-bit ClusterOutput configuration leads to log duplication across Elasticsearch indices. Each log event is stored multiple times, which increases storage usage and might impact query performance.
Specifically:
- Each log event is indexed as a document in Elasticsearch.
- Two copies are stored in the log index.
- One copy is stored in the flowserver-log index.
As a result, each log event is stored three times in total.
7.4.1. Solution
Since Connect flowserver logs only need to be retained in the flowserver-log index, they can be removed from the log index to reduce redundancy and optimize storage.
Update the fluent-bit configuration. The steps below assume the clusteroutput.fluentbit.fluent.io/elasticsearch resource has not been modified from the default Foundation configuration.
- Fetch the elasticsearch ClusterOutput configuration:
kubectl get clusteroutputs elasticsearch -o yaml
- Modify the elasticsearch ClusterOutput configuration:
kubectl get clusteroutput elasticsearch -o yaml | yq e 'del(.metadata.annotations."kubectl.kubernetes.io/last-applied-configuration") | del(.metadata.generation) | del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp)' - | yq e '.spec |= (select(has("match")) | .matchRegex = "^(?!flowserver.*).*" | del(.match))'
- Compare the outputs. Verify that the output from steps 1 and 2 matches the examples below; the match field should be replaced with matchRegex.

Original:
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  annotations:
    meta.helm.sh/release-name: monitoring-apps
    meta.helm.sh/release-namespace: foundation-cluster-monitoring
  labels:
    app.kubernetes.io/managed-by: Helm
    fluentbit.fluent.io/enabled: "true"
  name: elasticsearch
spec:
  es:
    generateID: true
    host: monitoring-apps-es-client
    index: logs
    logstashFormat: false
    port: 9200
    replaceDots: true
    suppressTypeName: "On"
    timeKey: '@timestamp'
    traceError: false
    traceOutput: false
    type: _doc
  match: '*'
  retry_limit: "False"

Modified - identical to the original, except that match: '*' is replaced with:
  matchRegex: ^(?!flowserver.*).*

If your output is consistent with the examples above, use the following commands to exclude flowserver logs in the "elasticsearch" ClusterOutput.
- Create a YAML file for the modified ClusterOutput resource:
kubectl get clusteroutput elasticsearch -o yaml | yq e 'del(.metadata.annotations."kubectl.kubernetes.io/last-applied-configuration") | del(.metadata.generation) | del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp)' - | yq e '.spec |= (select(has("match")) | .matchRegex = "^(?!flowserver.*).*" | del(.match))' > modified-clusteroutput-es.yaml
- Delete the old ClusterOutput resource:
kubectl delete clusteroutput elasticsearch
- Apply the modified ClusterOutput resource:
kubectl apply -f modified-clusteroutput-es.yaml
- Restart fluent-bit pods:
kubectl delete po -n foundation-cluster-monitoring -l app.kubernetes.io/name=fluent-bit
7.5. Connect-PostgreSQL Fails to Install
If you are installing Connect on a Foundation version lower than 25R01, the connect-postgresql Helm chart will fail with the following error:
Error: UPGRADE FAILED: cannot patch "connect-postgresql" with kind postgresql: postgresql.acid.zalan.do "connect-postgresql" is invalid: spec.postgresql.version: Unsupported value: "17": supported values: "10", "11", "12", "13", "14", "15"
7.5.1. Resolution Options
Upgrade Foundation to 25R01+
Upgrade your environment to Foundation version 25R01 or higher, and then run helmfile apply for Connect to redeploy connect-postgresql with the latest supported version.
Downgrade connect-postgresql chart
Edit helmfile.yaml in the root directory of the Helm Deployment Template.
Ensure that the connect-postgresql chart version is set to 0.5.0, and run helmfile apply for Connect to redeploy connect-postgresql.
...
- name: connect-postgresql
<<: *defaults
chart: connect-helm/connect-postgresql
version: 0.5.0
values:
- ./connect-postgresql/values.yaml
# - ./offline-image-overrides.yaml
...
7.6. connect-identityreconciler Pod Fails to Start Due to Renamed APISIX Secret
Connect release 1.20.0 includes updates to support changes in APISIX secrets introduced in Foundation version 25r09.
Symptom:
The connect-identityreconciler pod may fail to start and log one of the following errors:
- When deploying Connect from scratch:
  `MountVolume.SetUp failed for volume "secrets": secret "connect-identityreconciler-apisix-api-token" not found`
- When upgrading Connect:
  `Getting upstream 'connect-flowserver-service' failed with status: 401 Unauthorized`
Root Cause:
In Foundation version 25r09, an APISIX secret was renamed. Connect relies on this secret, so if Connect and Foundation are on mismatched versions, the secret name does not align. This can cause the pod to fail at startup or result in service requests being rejected.
7.6.1. Compatibility Overview
The following matrix shows the required actions based on the versions of Foundation and Connect:
| Foundation version | Connect version | Action |
|---|---|---|
| 25r09 or later | 1.20.0 or later | No action required; the setup is fully compatible. |
| Older than 25r09 | 1.20.0 or later | Apply the override described in Resolution. |
| 25r09 or later | Older than 1.20.0 | Apply the override described in Resolution. |
7.6.2. Resolution
In connect/values.yaml, add the following override under the identityreconciler block:
| Foundation version | Override |
|---|---|
| Older than 25r09 | |
| 25r09 or later | |
8. Uninstall GridOS Connect with Helm
This section describes how to uninstall GridOS Connect and its dependent components using Helm. You may need to uninstall the Connect components when performing a full redeployment to ensure a clean state.
Uninstalling Connect will remove all associated resources (databases, metrics, and OpenBao data if selected). Make sure you have valid backups before proceeding. This action is irreversible.
8.1. Uninstall Connect
Run the following command to uninstall the main Connect service:
helm uninstall -n foundation-env-default connect
8.2. Uninstall Connect PostgreSQL
If your deployment uses the bundled PostgreSQL instance, uninstall it as follows:
helm uninstall -n foundation-env-default connect-postgresql
This command deletes all Connect databases. Make sure you have created backups if you want to restore your data later.
8.3. Uninstall Connect Victoria Metrics
If your deployment uses VictoriaMetrics, uninstall it with:
helm uninstall -n foundation-env-default connect-victoria-metrics
kubectl delete -n foundation-env-default secret/connect-vm-auth-config pvc/server-volume-victoria-metrics-server-0
Deleting the persistent volume claim will permanently remove VictoriaMetrics data. Make sure you have created backups if you want to restore your data later.
8.4. Uninstall Connect Vault
If your deployment uses Vault for secret management, uninstall it with:
helm uninstall -n foundation-env-default vault
kubectl delete -n foundation-env-default secret/vault-unseal-login pvc/data-vault-{0..2}
Deleting the persistent volume claim will permanently remove Vault data. Make sure you have created backups if you want to restore your data later.
8.5. Uninstall Connect OpenBao
If your deployment uses OpenBao for secret management, uninstall it with:
helm uninstall -n foundation-env-default connect-openbao
kubectl delete -n foundation-env-default secret/connect-openbao-unseal-login pvc/data-connect-openbao-{0..2} pvc/audit-connect-openbao-{0..2}
Deleting the persistent volume claim will permanently remove OpenBao data. Make sure you have created backups if you want to restore your data later.
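As an optional, hedged sanity check after removing the charts, you can confirm that no Connect-related resources are left behind in the namespace:
# List anything remaining from the Connect releases; ideally the output is empty.
kubectl get all,secret,pvc -n foundation-env-default | grep -Ei 'connect|vault'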