Anaconda recommends that you connect to your external Postgres instance immediately after installation completes. If you do not, you will need to create a dump file from the internal Workbench Postgres instance, then import it into your own Postgres instance, ensuring that it mirrors the internal setup.
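If you do need to migrate after the fact, the process generally looks like the following sketch. The pod name, namespace, database name, and Postgres user shown here are assumptions; verify the actual names in your deployment before running anything.

# Dump the internal Workbench Postgres database to a local file
# (pod, namespace, database, and user names below are assumptions -- confirm them with `kubectl get pods` and your install settings)
kubectl exec -n anaconda-enterprise anaconda-enterprise-postgres-0 -- \
  pg_dump -U postgres -d anaconda_enterprise > workbench-backup.sql

# Restore the dump into your external Postgres instance
# Replace <POSTGRES> and <POSTGRES-ACCOUNT> with your instance's hostname and user account;
# create the target database on the external instance first if it does not already exist
psql -h <POSTGRES> -U <POSTGRES-ACCOUNT> -d anaconda_enterprise < workbench-backup.sql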
You can connect Workbench to your organization's pre-existing Postgres instance for both BYOK8s and Gravity installations. Connecting to an external Postgres instance lets you take advantage of High Availability/Disaster Recovery (HA/DR) options your organization may already have in place.

Workbench also supports connections to external Postgres instances hosted within the same overarching cluster environment. For example, your organization can run a multi-namespace/region Postgres deployment within your Kubernetes cluster and connect it to Workbench via the Service name. Combining this with dynamic block Persistent Storage and Managed Persistence enables Workbench to rapidly relocate deployed projects from failing nodes to healthy ones using standard Kubernetes pod scheduling.
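As an illustrative sketch, a cluster-internal connection is typically configured by pointing the database address at the Postgres Service's cluster DNS name. The service name (postgres-primary) and namespace (databases) below are assumptions; substitute the names used in your cluster.

# Hypothetical example: a Postgres Service named "postgres-primary" in the "databases" namespace
# is reachable from Workbench pods at its cluster DNS name
env:
  - name: DB_ADDR
    value: postgres-primary.databases.svc.cluster.local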
If a pod enters a crashloop (CrashLoopBackOff) state, inspect the pod's logs. The most common cause of a crashloop is a misconfiguration.
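For example, assuming Workbench is installed in a namespace named anaconda-enterprise (adjust to your install namespace), you can list the pods and view the logs of the failing one:

# List pods and their statuses (namespace name is an assumption)
kubectl get pods -n anaconda-enterprise

# View the logs of the crashlooping pod, including the previous container restart
kubectl logs <POD-NAME> -n anaconda-enterprise --previous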
If the anaconda_auth database is not created in Postgres, you must update the following environment variables in the anaconda-enterprise-ap-auth deployment:
# Replace <POSTGRES> with the hostname or service name of the Postgres instance
# Replace <POSTGRES-ACCOUNT> with the user account name for your Postgres instance
# Replace <PASSWORD> with the user account password for your Postgres instance
env:
  - name: WAIT_FOR_IT
    value: '<POSTGRES>:5432'
  - name: DB_USER
    value: <POSTGRES-ACCOUNT>
  - name: DB_PASSWORD
    value: <PASSWORD>
  - name: DB_ADDR
    value: <POSTGRES>
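One way to apply these values, sketched here under the assumption that Workbench runs in the anaconda-enterprise namespace, is to update the deployment's environment in place with kubectl; the deployment then rolls out new auth pods with the updated configuration:

# Update the auth deployment's environment variables in place
# (the namespace and the placeholder values are assumptions -- substitute your own)
kubectl set env deployment/anaconda-enterprise-ap-auth -n anaconda-enterprise \
  WAIT_FOR_IT='<POSTGRES>:5432' \
  DB_USER='<POSTGRES-ACCOUNT>' \
  DB_PASSWORD='<PASSWORD>' \
  DB_ADDR='<POSTGRES>'

# Confirm the pods restart cleanly with the new configuration
kubectl rollout status deployment/anaconda-enterprise-ap-auth -n anaconda-enterprise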