Connecting to external Postgres
Anaconda recommends that you connect to your external Postgres instance immediately after installation completes. If you do not, you will need to create a dump file from the internal Workbench Postgres instance, then import it into your own Postgres instance, ensuring that it mirrors the internal setup.
You can connect Workbench to your organization’s pre-existing Postgres instance for both BYOK8s and Gravity installations. Connecting to an external Postgres instance enables you to take advantage of multiple High Availability/Disaster Recovery (HA/DR) options your organization may already have in place.
Workbench also supports connections to external Postgres instances that are hosted within the same overarching cluster environment. For example, your organization can have a multi-namespace/region deployment of Postgres within your Kubernetes cluster that connects to Workbench via the Service name. Combining this with dynamic block Persistent Storage and Managed Persistence enables Workbench to rapidly relocate deployed projects from failing nodes to healthy ones using standard Kubernetes pod scheduling.
To connect to your external Postgres instance:

- Create a Workbench account with read/write access and set the account's password.
- Open your `anaconda-enterprise-anaconda-platform.yml` file and edit the following section:
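The exact keys in this section vary by release, so consult the file shipped with your installation; as a hedged sketch, the database settings you edit typically point Workbench at the external host, port, and the read/write account created in the first step (the key names, host, and credential values below are placeholders, not values from this guide):

```yaml
# Hypothetical sketch only -- key names and values are placeholders.
# Use the database section of your own anaconda-enterprise-anaconda-platform.yml.
db:
  host: postgres.example.internal   # external Postgres host, or a Kubernetes Service name
  port: 5432
  user: workbench                   # account created with read/write access
  password: "<password-set-for-the-workbench-account>"
```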
- Save your changes, then restart all pods:
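One way to restart all pods is to delete them and let their deployments reschedule replacements. A sketch assuming `kubectl` access and that Workbench runs in a namespace named `anaconda-enterprise` (substitute your own namespace):

```shell
# Delete all pods in the Workbench namespace; their deployments recreate them.
kubectl delete pods --all -n anaconda-enterprise

# Watch until every pod reports a Running status.
kubectl get pods -n anaconda-enterprise --watch
```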
- Once the pods are in a running state, confirm that the following databases have been created in Postgres.
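You can list the databases on the external instance with `psql`; a sketch assuming network access to the Postgres host, with placeholder host and user values:

```shell
# List all databases on the external instance and check that the expected
# Workbench databases (for example, anaconda_auth) appear in the output.
psql -h postgres.example.internal -U workbench -l
```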
If a pod enters a crash loop, inspect its logs; the most common cause of a crash loop is a misconfiguration.
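A crash-looping pod's logs can be inspected with `kubectl` (the pod name and namespace below are placeholders):

```shell
# Show the logs of the pod's current container...
kubectl logs <pod-name> -n anaconda-enterprise

# ...and of the previous (crashed) container instance.
kubectl logs <pod-name> -n anaconda-enterprise --previous
```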
If the `anaconda_auth` database is not created in Postgres, you must update the following environment variables in the `anaconda-enterprise-ap-auth` deployment:
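The variable names and values come from your own deployment manifest; as a hedged illustration, one way to update environment variables on a deployment is `kubectl set env` (the variable name and value below are placeholders, not the actual settings):

```shell
# Placeholder variable and value: substitute the variables listed for the
# anaconda-enterprise-ap-auth deployment in your environment.
kubectl set env deployment/anaconda-enterprise-ap-auth \
  EXAMPLE_DB_VARIABLE=example-value -n anaconda-enterprise

# Alternatively, edit the deployment manifest directly:
kubectl edit deployment anaconda-enterprise-ap-auth -n anaconda-enterprise
```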