Determining the resource requirements for a Kubernetes cluster depends on several factors, including the types of applications you plan to run, the number of users active at once, and the workloads you will manage within the cluster. Data Science & AI Workbench’s performance is tightly coupled with the health of your Kubernetes stack, so it is important to allocate enough resources to manage your users’ workloads.

Anaconda’s hardware recommendations ensure a reliable and performant Kubernetes cluster. However, most of these requirements are likely to be superseded by the requirements imposed by your existing Kubernetes cluster, whether that is an on-premises cluster configured to support multiple tenants or a cloud offering.

To install Workbench successfully, your systems must meet or exceed the requirements listed below. Anaconda has created a pre-installation checklist to help prepare you for installation. The checklist helps you verify that your cluster is ready to install Workbench, and that the necessary resources are reserved. Anaconda’s Implementation team will review the checklist with you prior to your installation.

Supported Kubernetes versions

Workbench is compatible with Kubernetes API versions 1.15-1.28. If your Kubernetes distribution serves the API at one of these versions, you can install Workbench.

Workbench has been successfully installed on the following Kubernetes variants:

  • Vanilla Kubernetes
  • VMware Tanzu
  • Red Hat OpenShift
  • Google Anthos

Aside from the basic requirements listed on this page, Anaconda also offers environment-specific recommendations for you to consider.

Administration server

Installation requires a machine with direct access to the target Kubernetes cluster and Docker registry. Anaconda refers to this machine as the Administration Server. Anaconda recommends choosing an Administration Server that will remain available for ongoing management of the application once it is installed. It is also useful for this server to be able to mount the storage volumes.

The following software must be installed on the Administration Server:

  • Helm version 3.2+
  • The Kubernetes CLI tool - kubectl
  • (OpenShift only) The OpenShift oc CLI tool
  • (Optional) The watch command line tool
  • (Optional) The jq command line tool
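
A quick way to confirm these tools are installed and that the Administration Server can reach the cluster (version output will vary by environment):

    # Confirm the required CLI tools are present and report their versions
    helm version --short        # expect v3.2 or later
    kubectl version --client
    oc version --client         # OpenShift clusters only
    watch --version             # optional
    jq --version                # optional

    # Confirm kubectl can reach the target Kubernetes cluster
    kubectl cluster-info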

CPU, memory, and nodes

  • Minimum node size: 8 CPU cores, 32GB RAM
  • Recommended node size: 16 CPU cores, 64GB RAM (or more)
  • Recommended oversubscription (limits/requests ratio): 4:1
  • Minimum number of worker nodes: 3

Minimally sized nodes should be reserved for test environments with low user counts and small workloads. Any development or production environment should meet or exceed the recommended requirements.

Workbench uses node labels, taints, and tolerations to ensure that workloads run on the appropriate nodes. Anaconda recommends identifying any necessary affinity (which nodes a workload runs on, based on the labels applied to them) or toleration settings prior to installation.
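
As a minimal sketch of what this preparation can look like, the commands below label and taint a node so that only workloads carrying a matching toleration are scheduled onto it. The node name and the label/taint key and value are illustrative placeholders, not values required by Workbench:

    # Label a node so workloads can target it via node affinity, and taint it so
    # that only pods with a matching toleration are scheduled there.
    # The node name and key/value pairs are placeholders for your environment.
    kubectl label node worker-1 workload-type=workbench-user
    kubectl taint node worker-1 workload-type=workbench-user:NoSchedule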

Resource profiles

Resource profiles available to platform users for their sessions and deployments are created by the cluster administrator. Each resource profile can be customized for the amount of CPU, memory, and (optionally) GPU resources available. Anaconda recommends determining what resource profiles you will require prior to installation. For more information, see Configuring workload resource profiles.
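
The profile definition format itself is covered in Configuring workload resource profiles; conceptually, each profile translates into the CPU, memory, and GPU requests and limits applied to a user’s session or deployment pod. A generic way to inspect what a running session pod was actually granted (the namespace and pod name are placeholders):

    # Inspect the resource requests and limits assigned to a session pod.
    # Replace the namespace and pod name with values from your cluster.
    kubectl -n <workbench-namespace> get pod <session-pod-name> \
      -o jsonpath='{.spec.containers[0].resources}'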

Namespace, service account, RBAC

Workbench should be installed in a namespace that is not occupied by any other applications, including other instances of Workbench. Create a service account for the namespace with sufficient permissions to complete the Helm installation and enable the dynamic resource provisioning Workbench performs during normal operation.
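
A minimal sketch of creating the namespace and service account; the names anaconda and workbench-installer are placeholders, and the permission grants themselves should come from the RBAC manifests reviewed with your Anaconda Implementation team:

    # Create a dedicated namespace and an installation service account.
    # The names used here are placeholders, not values required by Workbench.
    kubectl create namespace anaconda
    kubectl -n anaconda create serviceaccount workbench-installer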

Workbench requires more permissions than would normally be given to an application that only requires read-only access to the Kubernetes API. However, with the exception of the ingress controller, all necessary permission grants are limited to the application namespace. Please speak with the Anaconda Implementation team about any questions you may have regarding these permissions.

If you want to use the Anaconda-supplied ingress, it is also necessary to grant a small number of additional, cluster-wide permissions. This is because the ingress controller expects to be able to monitor ingress-related resources across all namespaces.
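
As an illustration only, the kind of cluster-wide read/watch access an ingress controller typically needs looks like the following; the authoritative rules for your installation come from the Anaconda-provided manifests:

    # Illustrative ClusterRole: ingress controllers typically need cluster-wide
    # permission to watch ingress resources in every namespace.
    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: workbench-ingress-watch    # placeholder name
    rules:
      - apiGroups: ["networking.k8s.io"]
        resources: ["ingresses", "ingressclasses"]
        verbs: ["get", "list", "watch"]
    EOF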

If you want to use the Kubernetes Dashboard and resource monitoring services included with Workbench, you must include additional permissions for each service.

You must establish permissions for both Prometheus and kube-state-metrics to utilize Workbench’s resource monitoring features.

Please review these RBAC configurations with your Kubernetes administrator. While it is possible to further reduce these scopes, doing so is likely to prevent normal operation of Workbench.

Security

Preparing your Kubernetes cluster to install Workbench involves configuring your environment in a way that both supports the functionality of Workbench and adheres to security best practices.

Workbench containers can be run using any fixed, non-zero UID, making the application compatible with an OpenShift Container Platform (OCP) restricted SCC, or an equivalent non-permissive Kubernetes security context. This reduces the risk to your systems if the container is compromised.
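
A generic illustration of the kind of restrictive security context this corresponds to; the pod name, image, and UID below are arbitrary examples rather than values mandated by Workbench:

    # Illustrative pod security context: run as a fixed, non-zero UID with
    # privilege escalation disabled. The UID 1000 is only an example.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: security-context-example    # placeholder
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: demo
          image: busybox
          command: ["sleep", "3600"]
          securityContext:
            allowPrivilegeEscalation: false
    EOF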

However, in order to enable the Authenticated Network File System (NFS) capability, allowing user containers to access external, authenticated fileshares (storage servers), user pods must be permitted to run as root (UID 0).

This configuration runs containers in a privileged state only long enough to determine and assign the authenticated group memberships of the user running the container. Once authentication is complete, the container drops down to a non-privileged state for all further execution.

Please speak with your Anaconda Implementation team for more information, and to see if it is possible for your application to avoid this requirement.

Storage

A standard installation of Workbench requires two Persistent Volume Claims (PVCs) to be statically provisioned and bound prior to installation. Anaconda strongly recommends a premium performance tier for provisioning your PVCs if the option is available.

It is possible to combine these into a single PersistentVolumeClaim that covers both needs, as long as that single volume simultaneously meets the performance requirements of both.

The root directories of these storage volumes must be writable by Workbench containers. This can be accomplished by configuring the volumes to be group writable by a single numeric GroupID (GID). Anaconda strongly recommends that this GID be 0. This is the default GID assigned to Kubernetes containers. If this is not possible, supply the GID within the Persistent Volume specification as a pv.beta.kubernetes.io/gid annotation.

To ensure that the data on these volumes is not lost if Workbench is uninstalled, do not change the ReclaimPolicy from its default value of Retain.
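
A minimal sketch of a statically provisioned, NFS-backed PersistentVolume that shows where the Retain reclaim policy and the GID annotation fit; the volume name, capacity, server, and export path are placeholders for your environment:

    # Illustrative statically provisioned volume. The name, size, NFS server,
    # and path are placeholders; note the Retain policy and GID annotation.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: workbench-storage             # placeholder
      annotations:
        pv.beta.kubernetes.io/gid: "0"    # group that may write the volume root
    spec:
      capacity:
        storage: 500Gi                    # size for your workloads
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: nfs.example.internal      # placeholder
        path: /exports/workbench          # placeholder
    EOF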

Ingress and firewall

Workbench is compatible with most ingress controllers that are commonly used with Kubernetes clusters. Because ingress controllers are a cluster-wide resource, Anaconda recommends that the controller be installed and configured prior to installing Workbench. For example, if your Kubernetes version falls within 1.19-1.26, any ingress controller with full support for the networking.k8s.io/v1 ingress API enables Workbench to build endpoints for user sessions and deployments.
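
A quick check that the cluster serves the networking.k8s.io/v1 ingress API and that an ingress class is registered:

    # Confirm the v1 ingress API is available and list registered ingress classes
    kubectl api-versions | grep '^networking.k8s.io/v1$'
    kubectl get ingressclass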

If your cluster is fully dedicated to Workbench, you can configure the Helm chart to install a version of the NGINX Ingress controller, which is compatible with multiple variants of Kubernetes, including OpenShift. Anaconda’s only modification to the stock NGINX container enables it to run without root privileges.

Your cluster configuration and firewall settings must allow all TCP traffic between nodes, particularly HTTP, HTTPS, and the standard Postgres ports.

Even an otherwise healthy cluster can have firewall rules or network policies that block inter-node communication, which disrupts the pods that Workbench requires to provision user workloads.

External traffic to Workbench is funneled entirely through the ingress controller, over the standard HTTPS port 443.

DNS/SSL

Workbench requires the following:

  • A valid, fully qualified domain name (FQDN) reserved for Workbench.

  • A DNS record for the FQDN, as well as a wildcard DNS record for its subdomains.

    Both records must point to the IP address allocated by the ingress controller. If you are using an existing ingress controller, you may be able to obtain this address prior to installation. Otherwise, you must populate the DNS records with the address after the initial installation is complete.

  • A valid wildcard SSL certificate covering the cluster FQDN and its subdomains. Installation requires both the public certificate and the private key.

    If the certificate chain includes an intermediate certificate, the public certificate for the intermediate is required. The scope of the wildcard only requires *.anaconda.company.com to be covered, ensuring that all subdomains under this specific domain are included in the SSL certificate and DNS configuration.

  • The public root certificate, if the above certificates were created with a private Certificate Authority (CA).

    Wildcard DNS records and SSL certificates are required for correct operation of Workbench. If you have any objections to this requirement, speak with your Anaconda Implementation team.
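
A few generic checks that the DNS records and wildcard certificate are in place; anaconda.company.com stands in for your own FQDN, and wildcard.crt and root-ca.crt are placeholder file names for your certificate and private root CA certificate:

    # Confirm the FQDN and an arbitrary subdomain both resolve to the ingress address
    dig +short anaconda.company.com
    dig +short app1.anaconda.company.com

    # Inspect the certificate subject and its Subject Alternative Names
    openssl x509 -in wildcard.crt -noout -subject
    openssl x509 -in wildcard.crt -noout -text | grep -A1 'Subject Alternative Name'

    # Verify the chain against the root certificate (private CA installations only)
    openssl verify -CAfile root-ca.crt wildcard.crt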

Docker images

Anaconda strongly recommends that you copy the Workbench Docker images from our authenticated source repository on Docker Hub into your internal docker registry. This ensures their availability even if there is an interruption in connectivity to Docker Hub. This registry must be accessible from the Kubernetes cluster where Workbench is installed.

In an air-gapped setting, this is required.
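
The general pull, tag, and push pattern for mirroring an image into an internal registry is sketched below; the image name, tag, and registry host are placeholders, and the actual list of Workbench images comes from Anaconda:

    # Mirror an image from Docker Hub into your internal registry.
    # All names and tags below are placeholders for your environment.
    docker login                                              # authenticate to Docker Hub
    docker pull <dockerhub-org>/<image>:<tag>
    docker tag  <dockerhub-org>/<image>:<tag> registry.example.internal/<image>:<tag>
    docker push registry.example.internal/<image>:<tag>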

Docker images used by Workbench are larger than many Kubernetes administrators are accustomed to. For more background, see Docker image sizes.

GPU Information

This release of Workbench supports up to Compute Unified Device Architecture (CUDA) 11.6 Data Center drivers.

Anaconda has directly tested the application with the following GPU cards:

  • Tesla V100 (recommended)
  • Tesla P100 (adequate)

Theoretically, Workbench will work with any GPU card compatible with the CUDA drivers, as long as the drivers are properly installed. Other cards supported by CUDA 11.6:

  • A-Series: NVIDIA A100, NVIDIA A40, NVIDIA A30, NVIDIA A10
  • RTX-Series: RTX 8000, RTX 6000, NVIDIA RTX A6000, NVIDIA RTX A5000, NVIDIA RTX A4000, NVIDIA T1000, NVIDIA T600, NVIDIA T400
  • HGX-Series: HGX A100, HGX-2
  • T-Series: Tesla T4
  • P-Series: Tesla P40, Tesla P6, Tesla P4
  • K-Series: Tesla K80, Tesla K520, Tesla K40c, Tesla K40m, Tesla K40s, Tesla K40st, Tesla K40t, Tesla K20Xm, Tesla K20m, Tesla K20s, Tesla K20c, Tesla K10, Tesla K8
  • M-Class: M60, M40 24GB, M40, M6, M4

Support for GPUs in Kubernetes is still a work in progress, and each cloud vendor provides different recommendations. For more information about GPUs, see Understanding GPUs.
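
A generic way to confirm that GPU nodes are advertising their devices to the Kubernetes scheduler, assuming the NVIDIA device plugin has been installed on the cluster:

    # Each GPU node should report an allocatable nvidia.com/gpu resource
    kubectl describe nodes | grep -i 'nvidia.com/gpu'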

Helm charts

Helm is the tool Workbench uses to streamline the packaging, configuration, and deployment of the application. It bundles the application’s Kubernetes resources into a single reusable package called a Helm chart. This chart contains all the resources necessary to deploy the application within your cluster, including .yaml configuration files, services, secrets, and config maps.
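
For orientation, a typical Helm installation command looks like the following; the release name, chart path, values file, and namespace are placeholders, and the actual chart and values come from Anaconda as part of the installation package:

    # Install (or upgrade) the chart into the target namespace.
    # Release name, chart path, values file, and namespace are placeholders.
    helm upgrade --install workbench ./workbench-chart \
      --namespace anaconda \
      --values values.yaml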

Pre-installation checklist

Anaconda has created this pre-installation checklist to help you verify that you have properly prepared your environment prior to installation.

Within this checklist, Anaconda provides commands or command templates for you to run in order to verify a given requirement, along with typical output to give you an idea of the kind of information you should see. Run each of these commands (modified as appropriate for your environment) and copy the outputs into a document. Send this document to your Anaconda Implementation team so that they can verify your environment is ready before you begin the installation process.
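
For example, a few of the cluster-level checks you might capture for that document (the namespace name is a placeholder):

    # Basic cluster facts to capture for the checklist
    kubectl version                      # client and server versions
    kubectl get nodes -o wide            # node inventory and readiness
    kubectl get storageclass             # available storage classes
    kubectl get namespace anaconda       # namespace reserved for Workbench (placeholder name)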