This page details some common issues and their respective workarounds. For Anaconda installation or technical support options, visit our support offerings page.

403 error

Problem

A 403 error is a generic Forbidden error issued by a web server in the event the client is forbidden from accessing a resource.

The 403 error you are receiving may look like the following:

* Collecting package metadata (current_repodata.json): failed
* UnavailableInvalidChannel: The channel is not accessible or is invalid.
        * channel name: pkgs/main
        * channel url: https://repo.anaconda.com/pkgs/main
        * error code: 403

        * You will need to adjust your conda configuration to proceed.
        * Use `conda config --show channels` to view your configuration's current state, and use `conda config --show-sources` to view config file locations.

There are several reasons a 403 error could be received:

  • Your channels are misconfigured, for example because the secure location where the token is stored was accidentally deleted (most common)
  • A firewall or other security device or system is blocking your access (second most common)
  • Anaconda is blocking your access because of a potential terms of service violation (third most common)
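Before changing anything, it can help to check whether a stale token-backed channel URL is still present in your configuration. A minimal sketch, assuming the default ~/.condarc location; the /t/ pattern below illustrates the token-style URL segment and is not an official check:

```shell
# Token-backed channel URLs carry a "/t/<token>/" path segment; one left
# behind in ~/.condarc after the token was revoked is a common cause of 403s.
grep -n '/t/' ~/.condarc 2>/dev/null || echo "no token-style URLs found in ~/.condarc"
```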

Solution

  1. First, run the following to remove your current default_channels configuration:

    conda config --remove-key default_channels
    
  2. Next, install or upgrade the conda-token tool:

    conda install --freeze-installed conda-token
    
  3. Lastly, re-apply the token and configuration settings:

    # Replace <TOKEN> with your token
    conda token set <TOKEN>
    

If this doesn’t resolve the issue, Anaconda recommends consulting our Terms of Service error page.

Conda: Channel is unavailable/missing or package itself is missing

Problem

Configuring your .condarc might cause you to be unable to install packages. You might receive an error message that the channel or package is unavailable or missing.

Solution

One potential fix for all of these is to run the following command:

conda clean --index-cache

This will clear the “index cache” and force conda to sync metadata from the repo server.

If you experience any trouble regarding SSL errors, confirm that you are adhering to the guidance in this section.

Moving CA certs to the docker-host

Anaconda recommends mounting the cacert.pem file to the docker-host and concatenating the required root CAs to that file on the docker-host, rather than copying the file to the containers.

  1. First, place a cacert.pem file in ${BASE_INSTALL_DIR}/config/cacert.pem. You can obtain this file from any conda environment with the certifi package installed ($CONDA_PREFIX/ssl/cacert.pem) or from one of the Package Security Manager docker containers:

# Ensure ${BASE_INSTALL_DIR} is set to your base install directory.
    docker cp $(docker ps | awk '/repo_api/ {print $1}'):/conda/ssl/cacert.pem  ${BASE_INSTALL_DIR}/config/cacert.pem
    
  2. Concatenate the necessary root certs in pem format:

# Replace <YOUR_CERT> with the path to your cert; ensure ${BASE_INSTALL_DIR} is set to your base install directory.
    cat <YOUR_CERT> >> ${BASE_INSTALL_DIR}/config/cacert.pem
    
  3. You now have two options for updating the Package Security Manager application to use the docker-host cacert.pem:

  • Edit the docker-compose.yml file and add volume mounts to nginx_proxy, repo_api, repo_worker, and repo_dispatcher, or
  • Use a separate .yml file (shown below) and let docker-compose merge it with the stock one when starting up:
cat << EOF > custom-cas.yml
services:
   nginx_proxy:
         volumes:
         - \${BASE_INSTALL_DIR}/config/cacert.pem:/conda/ssl/cacert.pem
   repo_api:
         volumes:
         - \${BASE_INSTALL_DIR}/config/cacert.pem:/conda/ssl/cacert.pem
   repo_worker:
         volumes:
         - \${BASE_INSTALL_DIR}/config/cacert.pem:/conda/ssl/cacert.pem
   repo_dispatcher:
         volumes:
         - \${BASE_INSTALL_DIR}/config/cacert.pem:/conda/ssl/cacert.pem
EOF
docker compose -f docker-compose.yml -f custom-cas.yml up -d
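Two quick checks before restarting the stack, assuming the paths used above. The certificate count is a pure-text check (each certificate in a PEM bundle starts with a BEGIN CERTIFICATE marker; it is illustrated here on an inline two-cert fixture). The docker compose line renders the merged configuration without starting containers; it needs the real files on the docker-host, so it is left commented:

```shell
# Count certificates in a PEM bundle; compare the count before and after the
# append in step 2 by running the same grep against
# ${BASE_INSTALL_DIR}/config/cacert.pem on the docker-host.
printf '%s\n' \
  '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' \
  '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' |
  grep -c 'BEGIN CERTIFICATE'    # prints: 2

# Render the merged compose configuration and confirm every service received
# the cacert.pem mount (docker compose v2):
# docker compose -f docker-compose.yml -f custom-cas.yml config | grep -n 'cacert.pem'
```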

Storing user credentials

If the proxy connection requires credentials, Anaconda recommends storing them in the .env file (located in the same folder as the docker-compose.yml file) and referencing them in docker-compose.yml, so that docker-compose.yml remains readable by a broader set of users.

When Package Security Manager is installed, an .env file is created alongside the docker-compose.yml file by default. You can edit this file and add variables to be referenced in the docker-compose.yml file as follows:

# example .env
BASE_INSTALL_DIR=/opt/anaconda/repo
DOCKER_REGISTRY=
DOMAIN=example.com
NGINX_PROXY_PORT=443
POSTGRES_HOST=postgres
POSTGRES_URI=postgresql://postgres:postgres@postgres/postgres
REDIS_ADDR=redis://redis:6379/0
VERSION=6.1.4
PROTOCOL=https
REPO_TOKEN_CLIENT_SECRET=something
REPO_KEYCLOAK_SYNC_CLIENT_SECRET=somethingsomething
DOCKER_UID=0
DOCKER_GID=0

# Added for this example
PROXY_USER=eden
PROXY_PW=eden-password

Then, merge the following with the existing compose content in the docker-compose.yml file. Notice that the variables from .env are referenced with ${...} interpolation, which docker compose resolves from the .env file:

repo_worker:
  environment:
    - HTTP_PROXY=http://${PROXY_USER}:${PROXY_PW}@proxypy:8899
    - HTTPS_PROXY=http://${PROXY_USER}:${PROXY_PW}@proxypy:8899
repo_api:
  environment:
    - HTTP_PROXY=http://${PROXY_USER}:${PROXY_PW}@proxypy:8899
    - HTTPS_PROXY=http://${PROXY_USER}:${PROXY_PW}@proxypy:8899
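To see what docker compose will substitute, you can mimic the interpolation in plain shell; the expansion below uses the example credentials from the .env file above. Against the real stack, `docker compose config` performs the same rendering without starting containers (left commented since it needs the deployed files):

```shell
# Simulate the interpolation docker compose performs on ${VAR} references:
PROXY_USER=eden
PROXY_PW=eden-password
echo "HTTP_PROXY=http://${PROXY_USER}:${PROXY_PW}@proxypy:8899"
# prints: HTTP_PROXY=http://eden:eden-password@proxypy:8899

# With the real files in place, the rendered output should show the literal
# credentials rather than ${PROXY_USER}:
# docker compose config | grep 'HTTP_PROXY'
```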

Local mirrors

Another option is mirroring locally. You can do this by getting packages into an admin-managed channel (say, anaconda/main) and then mirroring filtered packages from that channel to other channels. This makes Package Security Manager the source for the mirror. As such, Package Security Manager needs to be able to validate its own certificate, which in most cases won't work out of the box (for much the same reason that a proxy terminating SSL will not work).

For this to work, then, you will most likely need to update the cacert.pem file of all Package Security Manager containers. For this reason, Anaconda recommends hosting the cacert.pem file on the docker-host instead of the containers.

SSL verification error

Cause

You may receive an SSL verification error if you have SSL enabled with a self-signed certificate.

Solution

Run the following command to disable SSL certificate check:

conda repo config --set ssl_verify False

Run the following to verify the above command worked:

conda repo config --show

OAuth2 with self-signed certificates

Cause

Even if you have ssl_verify set to false while using self-signed certificates and SAML, you may still run into SSL verification errors.

You may receive an error similar to the following:

ConnectionError: HTTPSConnectionPool(host='my_server_endpoint', port=443): Max retries exceeded with url: /endpoint (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb73dc3b3d0>: Failed to establish a new connection: [Errno 110] Connection timed out',))

Solution

Set an environment variable called REQUESTS_CA_BUNDLE that points to your CA bundle file.

For Windows, run the following:

SET REQUESTS_CA_BUNDLE=<PATH>
conda repo login

For Unix, run the following:

export REQUESTS_CA_BUNDLE=<PATH>
conda repo login
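Before retrying the login, a quick sanity check can confirm the variable points at a readable PEM bundle. A minimal sketch; the messages are illustrative, not output from any Anaconda tool:

```shell
# Check that REQUESTS_CA_BUNDLE is set, the file exists, and it looks like PEM:
if [ -f "${REQUESTS_CA_BUNDLE:-}" ] && grep -q 'BEGIN CERTIFICATE' "$REQUESTS_CA_BUNDLE"; then
  echo "REQUESTS_CA_BUNDLE looks usable"
else
  echo "REQUESTS_CA_BUNDLE is unset, missing, or not a PEM file"
fi
```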

HTTP 000 CONNECTION FAILED

If you receive this error message, run the following command:

conda config --set ssl_verify false

Using Redis

Cause

By default, Redis does not require a password. Not enabling a password requirement leaves your instance of Package Security Manager vulnerable.

Solution

Follow these steps to password protect your instance:

  1. In the installation directory, update config/nginx/conf.d/repo.conf to include the add_header directive somewhere in the server block:

    server {
        ...
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        ...
    }
    
  2. Create a directory called redis in the config directory:

    mkdir -p config/redis
    
  3. Create a file called redis.conf inside config/redis with the following contents:

    requirepass "mypassword"
    
  4. Update the docker-compose.yml file of your Package Security Manager installation to mount this custom Redis config:

    redis:
        image: ${DOCKER_REGISTRY}redis-ubi:${VERSION}
        restart: always
        volumes:
        - ${BASE_INSTALL_DIR}/config/redis/redis.conf:/usr/local/etc/redis/redis.conf
        command:
        - /usr/local/etc/redis/redis.conf
    
    #Alternative:
    #  redis:
    #    image: ${DOCKER_REGISTRY}redis-ubi8:${VERSION}
    #    restart: always
    #    ports:
    #    - 6379:6379
    #    command:
    #    - /usr/local/bin/redis-server
    #    - --requirepass mypassword
    # If you use this alternative configuration, there's no need to create a config/redis/redis.conf file to mount in.
    
  5. Update the REDIS_ADDR variable in the .env file to include the password:

    ...
    REDIS_ADDR=redis://:mypassword@redis:6379/0
    ...
    
  6. Restart docker-compose services so changes are picked up. You can do this using:

    docker compose up --detach
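A couple of sanity checks after the restart. The first is a pure-shell extraction of the password embedded in the example REDIS_ADDR value from step 5. The second pair of commands (left commented, since they need the stack running) asks the Redis container directly; the service name redis and password mypassword are the examples used in the steps above:

```shell
# Pull the password back out of the REDIS_ADDR URL from step 5:
REDIS_ADDR='redis://:mypassword@redis:6379/0'
printf '%s\n' "$REDIS_ADDR" | sed -E 's#.*//:([^@]*)@.*#\1#'   # prints: mypassword

# With the stack up, an authenticated ping should answer PONG and an
# unauthenticated one should be refused with a NOAUTH error:
# docker compose exec redis redis-cli -a mypassword ping
# docker compose exec redis redis-cli ping
```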
    

Error message on initial login to Package Security Manager

If you receive an error on your initial login to Package Security Manager and are unable to set your license, it is likely that you have forgotten to include a default user during your installation command, and your current user lacks the administrative permissions necessary to register your license.

To correct this issue:

  1. Log in to the Keycloak administrative console.
  2. Navigate to the dev realm.
  3. Select Users from the left-hand navigation.
  4. Create a user or select an existing user from the list to view their profile page.
  5. Select the Role Mapping tab.
  6. Click Assign role.
  7. Select the admin role, then click Assign.
  8. Return to Package Security Manager and log in as this admin user.
  9. Set your license.

Backup fails with exit status 1

If the backup command fails while dumping the repo database and returns exit status 1, complete the following procedure:

  1. Connect to your instance of Package Security Manager. If necessary, get help from your IT department with this step.

  2. Navigate to your installation directory by running the following command:

# Replace <INSTALLER_DIRECTORY> with the path to your installation directory
    cd <INSTALLER_DIRECTORY>
    
  3. Open your docker-compose.yml file using your preferred file editor.

  4. Find the postgres: section of the file.

  5. Replace the following portion of the postgres: configurations:

    expose:
        - "5432"
    

    With this:

    ports:
      - "5432:5432"
    


  6. Retry the backup command.
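For context, the edited postgres: section might look roughly like the following sketch. The image name and surrounding keys are illustrative placeholders (keep whatever your file already has); the ports: mapping is the only change that matters here:

```yaml
postgres:
  image: ${DOCKER_REGISTRY}postgres-ubi:${VERSION}   # illustrative image name
  restart: always
  ports:
    - "5432:5432"
```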
