Deploying a Flask application
The process of deploying a Flask application (website and REST APIs) on Data Science & AI Workbench involves the following:
- Configuring Flask to run behind a proxy
- Enabling Anaconda Project HTTP command-line arguments
- Running Flask on the deployed host and port
Here is a small Flask application that includes the call to `.run()`. The file is saved as `server.py`.
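A minimal sketch of what that `server.py` might contain is shown below; the Blueprint name `hello` and the single route it serves are illustrative choices, not requirements.

```python
# server.py -- minimal Flask application using a Blueprint
from flask import Blueprint, Flask

# The 'hello' Blueprint holds the routes; it could live in its own module.
hello = Blueprint('hello', __name__)

@hello.route('/')
def index():
    return 'Hello World!'

if __name__ == '__main__':
    # Create the application, attach the Blueprint, and start the server.
    app = Flask(__name__)
    app.register_blueprint(hello)
    app.run()
```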
This Flask application was written using Blueprints, which are useful for separating components when working with a large Flask application. Here, the nested block under `if __name__ == '__main__'` could live in a separate file from the `hello` Blueprint.
Running behind an HTTPS proxy
Workbench maintains all HTTPS connections into and out of the server and deployed instances. When writing a Flask app, you only need to inform it that it will be accessed from behind the proxy provided by Workbench.
The simplest way to do this is with the `ProxyFix` middleware from `werkzeug`.
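A sketch of how that might look is below, assuming Werkzeug 0.15 or later (older releases exposed `ProxyFix` under `werkzeug.contrib.fixers`):

```python
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one hop of the X-Forwarded-For/-Proto/-Host headers set by the proxy.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)
```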
More information about running Flask behind a proxy is available in the Flask and Werkzeug documentation.
Enabling command-line arguments
In your `anaconda-project.yml` file, you define a deployable command as follows:
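A sketch of such a command entry is shown below; the command name `default` is an assumption, and the important part is the `supports_http_options` flag:

```yaml
commands:
  default:
    unix: python server.py
    supports_http_options: true
```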
The `supports_http_options` flag means that `server.py` is expected to act on the HTTP command-line arguments defined in the Anaconda Project Reference.
This is easily accomplished by adding the following `argparse` code before calling `app.run()` in `server.py`:
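A sketch of that parsing code follows; the flag names mirror the Anaconda Project HTTP options, while the defaults shown here (for example, port 8086) are illustrative assumptions:

```python
import argparse

parser = argparse.ArgumentParser()
# HTTP options passed by anaconda-project when supports_http_options is true.
parser.add_argument('--anaconda-project-host', action='append', default=[],
                    help='Hostname(s) the app may be accessed through')
parser.add_argument('--anaconda-project-port', type=int, default=8086,
                    help='Port to listen on')
parser.add_argument('--anaconda-project-url-prefix', default='',
                    help='URL prefix under which the app is served')
parser.add_argument('--anaconda-project-address', default='0.0.0.0',
                    help='IP address to bind to')
parser.add_argument('--anaconda-project-no-browser', action='store_true',
                    help='Do not open a browser window')
parser.add_argument('--anaconda-project-use-xheaders', action='store_true',
                    help='Trust X-Forwarded-* headers from the proxy')
parser.add_argument('--anaconda-project-iframe-hosts', action='append',
                    help='Hosts allowed to embed the app in an iframe')
args = parser.parse_args()
```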
Running your Flask application
The final step is to configure the Flask application with the Anaconda Project HTTP values and call `app.run()`. Note that registering the Blueprint provides a convenient way to deploy your application without having to rewrite the routes. Here is the complete code for the Hello World application.
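The sketch below puts the pieces together; the Blueprint name, route, and argument defaults are the same illustrative assumptions used above.

```python
# server.py -- Hello World app combining the Blueprint, ProxyFix, and argparse pieces
import argparse

from flask import Blueprint, Flask
from werkzeug.middleware.proxy_fix import ProxyFix

hello = Blueprint('hello', __name__)

@hello.route('/')
def index():
    return 'Hello World!'

if __name__ == '__main__':
    # Parse the HTTP options supplied by anaconda-project.
    parser = argparse.ArgumentParser()
    parser.add_argument('--anaconda-project-host', action='append', default=[])
    parser.add_argument('--anaconda-project-port', type=int, default=8086)
    parser.add_argument('--anaconda-project-url-prefix', default='')
    parser.add_argument('--anaconda-project-address', default='0.0.0.0')
    parser.add_argument('--anaconda-project-no-browser', action='store_true')
    parser.add_argument('--anaconda-project-use-xheaders', action='store_true')
    parser.add_argument('--anaconda-project-iframe-hosts', action='append')
    args = parser.parse_args()

    app = Flask(__name__)
    # Tell Flask it is served from behind the Workbench HTTPS proxy.
    app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)

    # Registering the Blueprint with the URL prefix avoids rewriting the routes.
    app.register_blueprint(hello, url_prefix=args.anaconda_project_url_prefix)

    app.run(host=args.anaconda_project_address, port=args.anaconda_project_port)
```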