Building an Interactive ML Dashboard in Panel
By: Andrew Huang, Sophia Yang, Philipp Rudiger
HoloViz Panel is a versatile Python library that empowers developers and data scientists to build interactive visualizations with ease. Whether you’re working on machine learning projects, developing web applications, or designing data dashboards, Panel provides a powerful set of tools and features to enhance your data exploration and presentation capabilities. In this blog post, we will delve into the exciting features of HoloViz Panel, explore how it can revolutionize your data visualization workflows, and demonstrate how you can make an app like this using about 100 lines of code.
Try out the app and check out the code:
Harnessing the Power of ML/AI
ML/AI has become an integral part of data analysis and decision-making. With Panel, you can seamlessly integrate ML models and their results into your visualizations. In this blog post, we will explore how to perform an image classification task using the OpenAI CLIP model.
CLIP is pretrained on a large dataset of image-text pairs, enabling it to understand images and corresponding textual descriptions and work for various downstream tasks such as image classification.
There are two ML-related functions we use to perform the image classification task. The first, `load_processor_model`, loads a pre-trained CLIP model from Hugging Face. The second, `get_similarity_scores`, calculates the degree of similarity between the image and a provided list of class labels.
```python
@pn.cache
def load_processor_model(
    processor_name: str, model_name: str
) -> Tuple[CLIPProcessor, CLIPModel]:
    processor = CLIPProcessor.from_pretrained(processor_name)
    model = CLIPModel.from_pretrained(model_name)
    return processor, model


def get_similarity_scores(class_items: List[str], image: Image) -> List[float]:
    processor, model = load_processor_model(
        "openai/clip-vit-base-patch32", "openai/clip-vit-base-patch32"
    )
    inputs = processor(
        text=class_items,
        images=[image],
        return_tensors="pt",  # pytorch tensors
    )
    outputs = model(**inputs)
    logits_per_image = outputs.logits_per_image
    class_likelihoods = logits_per_image.softmax(dim=1).detach().numpy()
    return class_likelihoods[0]
```
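The last line of `get_similarity_scores` turns the model's raw logits into class probabilities with a softmax. As a standalone illustration of what that step computes (pure Python, no CLIP required; the logit values below are made up for the example):

```python
import math
from typing import List


def softmax(logits: List[float]) -> List[float]:
    """Convert raw scores into probabilities that sum to 1."""
    # Subtract the max before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


# Hypothetical logits for the labels ["cat", "dog", "parrot"].
scores = softmax([24.0, 21.0, 18.0])
print(scores)  # the "cat" label dominates
```

The largest logit maps to the largest probability, which is why the app can rank class names directly from `class_likelihoods`.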
Binding Widgets for Interactivity
One of the key strengths of Panel is its ability to bind widgets to functions. This functionality provides an intuitive interface for users to manipulate the underlying data and gain deeper insights through interaction.
Python Function
In our example, we have a `process_inputs` function, which formats the similarity scores we get from the image classification model into a Panel object with a polished UI. The actual function uses async; if you're unfamiliar with async, don't worry! We will explain it in a later section, but note that async is not a requirement for using Panel; Panel simply supports it!
```python
async def process_inputs(class_names: List[str], image_url: str):
    """
    High level function that takes in the user inputs and returns the
    classification results as panel objects.
    """
    ...
    yield results
```
Panel Widgets
There are two widgets that we use to interact with this function:

- `image_url` is a `TextInput` widget, which allows entering any string as the image URL.
- `class_names` is another `TextInput` widget, which accepts possible class names for the model to classify.
```python
image_url = pn.widgets.TextInput(
    name="Image URL to classify",
    value=pn.bind(random_url, randomize_url),
)
class_names = pn.widgets.TextInput(
    name="Comma separated class names",
    placeholder="Enter possible class names, e.g. cat, dog",
    value="cat, dog, parrot",
)
```
Binding Widgets to Function
Based on the `process_inputs` function signature, it accepts two parameters: `class_names` and `image_url`. We can bind each arg/kwarg to a widget using `pn.bind` like this:
```python
interactive_result = pn.panel(
    pn.bind(process_inputs, image_url=image_url, class_names=class_names),
    height=600,
)
```
- The first positional argument is the function name.
- The keyword arguments that follow match the function's signature, so the widgets' values are bound to the function's keyword arguments.
To clarify, if the widget were named `image_url_input` instead of `image_url`, then the call would be:

```python
pn.bind(process_inputs, image_url=image_url_input, ...)
```
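`pn.bind` is conceptually similar to `functools.partial` from the standard library: it pre-fills a function's arguments, with the important difference that arguments bound to widgets stay live, so the function re-runs whenever a widget's value changes. Here is a rough sketch of the static half of that idea; the `classify` function is an illustrative stand-in, not the app's real `process_inputs`:

```python
from functools import partial


def classify(class_names: str, image_url: str) -> str:
    # Stand-in for the real process_inputs function.
    return f"classifying {image_url!r} against {class_names!r}"


# Like pn.bind(classify, class_names=...), but without the reactivity:
bound = partial(classify, class_names="cat, dog, parrot")
print(bound(image_url="https://example.com/cat.jpg"))
```

With `pn.bind`, you would pass the widgets themselves instead of plain strings, and Panel would take care of re-invoking the function on every widget update.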
Adding Template Design Styling
The aesthetics of your applications and dashboards play a critical role in engaging your audience. Panel enables you to add styling based on popular design systems like Material or Fast, allowing you to create visually appealing and professional-looking interfaces.
In this example, we used the `bootstrap` design, where we can control what we'd like to show in multiple areas such as `title` and `main`, and we can specify sizes and colors for various components:
```python
pn.extension(design="bootstrap", sizing_mode="stretch_width")
```
We also set the `Progress` bar design to `Material`:
```python
row_bar = pn.indicators.Progress(
    ...
    design=pn.theme.Material,
)
```
Note, you can use `styles` and `stylesheets` too!
Caching for Expensive Tasks
Some data processing tasks can be computationally expensive, causing sluggish performance. Panel offers caching mechanisms that allow you to store the results of expensive computations and reuse them when needed, significantly improving the responsiveness of your applications.
In our example, we cached the output of `load_processor_model` using the `pn.cache` decorator, which means we don't need to download and load the model multiple times. This step will make your app feel much more responsive!
As an additional note: for further responsiveness, Panel also provides `defer_load` and loading indicators.
```python
@pn.cache
def load_processor_model(
    processor_name: str, model_name: str
) -> Tuple[CLIPProcessor, CLIPModel]:
    processor = CLIPProcessor.from_pretrained(processor_name)
    model = CLIPModel.from_pretrained(model_name)
    return processor, model
```
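`pn.cache` memoizes a function on its arguments, much like `functools.lru_cache` from the standard library. This small sketch uses `lru_cache` as a stand-in (so it runs without Panel) to show why the expensive load only happens once per distinct argument; the `load_model` function here is illustrative, not the app's real loader:

```python
from functools import lru_cache

calls = []


@lru_cache(maxsize=None)
def load_model(name: str) -> str:
    # Stand-in for an expensive download/load step.
    calls.append(name)
    return f"<model {name}>"


load_model("openai/clip-vit-base-patch32")
load_model("openai/clip-vit-base-patch32")  # served from cache
print(len(calls))  # the expensive body ran only once
```

Every repeated call with the same model name returns the cached object immediately, which is exactly the behavior we want for a model that is slow to download.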
Bridging Functionality with JavaScript
While Panel provides a rich set of interactive features, you may occasionally require additional functionality that can be achieved through JavaScript. It’s easy to integrate JavaScript code with Panel visualizations to extend their capabilities. By bridging the gap between Python and JavaScript, you can create advanced visualizations and add interactive elements that go beyond the scope of Panel’s native functionality.
At the bottom of our app, you might have noticed a collection of icons representing Panel's social media accounts, including LinkedIn and Twitter. When you click any of these icons, you are redirected to the corresponding profile. This click-and-redirect functionality is made possible through Panel's JavaScript integration via the `js_on_click` method:
```python
footer_row = pn.Row(pn.Spacer(), align="center")
for icon, url in ICON_URLS.items():
    href_button = pn.widgets.Button(icon=icon, width=35, height=35)
    href_button.js_on_click(code=f"window.open('{url}')")
    footer_row.append(href_button)
footer_row.append(pn.Spacer())
```
Understanding Sync vs. Async Support
Asynchronous programming has gained popularity due to its ability to handle concurrent tasks efficiently. We’ll discuss the differences between synchronous and asynchronous execution and explore Panel’s support for asynchronous operations. Understanding these concepts will enable you to leverage async capabilities within Panel, providing enhanced performance and responsiveness in your applications.
Adding `async` to your function allows collaborative multitasking within a single thread and lets IO-bound tasks happen in the background. For example, when we fetch a random image from the internet, we don't know how long we'll have to wait, and we don't want to block the rest of the program while waiting. Async enables concurrent execution, allowing us to perform other tasks in the meantime and keeping the application responsive. Be sure to add the corresponding `await`s too.
```python
async def open_image_url(image_url: str) -> Image:
    async with aiohttp.ClientSession() as session:
        async with session.get(image_url) as resp:
            return Image.open(io.BytesIO(await resp.read()))
```
If you are unfamiliar with async, it's also possible to rewrite this synchronously; async is not a requirement for using Panel!
```python
def open_image_url(image_url: str) -> Image:
    with requests.get(image_url) as resp:
        return Image.open(io.BytesIO(resp.content))
```
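If you'd like to see the payoff of async more concretely, here is a minimal, self-contained sketch (separate from the app's code) in which two simulated IO waits overlap instead of running back to back:

```python
import asyncio
import time


async def fake_fetch(name: str, delay: float) -> str:
    # Simulates waiting on the network without blocking the event loop.
    await asyncio.sleep(delay)
    return f"{name} done"


async def main() -> list:
    start = time.perf_counter()
    # Both "fetches" wait concurrently, so the total is ~0.2s, not ~0.4s.
    gathered = await asyncio.gather(
        fake_fetch("image", 0.2),
        fake_fetch("labels", 0.2),
    )
    elapsed = time.perf_counter() - start
    print(gathered, f"in {elapsed:.2f}s")
    return gathered


results = asyncio.run(main())
```

In a Panel app, the event loop uses that same waiting time to keep serving widget interactions, which is where the responsiveness comes from.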
Other Ideas to Try
Here we only explored one idea; there’s so much more you can try:
- Interactive Text Generation: Utilize Hugging Face’s powerful language models, such as GPT-2 or other transformer models, to generate interactive text. Combine Panel’s widget binding capabilities with Hugging Face models to create dynamic interfaces where users can input prompts or tweak parameters to generate custom text outputs.
- Sentiment Analysis and Text Classification: Build interactive dashboards using Hugging Face’s pre-trained sentiment analysis or text classification models. With Panel, users can input text samples, visualize predicted sentiment or class probabilities, and explore model predictions through interactive visualizations.
- Language Translation: Leverage Hugging Face’s translation models to create interactive language translation interfaces. With Panel, users can input text in one language and visualize the translated output, allowing for easy experimentation and exploration of translation quality.
- Named Entity Recognition (NER): Combine Hugging Face’s NER models with Panel to build interactive NER visualizations. Users can input text and visualize identified entities, highlight entity spans, and explore model predictions through an intuitive interface.
- Chatbots and Conversational AI: With Hugging Face’s conversational models, you can create interactive chatbots or conversational agents. Panel enables users to have interactive conversations with the chatbot, visualize responses, and customize the chatbot’s behavior through interactive widgets.
- Model Fine-tuning and Evaluation: Use Panel to create interactive interfaces for fine-tuning and evaluating Hugging Face models. Users can input custom training data, adjust hyperparameters, visualize training progress, and evaluate model performance through interactive visualizations.
- Model Comparison and Benchmarking: Build interactive interfaces with Panel to compare and benchmark different Hugging Face models for specific NLP tasks. Users can input sample inputs, compare model predictions, visualize performance metrics, and explore trade-offs between different models.
Check out our app gallery for other ideas! Happy experimenting!
Join Our Community
The Panel community is vibrant and supportive, with experienced developers and data scientists eager to help and share their knowledge. Join us and connect with us:
Talk to an Expert
Talk to one of our experts to find solutions for your AI journey.