Deploying MLOps Platform to Enable End-to-End ML Workflows
Deploying the Kubeflow MLOps platform in AWS to enable our Data Science team to create end-to-end ML workflows for automated delivery of machine-learning models.
This was originally published in April 2021.
Abstract
In this white paper, you will learn about the MLOps platform that a WWT machine-learning (ML) platform infrastructure team built to reliably deliver trained and validated ML models into production. By deploying the Kubeflow MLOps platform in AWS as a component of our common ML infrastructure, the team enabled WWT data scientists to create end-to-end ML workflows. As part of the MLOps platform deployment, the team built an automated delivery pipeline proof-of-concept to train and productionize a natural language processing (NLP) deep learning model, along with microservices that enable a user to search for relevant WWT platform articles that have been ranked by that productionized model.
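To give a concrete sense of what an "end-to-end ML workflow" on Kubeflow looks like, the sketch below outlines a delivery pipeline in the Kubeflow Pipelines (KFP v1) SDK with train, validate, and deploy stages. The component names, S3 paths, and gating logic are illustrative assumptions for this abstract, not the implementation described in the full paper.

```python
# Minimal sketch of a Kubeflow Pipelines workflow for automated model delivery.
# All paths, component bodies, and names are hypothetical placeholders.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def train_model(data_path: str) -> str:
    """Train the NLP article-ranking model and return the model artifact URI."""
    # ... training code would go here ...
    return "s3://example-bucket/models/article-ranker/latest"


def validate_model(model_uri: str) -> str:
    """Evaluate the trained model on a holdout set; return 'pass' or 'fail'."""
    # ... evaluation code and metric thresholds would go here ...
    return "pass"


def deploy_model(model_uri: str):
    """Roll the validated model out behind the serving endpoint used by the search microservice."""
    # ... deployment code would go here ...


# Wrap the Python functions as lightweight pipeline components.
train_op = create_component_from_func(train_model, base_image="python:3.8")
validate_op = create_component_from_func(validate_model, base_image="python:3.8")
deploy_op = create_component_from_func(deploy_model, base_image="python:3.8")


@dsl.pipeline(
    name="article-ranker-delivery",
    description="Train, validate and deploy the NLP article-ranking model.",
)
def article_ranker_pipeline(data_path: str = "s3://example-bucket/articles/"):
    train_task = train_op(data_path)
    validate_task = validate_op(train_task.output)
    # Deploy only if validation passes.
    with dsl.Condition(validate_task.output == "pass"):
        deploy_op(train_task.output)


if __name__ == "__main__":
    # Compile to a workflow spec that can be uploaded to the Kubeflow Pipelines UI.
    kfp.compiler.Compiler().compile(article_ranker_pipeline, "article_ranker_pipeline.yaml")
```

In practice, each stage runs as its own container on the Kubernetes cluster, which is what lets the platform team automate delivery from training data through to a productionized model serving the article-search microservices.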
"WWT Research reports provide in-depth analysis of the latest technology and industry trends, solution comparisons and expert guidance for maturing your organization's capabilities. By logging in or creating a free account you’ll gain access to other reports as well as labs, events and other valuable content."
Thanks for reading. Want to continue?
Log in or create a free account to continue viewing Deploying MLOps Platform to Enable End-to-End ML Workflows and access other valuable content.