Accelerating Deployment of AI with MLOps Orchestration
MLOps holds the key to accelerating the development and deployment of AI, so that enterprises can derive continuous business value while deploying and monitoring a growing number of AI applications in production. But on the journey to continuous integration and delivery (CI/CD) of data- and ML-intensive applications, we often need to integrate many tools and implement a lot of glue logic to make deployment of AI simpler, more efficient, and scalable, and to account for a growing set of use cases of increasing complexity. Furthermore, we need our tools to be usable by data scientists, not just developers or DevOps engineers. This is a difficult, time- and labor-intensive task. Is there finally an open source tool that can orchestrate the entire process and the various open-source frameworks, abstracting away the complexity while automatically delivering scalable, production-ready deployments?
In this workshop we will explore the concept of MLOps orchestration and how to build an MLOps stack on Kubernetes with Kubeflow, MLRun, Nuclio, and related tools. We will show how such a stack can simplify the process of getting data science to production in any environment (multi-cloud, on-prem, hybrid), from data collection and preparation (across real-time/streaming, historical, structured, or unstructured data), through automated model training, to model deployment and monitoring. We will demonstrate how to drastically cut the time and effort needed to get data science to production. We'll show how to map a business problem into an automated ML production pipeline, identify the right tools for the job, and ultimately run AI models in production at scale to accelerate business value with AI – all using Kubernetes and open source technologies. The session will include a live demo and real customer case studies across use cases such as fraud prevention, real-time recommendation engines, and NLP.
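The pipeline described above – data preparation, automated training, deployment, and a scoring endpoint – can be sketched in plain Python to illustrate what an orchestrator automates. This is a toy illustration only; the function and stage names are hypothetical and do not represent the MLRun, Kubeflow, or Nuclio APIs, which handle these stages as containerized, scalable steps.

```python
# Toy sketch of the pipeline stages an MLOps orchestrator automates.
# All names here (prepare_data, train_model, deploy_model, run_pipeline)
# are hypothetical illustrations, not the MLRun/Kubeflow APIs.

def prepare_data(raw):
    """Data collection/preparation: drop incomplete records."""
    return [r for r in raw if r.get("amount") is not None]

def train_model(rows):
    """'Training': derive a simple threshold from the prepared data."""
    amounts = [r["amount"] for r in rows]
    return {"threshold": sum(amounts) / len(amounts)}

def deploy_model(model):
    """Deployment: wrap the model in a callable scoring endpoint."""
    def predict(record):
        # Flag transactions above the learned threshold (toy fraud check).
        return record["amount"] > model["threshold"]
    return predict

def run_pipeline(raw):
    """Chain the stages end to end, as an orchestrator would."""
    model = train_model(prepare_data(raw))
    return deploy_model(model)

# Example: mean of the valid amounts (10, 20, 90) is 40, so the
# endpoint flags amounts above 40.
endpoint = run_pipeline(
    [{"amount": 10}, {"amount": 20}, {"amount": None}, {"amount": 90}]
)
print(endpoint({"amount": 100}))  # prints True
```

In a real MLOps stack, each of these functions would instead run as a tracked, containerized job on Kubernetes, with the endpoint served as an auto-scaling function and its predictions monitored for drift.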