
Succeeding at MLOps with CD4ML

Artificial Intelligence and Machine Learning usage is growing and delivering value for organizations across all industries. However, operationalizing such systems introduces challenges that span your people, processes, technologies, and organizational structures. MLOps is an extension of DevOps, fostering a culture where people work together to imagine, develop, deploy, operate, monitor, and improve ML systems in a continuous way.

 

Thoughtworks has developed Continuous Delivery for Machine Learning (CD4ML), an approach to implement MLOps that adapts the principles, practices, and tools from Continuous Delivery.

 

In this series, experts from Thoughtworks share their lessons and experiences applying CD4ML to successfully implement MLOps. 

Webinars in the series

Introduction to CD4ML

With Eric Nagler and Iswariya Manivannan

 

In our first session, Eric and Iswariya provide an overview of Thoughtworks’ de facto approach to MLOps: Continuous Delivery for Machine Learning (CD4ML). Companies commonly face many challenges when implementing an end-to-end machine learning process; because of this, according to VentureBeat, 87% of data science projects never make it to production. Our holistic approach, CD4ML, applies continuous delivery practices to developing machine learning models so that they are always ready for production. This allows companies to ensure quality, scalability, repeatability and reliability when undertaking data science projects.
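As a rough illustration of what "always ready for production" can mean in practice, the sketch below shows a hypothetical quality gate that a CD4ML pipeline might run after each training job. The metric names, threshold values and helper functions are illustrative assumptions, not something prescribed by CD4ML itself.

```python
# Hypothetical CD4ML quality gate: only promote a model version that meets
# the agreed thresholds, so whatever reaches production is releasable.
# Metric names and thresholds below are illustrative assumptions.

ACCURACY_THRESHOLD = 0.90   # illustrative value, agreed with the business
MAX_LATENCY_MS = 50         # illustrative serving-latency budget


def evaluate_candidate(metrics: dict) -> bool:
    """Return True if the candidate model meets every promotion criterion."""
    return (
        metrics["accuracy"] >= ACCURACY_THRESHOLD
        and metrics["p95_latency_ms"] <= MAX_LATENCY_MS
    )


def quality_gate(metrics: dict) -> None:
    """Fail the pipeline run (non-zero exit) if the model is not releasable."""
    if not evaluate_candidate(metrics):
        raise SystemExit(f"Model rejected by quality gate: {metrics}")
    print("Model passed the quality gate and can be promoted.")


if __name__ == "__main__":
    # In a real pipeline these numbers would come from the evaluation step.
    quality_gate({"accuracy": 0.93, "p95_latency_ms": 42})
```

In a continuous delivery pipeline, a check like this would run automatically on every candidate model, so a regression fails the build instead of reaching users.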

MLOps maturity model

A guide for improving the effectiveness of ML teams

With David Tan and Ada Leung

 

Teams delivering ML-driven products often find themselves trapped in unnecessary detours and unexpected time sinks. Without the right delivery practices and capabilities, teams miss out on opportunities to reduce waste and improve flow. In addition, teams that overlook inclusivity, diversity and safety do so to their own detriment; together these issues manifest as missed perspectives, missed delivery milestones and team burnout.

 

In this talk, we share Thoughtworks’ collective experience in delivering ML-driven products across multiple organisations, and offer a safer, better path for organisations travelling on this ML journey.

Operational AI with analytics modernization

A journey for enterprise intelligence

With Shraddha Surana and Sathyan Sethumadhavan 

 

AIOps and MLOps platforms have grown into a huge enterprise marketplace, with vendors claiming that migrating to these new platforms will solve your modernization goals and operational challenges. With so many overwhelming choices, every enterprise is at risk of simply adopting whatever comes its way.

 

The real task is to evaluate how these platforms fit your enterprise context. What are your integration challenges with the existing infrastructure? What level of customisation is required to enable a production line with these new tools? What are the skill gaps in the organization, and what would the upskilling journey look like? What is the future product roadmap?

 

Despite this crowded marketplace, enterprises are still trying to figure out their modernization strategy. This is because every organization has a different current state, starting point, end point, budget constraints, people skills and product roadmap.

 

Listen to this session if you have these enterprise goals:

a) “Democratize AI” within the enterprise, where data, models and AI services follow a self-service approach

b) Develop “Streamlined and Unified Governance” in your enterprise, where analytics and data science teams are still enabled for open innovation rather than operating within a limited ecosystem

c) Build an “Enterprise View” of all the analytical and data science assets so that businesses can collaborate and own their innovations

d) Build an enterprise culture of “Proactive Decisions”, where the business is enabled with data, model explainability and self-healing techniques

Guide to evaluating MLOps platforms

With Ryan Dawson and Lucy Fang

 

There’s a plethora of tools and platforms to help organizations get machine learning models into production. However, the number of options can be overwhelming and navigating the trade-offs is difficult. Should we buy or build a platform? When buying, which choices should we consider? What should be the key selection criteria? Just understanding which software to evaluate can be confusing.
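One lightweight way to keep such an evaluation structured is a weighted scoring matrix. The sketch below is a minimal, hypothetical example; the criteria, weights and candidate scores are made up for illustration and would need to reflect your own context.

```python
# Minimal sketch of a weighted scoring matrix for comparing MLOps platforms.
# Criteria, weights and scores are illustrative only.

criteria_weights = {
    "integration_with_existing_infra": 0.30,
    "customisation_effort": 0.25,
    "team_skill_fit": 0.25,
    "total_cost_of_ownership": 0.20,
}

# Scores from 1 (poor) to 5 (excellent) per candidate platform.
candidates = {
    "cloud_provider_platform": {"integration_with_existing_infra": 4,
                                "customisation_effort": 3,
                                "team_skill_fit": 4,
                                "total_cost_of_ownership": 3},
    "specialist_platform":     {"integration_with_existing_infra": 3,
                                "customisation_effort": 4,
                                "team_skill_fit": 3,
                                "total_cost_of_ownership": 3},
    "open_source_build":       {"integration_with_existing_infra": 5,
                                "customisation_effort": 2,
                                "team_skill_fit": 2,
                                "total_cost_of_ownership": 4},
}

for name, scores in candidates.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: {total:.2f}")
```

The value of the matrix is less the final number than the conversation it forces about which criteria actually matter for your organization.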

 

Watch the recording of our webinar with Ryan Dawson and Lucy Fang to:

  • Understand the categories and see how to find the best fit for your organization
  • Make sense of where the cloud providers, specialist platforms, and open source fit in
  • Understand the roles and personas involved and how their needs vary
  • See how to structure an evaluation process and how to leverage open source material to save you from burning huge amounts of research time.

Privacy and security automation in CD4ML

With Katharine Jarmul

 

How can we improve privacy and security of our systems by leveraging CD4ML workflows? As you continuously improve your machine learning infrastructure, you'll likely encounter privacy, security and compliance challenges. These are inherent in every machine learning project, and have special requirements when new models are actively deployed on a regular basis. We can use the CD4ML process to not only track and monitor those issues, but also proactively prevent them. By incorporating testing, automation and documentation in our CD4ML workflows, we provide better oversight, accountability and transparency into our "black box" systems, enabling a clear audit trail, improved privacy and security guarantees and safer production deployments.
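As one concrete, simplified illustration of the kind of automated check this can involve, the sketch below assumes the training data arrives as a pandas DataFrame and that a list of disallowed PII columns has been agreed; the column names and helper function are hypothetical. A test like this could run on every pipeline execution so that a schema change which reintroduces raw PII fails the build rather than reaching production.

```python
# Hypothetical automated privacy check for a CD4ML pipeline:
# fail the run if any disallowed PII column reaches the training data.

import pandas as pd

# Illustrative list of columns that must never appear in training data.
DISALLOWED_PII_COLUMNS = {"email", "phone_number", "national_id"}


def assert_no_raw_pii(training_data: pd.DataFrame) -> None:
    """Raise if the training data still contains disallowed PII columns."""
    leaked = DISALLOWED_PII_COLUMNS.intersection(training_data.columns)
    if leaked:
        raise ValueError(f"Raw PII columns found in training data: {sorted(leaked)}")


if __name__ == "__main__":
    # Toy example with already-anonymized features; adding an "email"
    # column to this frame would make the check raise and fail the pipeline.
    clean = pd.DataFrame({"age_bucket": ["30-39"], "purchase_count": [4]})
    assert_no_raw_pii(clean)
    print("Privacy check passed.")
```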

Turning the dials of continuous delivery to eleven with reinforcement learning

With Danilo Sato and Max Pagels

 

Reinforcement learning is a technique for creating learning systems in ever-changing environments. It enables you to learn in near real-time from feedback gathered from actual users in production. This brings extra challenges around how to architect, monitor, and evolve such systems. In this talk, Max and Danilo discuss definitions, motivations, and high-level recipes for applying Continuous Delivery discipline in these highly dynamic Machine Learning systems.
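To make "learning in near real-time from user feedback" concrete, the sketch below shows an epsilon-greedy multi-armed bandit, one of the simplest reinforcement-learning setups. The action names, epsilon value and simulated reward signal are illustrative assumptions and not taken from the talk.

```python
# Minimal epsilon-greedy bandit: choose an action, observe user feedback
# (reward), and update the action-value estimates incrementally.

import random

ACTIONS = ["variant_a", "variant_b", "variant_c"]  # illustrative choices
EPSILON = 0.1                                      # exploration rate

counts = {a: 0 for a in ACTIONS}
values = {a: 0.0 for a in ACTIONS}


def choose_action() -> str:
    """Explore with probability EPSILON, otherwise exploit the best estimate."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(values, key=values.get)


def update(action: str, reward: float) -> None:
    """Incrementally update the running mean reward for the chosen action."""
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]


if __name__ == "__main__":
    # Simulated feedback loop; in production the reward would come from users.
    true_click_rates = {"variant_a": 0.05, "variant_b": 0.12, "variant_c": 0.08}
    for _ in range(10_000):
        action = choose_action()
        reward = 1.0 if random.random() < true_click_rates[action] else 0.0
        update(action, reward)
    print(values)
```

Because the model updates on live feedback, the continuous delivery challenge shifts from shipping a fixed artifact to monitoring and governing a policy that keeps changing in production.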

Past webinars