Welcome to the 20th issue of the MLOps newsletter.
In this issue, we will cover a McKinsey report on an AI playbook for executives, analyze a recent paper on curating data for NLP research, share some recent MLOps resources we’ve come across, and discuss ML for COVID detection.
Thank you for subscribing. If you find this newsletter interesting, tell a few friends and support this project ❤️
We recently came across this interesting resource from McKinsey for executives thinking about AI and the impact it can have on organizations. While we would take the dollar figures for AI opportunities with a grain of salt, there are some compelling arguments you might want to borrow for your own organization.
For example, “breakaway” companies (organizations achieving better scale and value than others) are 13X more likely to spend >25% of their budget on IT. (Let us know if you’re able to get the budget for a new MLOps tool using this! 😃)
Or that “breakaway” companies are 3X more likely to have well-defined analytics roles and career paths.
If you’d like to see more in-depth articles from McKinsey, read this article on market sizing for AI in different industries, this article on how to scale analytics and ML at your company, and this one on red flags with analytics in your organization.
Well, it’s not uncommon for companies to toot their own horn. However, we don’t often see it done this blatantly, even from AWS, so we thought we’d share it.
AWS claims (well, it’s actually a market research company called Omdia) that Amazon SageMaker is the top enterprise MLOps platform today. It’s no secret that SageMaker is a great ML platform, and the speed at which they’ve added functionality is breathtaking. Over the course of four years, they’ve managed to add all the features you can see below.
This is what the market research report had to say:
“AWS is the outright leader in the Omdia comparative review of enterprise MLOps platforms. Across almost every measure, the company significantly outscored its rivals, delivering consistent value across the entire ML lifecycle. AWS delivers highly differentiated functionality that targets highly impactful areas of concern for enterprise AI practitioners seeking to not just operationalize but also scale AI across the business.”
This report only compares cloud ML platforms and end-to-end ML platforms. We are seeing many new MLOps tools that are highly specialized (just labeling, or deployment, or monitoring), and we expect teams to make the best decisions for their unique circumstances. This might naturally lead teams to pick a best-of-breed solution for each part of the ML workflow, rather than a generic ML platform. We are excited to see how this market develops.
We have previously covered Andrew Ng’s talk about striking the right balance between data and modeling improvements in real-world ML applications. Here we cover a related paper by Anna Rogers arguing for the importance of data curation for robust, inclusive, and secure NLP models.
The NLP community currently invests far more research effort and resources into developing deep learning models than into curating training data. While we have made a lot of progress, it is now clear that our models learn all kinds of spurious patterns, social biases, and annotation artifacts.
Arguments for curating data
Bias: Human-written text contains all kinds of social biases based on gender, race, religion, class, age, etc. Models can learn these biases, and even amplify them, if we don’t curate the datasets they learn from.
Privacy: Memorization of training data, potentially including personally identifiable information, is a known issue in machine learning and poses a privacy risk.
Performance gap with respect to human-level NLU: The distributions of data in current NLP resources such as web text do not seem to provide enough signal for today’s models to reach human-level language understanding.
Security: Certain risks from adversarial attacks could be mitigated by exercising greater control over datasets.
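To make the privacy argument concrete, here is a minimal, hypothetical curation step that scrubs obvious PII patterns from raw text before it enters a training corpus. This is only an illustration of the idea; real pipelines rely on far more robust (often NER-based) PII detection, and the regexes and placeholder tokens below are our own assumptions, not from the paper.

```python
import re

# Hypothetical patterns for two common PII types. Real curation
# pipelines use much more thorough detection than these regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace email addresses and phone-like strings with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(scrub_pii(sample))
# → Contact Jane at [EMAIL] or [PHONE].
```

A model trained on the scrubbed text cannot memorize and later regurgitate the removed identifiers, which is exactly the failure mode the privacy argument is worried about.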
Arguments against curating data
Faithfully representing the world as it is: A language model should reflect how the language is used in the real world. Any data curation means that the input distribution to the model does not faithfully reflect the real world.
Dataset is already the entire data universe: There are cases where the training data is not drawn from some distribution but represents the entirety of the data universe.
An algorithmic approach to correcting biases: Perhaps the way to tackle models learning biases is not to curate the data but to curate model training.
Against the long-term direction of AI: As the paper describes it, “The great promise of DL (Deep Learning) was to stop trying to define everything, and let the machine to identify and leverage patterns from huge datasets”. Explicitly curating data seems to go against this ethos.
Irrespective of how one might feel about the role of data curation in NLP (and ML in general), let’s first agree on the desired outcome: we do want more robust, capable, and generalizable models, and we do not want models to learn or amplify stereotypes and biases. At the end of the day, most ML practitioners are pragmatists and will adopt any technique so long as it helps solve these problems at a reasonable cost. We share the author’s view: in most real-world ML applications, the binding constraints are data quality and quantity.
New Resources for MLOps
We’ve recently come across a few useful resources that we’d like to share with our readers:
MLOps course: We have shared MadeWithML in a previous newsletter, and here we wanted to highlight a new course on their website focused on all aspects of “ML in production”. It is fairly hands-on, accompanied by tutorials, code snippets, and relies on open-source tools. We’d recommend checking it out if you are looking to get your hands dirty with anything MLOps.
Practical MLOps Book: This book, written by Noah Gift and Alfredo Deza, takes you through what MLOps is, how it differs from DevOps, and how to put it into practice to operationalize your machine learning models.
MLOps World 2021 Event: The second annual MLOps World event is being held virtually this year from June 14-17. In addition to invited talks, we are also looking forward to the interactive workshops. You can check out the full list here.
Bodywork looks like a really useful product to help deploy ML projects to Kubernetes. Many of the challenges involved with bringing ML projects to production are DevOps related, and tools that can automate the deployment process with simple APIs or commands are very handy.
Here is a very well-written article by Alex Ioannides, one of the creators of Bodywork, that shows how a probabilistic model can be “trained” using Bayesian inference and then easily deployed using Bodywork. We learned some fun, new ML concepts by reading through this post!
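For a flavor of what “training” via Bayesian inference means, here is a generic Beta-Binomial sketch: start with a prior over an unknown rate, observe some outcomes, and update to a posterior. This is a textbook conjugate-prior illustration with made-up numbers, not the model from the linked article.

```python
# Beta-Binomial conjugate update: the "trained" model is just the
# posterior distribution obtained by combining prior and data.
alpha, beta = 1.0, 1.0          # uniform Beta(1, 1) prior over a rate
successes, failures = 42, 158   # hypothetical observed outcomes

# Conjugacy means the posterior is another Beta distribution,
# with the observed counts simply added to the prior parameters.
post_alpha = alpha + successes
post_beta = beta + failures

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Posterior mean rate: {posterior_mean:.3f}")
# → Posterior mean rate: 0.213
```

Deploying such a model is attractive precisely because inference at serving time is cheap: the “weights” are just the posterior parameters.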
We covered this story in an earlier issue, but it’s important enough to discuss again.
Here is an insightful thread by Vladimir Haltakov, a machine learning engineer based in Germany, about the challenges with applying Machine Learning to detect COVID-19 from data such as X-rays and CT scans (many of these learnings can be generalized to healthcare more broadly). Three key takeaways for us were:
Researchers from Cambridge considered 415 papers on the topic published from January to October 2020. None had any potential for clinical use!
Most papers examined had problems with the quality and quantity of data - datasets were small, imbalanced, and in some cases biased (e.g., some papers used a dataset containing non-COVID images from children and COVID images from adults).
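To see why small, imbalanced datasets are so misleading, consider this sketch with hypothetical label counts: a trivial “model” that always predicts the majority class looks highly accurate while learning nothing about COVID at all.

```python
from collections import Counter

# Hypothetical label distribution: 180 non-COVID scans, 20 COVID scans.
labels = ["non-covid"] * 180 + ["covid"] * 20

counts = Counter(labels)
majority_class, majority_count = counts.most_common(1)[0]

# Accuracy of a baseline that always predicts the majority class.
baseline_accuracy = majority_count / len(labels)

print(counts)
# → Counter({'non-covid': 180, 'covid': 20})
print(f"Always predicting '{majority_class}': {baseline_accuracy:.0%} accuracy")
# → Always predicting 'non-covid': 90% accuracy
```

This is why accuracy alone on an imbalanced dataset says little about clinical usefulness; metrics like sensitivity and specificity on a representative population matter far more.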
For ML practitioners and medical professionals alike, if you’re interested in exploring the applicability of machine learning to COVID detection, this Kaggle challenge is a good start.
Thanks for making it to the end of the newsletter! This has been curated by Nihit Desai and Rishabh Bhargava. If you have suggestions for what we should be covering in this newsletter, tweet us @mlopsroundup or email us at email@example.com. If you like what we are doing, please tell your friends and colleagues to help spread the word.