Organization and Processes Archives | Page 2 of 2 | Tecton


ML Projects Aren’t An Island

We’ve all seen the dismal (and, at this point, annoying) charts and graphs claiming ‘90+% of ML projects fail’, used as marketing ploys by various companies. What this oversimplified view of ML project success rates buries in misleading abstraction is the fact that some companies have a 100% success rate with long-running ML projects while others have a 0% success rate.

This talk walks through a simple concept that is obvious to the 100%-success-rate companies but a mystery to those that fail time and again: a project is not an island, and it has dependencies on other teams (both technical and non-technical); the DS team doesn’t need to be heroic in pursuing the most complex solution; and establishing solid engineering practices is what will set apart the projects that succeed from those that fail.

The main points that will be covered:

  • Can you really solve this with ML? Should you?
  • Make sure you have the data consistently and that it’s not garbage (feature stores are great!)
  • Start simple and only add complexity if you need to
  • Involve the business (SMEs)
  • Build code that your team can maintain and test
  • Monitor your data and predictions so you know when things are about to break
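The “not garbage” and monitoring bullets above can be sketched as a minimal data-quality gate. This is a pure-Python illustration; the feature names and value ranges are invented for the example, not taken from the talk:

```python
# Minimal, hypothetical data-quality gate: validate feature rows before
# scoring, so garbage inputs are rejected instead of silently predicted on.
# EXPECTED_RANGES is an illustrative assumption for this sketch.

EXPECTED_RANGES = {
    "age": (0, 120),                    # plausible bounds per numeric feature
    "account_tenure_days": (0, 20_000),
}

def validate_row(row):
    """Return a list of problems with this feature row (empty list = OK)."""
    problems = []
    for feature, (lo, hi) in EXPECTED_RANGES.items():
        value = row.get(feature)
        if value is None:
            problems.append(f"{feature}: missing")
        elif not (lo <= value <= hi):
            problems.append(f"{feature}: {value} outside [{lo}, {hi}]")
    return problems

def filter_scorable(rows):
    """Split rows into (scorable rows, rejected rows with reasons)."""
    good, bad = [], []
    for row in rows:
        problems = validate_row(row)
        if problems:
            bad.append((row, problems))
        else:
            good.append(row)
    return good, bad
```

Logging the rejected rows and their reasons gives the kind of early warning the last bullet asks for: a spike in rejections usually precedes a drop in prediction quality.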


Data Engineering Isn’t Like Software Engineering

There’s often a push for data engineers and data scientists to adopt every pattern that software engineers use. But adopting practices that are successful in one domain without understanding how they apply to another can lead to “cargo cult” behavior. There are fundamental reasons why working with data may require different workflows and systems, and that’s OK!

Model Calibration in the Etsy Ads Marketplace

When displaying relevant first-party ads to buyers in the Etsy marketplace, ads are ranked using a combination of outputs from ML models. The relevance of ads displayed to buyers and the costs charged to sellers are highly sensitive to the output distributions of the models. Various factors contribute to model outputs, including the makeup of the training data, the model architecture, and the input features. To make the system more robust and resilient to modeling changes, we have calibrated all ML models that power ranking and bidding.

In this talk, we will first discuss the pain points and use cases that revealed the need for calibration in our system. We will share the journey, learnings, and challenges of calibrating our machine learning models and the implications of calibrated outputs. Finally, we will explain how we are using the calibrated outputs in downstream applications and explore opportunities that calibration unlocks at Etsy.
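The abstract doesn’t say which calibration method Etsy uses, but histogram binning is one simple, classic technique for mapping raw model scores to empirical rates observed in held-out data. A pure-Python sketch under that assumption:

```python
# Histogram-binning calibration (a classic technique; not necessarily
# the method used at Etsy). Raw scores in [0, 1] are grouped into
# equal-width bins, and each bin is mapped to the empirical positive
# rate observed in held-out (score, label) pairs.

def fit_binning_calibrator(scores, labels, n_bins=10):
    """Learn per-bin empirical positive rates from held-out data."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)  # clamp s == 1.0 into last bin
        sums[b] += y
        counts[b] += 1
    # Fall back to the bin midpoint when a bin saw no data.
    return [sums[b] / counts[b] if counts[b] else (b + 0.5) / n_bins
            for b in range(n_bins)]

def calibrate(score, bin_rates):
    """Map a raw score in [0, 1] to its bin's empirical rate."""
    n_bins = len(bin_rates)
    return bin_rates[min(int(score * n_bins), n_bins - 1)]
```

The payoff the talk describes follows from this shape: once scores are anchored to empirical rates, downstream consumers (ranking, bidding) become insensitive to shifts in the raw output distribution caused by retraining or architecture changes.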

Panel: Building High-Performance ML Teams

As Machine Learning moves to production, ML teams have to evolve into high-performing engineering teams. Data science is still a central role, but it is no longer sufficient on its own. We now need new functions (e.g. MLOps Engineers) and new processes to bridge the gap between traditional data science and the world of software engineering. In this panel discussion, we’ll discuss how high-performing ML teams are organized to build and deploy production-quality ML models with engineering best practices.

Building Malleable ML Systems through Measurement, Monitoring & Maintenance

Machine learning systems are now easier to build than ever, but they still don’t perform as well as we would hope on real applications. I’ll explore a simple idea in this talk: if ML systems were more malleable and could be maintained like software, we might build better systems. I’ll discuss an immediate bottleneck towards building more malleable ML systems: the evaluation pipeline. I’ll describe the need for finer-grained performance measurement and monitoring, the opportunities paying attention to this area could open up in maintaining ML systems, and some of the tools that I’m building (with great collaborators) in the Robustness Gym and Meerkat projects to close this gap.

Panel: Challenges of Operationalizing ML

Our panel discussion will focus on the main challenges of building and deploying ML applications. We’ll discuss common pitfalls, development best practices, and the latest trends in tooling to effectively operationalize ML.

What is the MLOps Community?

The MLOps community started in March 2020 as a place for engineers and practitioners to get together and share their knowledge about operationalizing ML. Since its inception, it has grown into a community of more than 3k members with an array of initiatives ranging from podcasts, meetups, and reading groups to open office hours and Engineering Labs. In this talk, Demetrios and Ivan will go through some of the greatest learnings from interviewing hundreds of MLOps practitioners and explain the hands-on MLOps initiative: Engineering Labs.

The Only Truly Hard Problem in MLOps

MLOps solutions are often presented as addressing particularly challenging problems. This is mostly untrue. The majority of the problems solved by MLOps solutions have their origins in pre-ML data processing systems and are well addressed by the solutions we devised for those problems. Data ingestion, feature storage, model serving, and even model management and training are all relatively well addressed by traditional data processing approaches. And all of these can be solved with little or no understanding of the modeling challenge your system is designed to solve.

The only truly ML-centric, hard problem in MLOps is in the data. There is no automated, generalized way to mitigate the impact of subtle changes in the distribution of training data or of undetected changes in the semantics of that data. A general solution to this problem would unlock more usability and trust in ML than any other improvement we can make.
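The talk’s point is that no general solution exists; still, the kind of distribution change it describes is often watched for with a two-sample Kolmogorov–Smirnov statistic, which at least detects a shift even if it can’t explain or fix it. A pure-Python sketch (the 0.2 threshold is an illustrative assumption, not a universal constant):

```python
# Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
# empirical CDFs of a training-time sample and a serving-time sample of
# one feature. A large gap suggests the distribution has shifted.

def ks_statistic(sample_a, sample_b):
    """Max absolute difference between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    values = sorted(set(a) | set(b))
    i = j = 0
    max_gap = 0.0
    for v in values:
        while i < len(a) and a[i] <= v:
            i += 1
        while j < len(b) and b[j] <= v:
            j += 1
        max_gap = max(max_gap, abs(i / len(a) - j / len(b)))
    return max_gap

def has_drifted(train_sample, serve_sample, threshold=0.2):
    """Flag a feature whose serving distribution has moved past threshold."""
    return ks_statistic(train_sample, serve_sample) > threshold
```

Note this only catches distributional drift; the talk’s harder case, a silent change in what the values *mean* while their distribution stays plausible, slips straight past a test like this.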

Supercharging our Data Scientists’ Productivity at Netflix

Netflix’s unique culture affords its data scientists an extraordinary amount of freedom. They are expected to build, deploy, and operate large machine learning workflows autonomously with only limited experience in systems or data engineering. Metaflow, our ML framework (now open-source at metaflow.org), provides them with delightful abstractions to manage their project’s lifecycle end-to-end, leveraging the strengths of the cloud: elastic compute and high-throughput storage.

In this talk, we will have one of our data scientists working in Content Demand Modeling present one of the challenges that they faced earlier this year. We will use that as a backdrop to present the human-centric design principles that govern the design of Metaflow and its internals. Finally, we will wrap up the presentation by outlining the team’s experience using Metaflow and the impact of their work.


© Tecton, Inc. All rights reserved. Various trademarks held by their respective owners.

Request a free trial

Interested in trying Tecton? Leave us your information below and we’ll be in touch.
