In 2019, when Kevin and I decided we wanted to help companies put machine learning into production, we knew the road ahead would be anything but boring. Not only were we on a mission to solve one of the toughest challenges in the machine learning world, the data problem, but we were also choosing to create a new product category in the already bustling machine learning (ML) ecosystem. We got more than a few blank stares when we told people we were building a feature store.
But when we talked with data scientists and engineers working to operationalize ML at their companies, the excitement was obvious. It validated what we’d learned building Uber’s Michelangelo platform: that the most difficult and expensive part of putting ML into production was managing and transforming the data. To name just a few challenges, you need to eliminate sources of bias like training-serving skew and data leakage, while combining real-time and historical data to make fast decisions at scale based on the freshest information available.
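To make the leakage problem concrete, here is a minimal, hypothetical sketch (not Tecton code) of a point-in-time correct join in pandas. Each training label may only see feature values that already existed at prediction time; joining naively on user ID alone would let future feature values leak into training data, producing training-serving skew.

```python
import pandas as pd

# Labeled events: when a prediction was needed, and what the outcome was.
labels = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event_ts": pd.to_datetime(["2022-06-01", "2022-06-10", "2022-06-05"]),
    "is_fraud": [0, 1, 0],
}).sort_values("event_ts")

# Feature values, stamped with the time each value became available.
features = pd.DataFrame({
    "user_id": [1, 1, 2],
    "feature_ts": pd.to_datetime(["2022-05-20", "2022-06-05", "2022-06-04"]),
    "txn_count_30d": [3, 9, 5],
}).sort_values("feature_ts")

# merge_asof picks, for each label row, the latest feature value at or
# before event_ts -- never a future value, so training matches serving.
training_df = pd.merge_asof(
    labels, features,
    left_on="event_ts", right_on="feature_ts",
    by="user_id", direction="backward",
)
print(training_df[["user_id", "event_ts", "txn_count_30d", "is_fraud"]])
```

A feature platform performs this bookkeeping automatically across every feature and training set, rather than leaving each team to hand-roll (and occasionally get wrong) joins like this one.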
To help more teams use ML operationally (rather than just for analytics), we wanted to bring the functionalities of a mature enterprise feature store to every organization. As demand for our feature store rapidly grew, so did the realization that storing, sharing, and reusing features only solved part of the problem for ML teams. Our users told us they were really looking for a solution to manage the complete lifecycle of ML features, so we developed our product into a complete feature platform for ML: it combines the feature store capabilities of storing, sharing, and reusing features with the ability to build and automate feature pipelines that generate feature values from batch, streaming, or real-time data.
Today, just three years since we founded Tecton, we’re happy to see all types of organizations—from tiny startups to multinationals—using our feature platform to save on costs and get models into production faster. Even better, they’re able to use streaming data and improved feature standardization to build powerful use cases like fraud detection, recommendation systems, and real-time pricing. Tecton is helping make operational ML accessible where it once seemed out of reach, and I couldn’t be more proud. But our journey is just beginning.
Tecton: From a feature store to a feature platform
I’m thrilled and humbled to share that Tecton has raised $100M in a Series C funding round led by Kleiner Perkins (adding to $60M raised in past rounds). Snowflake and Databricks became strategic investors, and Tiger Global and Bain Capital Ventures joined as new investors as well.
In Bain Capital Ventures Partner Aaref Hilaly’s own words,
“It’s just a matter of time until organizations, large and small, integrate real-time predictive applications into their everyday operations. Given the strength of their technology and team, we believe Tecton is well positioned to be the catalyst that helps enterprises experience firsthand the significant uplift of leveraging real-time or streaming data in predictive data products.”
And that’s exactly what we strive to do.
So far, Tecton has focused on managing data for offline training and online inference. Following our customers' needs, we've added support for useful capabilities like simple and performant real-time and streaming transformations. However, ML teams are still left with a lot of data work when building and running operational ML—for example: management of training datasets, use case–specific serving architectures, event logging, label and ground-truth management, compliance, and monitoring. As we continue to evolve Tecton's feature platform, our vision is to support all of these capabilities within Tecton's platform and operational ML dataflow model. We aim to make feature engineering and data operations for ML applications simpler, more reliable, and, frankly, more achievable for ML teams around the world.
As we’ve spoken with countless teams and customers over the past three years, one thing has become clear: To succeed, operational ML needs to be centralized and built on top of the modern data stack, not on separate infrastructure. That’s why we’ve been so focused on partnerships and integrations. Most recently, we were named Databricks’ ML/AI Partner of the Year and Snowflake’s Emerging Technology Partner of the Year. We look forward to continuing to deepen our partnerships so that teams can easily access and leverage all their data to build features for ML.
And, of course, as our customer base continues to grow, so do the requirements our platform must meet as customers put ML into production. To that end, I'll share some of our most significant product developments since our last funding:
- Snowflake: Tecton integrates with Snowflake, allowing any Snowflake user to easily operationalize their data for machine learning. Tecton connects to Snowflake as a data source, orchestrates in-place transformations on Snowflake, stores offline feature data on Snowflake, and has released an integration with Snowpark for Python in private preview.
- Redis Online Store: Tecton now supports Redis in addition to DynamoDB as an online store. We added this to support the high-throughput, low-latency, low-cost needs of customers running ML use cases like search and recommendations at scale.
- Managed online store: Tecton orchestrates your infrastructure to provide a feature store experience. However, many customers don’t want the operational burden of managing their own infrastructure—so we built a managed offering to simplify the feature store experience.
- Guided onboarding: Feature stores are a new category and customers are still learning what they can do; we’ve invested in improving the “getting started” experience to demonstrate value immediately.
- Tecton SDK 0.4: We’ve released a simpler version of Tecton’s feature framework to help customers write complex features quickly and easily.
- Access controls (ACLs): Tecton is a hit with enterprises that want a standardized solution for the modern data stack. These enterprises care about isolation and security, so we've released access controls that let them launch features in production with confidence.
In the last year, our customer base has increased 5x and we’ve grown our ARR (annual recurring revenue) nearly 3x. We envision a future where it’s as easy to put ML into production as it is to deploy code, and we’re just getting started. To learn more, request a free trial of Tecton here. Or, if you want to be a part of it all, we’re hiring!