Together, Tecton and Snowflake provide a simple and fast path to building and serving production-grade features that support a broad range of machine learning applications, including fraud detection, recommendation systems, real-time pricing, and much more.
Available to all Snowflake customers, Tecton acts as the central hub for ML features, allowing data teams to define features as code using Python and SQL and then automating production-grade ML data pipelines to generate accurate training datasets and serve up-to-date features online for real-time inference.
Build more powerful models by easily incorporating batch, streaming, and real-time data.
Deliver more business value from real-time ML applications in minutes rather than months.
Continuously improve and iterate on production ML models across teams and use cases.
Tecton allowed us to significantly speed up our time to put models in production, enabling us to get faster feedback on our ML applications and, in turn, build products that deliver substantial value to our customers.
Hendrik Brackmann, Director of Data Science and Analytics
Key Challenges of Production ML
Whether you’re building batch pipelines or already including real-time features in your ML initiatives, Tecton solves the data and engineering hurdles that keep development time painfully high and, in many cases, prevent predictive applications from ever reaching production at all, including:
- Training-serving skew
- Point-in-time correctness
- Productionizing notebooks
- Real-time transformations
- Melding batch + real-time data
- Latency constraints
- Siloed workflows between data scientists and data engineers
- Limited discovery and re-use of features across teams
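Point-in-time correctness and training-serving skew are closely related: a training dataset must only contain feature values that were actually known before each label's timestamp, or the model learns from leaked future information. The idea can be sketched in a few lines of pandas (an illustrative example with hypothetical column names, not Tecton's implementation):

```python
# Illustrative sketch (not Tecton's implementation): a point-in-time-correct
# join takes, for each training label, the most recent feature value known
# at or before the label's timestamp -- never a future value.
import pandas as pd

# Hypothetical labels: fraud outcomes observed at given times.
labels = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event_ts": pd.to_datetime(["2024-01-10", "2024-01-20", "2024-01-15"]),
    "is_fraud": [0, 1, 0],
}).sort_values("event_ts")

# Hypothetical feature snapshots: rolling transaction counts per user.
features = pd.DataFrame({
    "user_id": [1, 1, 2],
    "feature_ts": pd.to_datetime(["2024-01-05", "2024-01-15", "2024-01-01"]),
    "txn_count_7d": [3, 9, 1],
}).sort_values("feature_ts")

# direction="backward" only looks at feature rows at or before event_ts,
# which is what prevents future data from leaking into training.
training_set = pd.merge_asof(
    labels, features,
    left_on="event_ts", right_on="feature_ts",
    by="user_id", direction="backward",
)
print(training_set[["user_id", "event_ts", "txn_count_7d"]])
```

Doing this by hand for every feature and every model is exactly the kind of error-prone plumbing a feature platform automates.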
How it Works
Sitting on top of the Snowflake Data Cloud and its powerful processing engine for Python and SQL, Tecton’s feature platform enables data engineers and data scientists to build production-ready feature pipelines and serve features at scale across teams, systems, and models, with only a few lines of code.
Under the hood, Tecton abstracts and automates the complex process that transforms raw data from batch, streaming, or real-time sources into features used to train ML models and feed predictive applications in production. Managing the ML feature lifecycle with Tecton not only ensures that feature materializations are always consistent, offline for training and online for real-time inference, but also that they are stored in a searchable repository for easy sharing and re-use across teams and use cases.
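The core of the "features as code" approach is that a single registered transformation feeds both the offline training path and the online serving path, so the logic can never drift between the two. A minimal sketch of that pattern, using hypothetical names rather than the actual Tecton SDK:

```python
# Minimal sketch of "features as code" (hypothetical names, not the Tecton
# SDK): one registered transformation is reused for both offline training
# and online serving, eliminating training-serving skew by construction.
FEATURE_REGISTRY = {}

def feature(name):
    """Register a feature transformation under a shared name."""
    def wrap(fn):
        FEATURE_REGISTRY[name] = fn
        return fn
    return wrap

@feature("amount_zscore")
def amount_zscore(txn, stats):
    # The same code runs in the batch training pipeline and at request time.
    return (txn["amount"] - stats["mean"]) / stats["std"]

def compute_training_features(rows, stats):
    """Offline path: build training columns from historical rows."""
    return [{n: f(r, stats) for n, f in FEATURE_REGISTRY.items()} for r in rows]

def serve_features(request, stats):
    """Online path: compute the same features for a live request."""
    return {n: f(request, stats) for n, f in FEATURE_REGISTRY.items()}

stats = {"mean": 50.0, "std": 10.0}
history = [{"amount": 60.0}, {"amount": 40.0}]
print(compute_training_features(history, stats))
print(serve_features({"amount": 70.0}, stats))
```

In a real deployment the platform also handles orchestration, backfills, and low-latency storage behind these two paths; the registry is what makes features discoverable and reusable across teams.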