Systems and Architecture Archives | Tecton

Enabling rapid model deployment in the healthcare setting

Fireside Chat: Is ML a Subset or a Superset of Programming?

Join Mike and Martin in this fireside chat, where they'll discuss whether ML should be considered a subset or a superset of programming. ML can be viewed as a specialized subset of programming, one that introduces unique requirements on how applications are built and deployed. But ML can also be viewed as a superset of programming, in that the majority of applications being built today can be improved by infusing them with online ML predictions. Mike and Martin will share their thoughts and the implications for ML and software engineering teams.

Lakehouse: A New Class of Platforms for Data and AI Workloads

In this talk, Matei will present the role of the Lakehouse as an open data platform for operational ML use cases. He’ll discuss the ecosystem of data tooling that is commonly used to support ML use cases on the Lakehouse, including Delta Lake, Apache Hudi, and feature stores like Feast and Tecton.

Engineering for Applied ML

Applied ML consists of ML algorithms at its core and engineering systems around that core. For over a decade as an applied ML practitioner, I have built a number of such engineering systems to help unlock the full potential of ML in a variety of problem domains. This talk is about the lessons I’ve learned building those systems and the patterns I’ve found to work well across applications.

DIY Feature Store: A Minimalist’s Guide

A feature store can solve many problems, with various degrees of complexity. In this talk I’ll go over our process to keep it simple, and the solutions we came up with.

Workshop: Bring Your Models to Production with Ray Serve

In this workshop, we will walk through a step-by-step guide to deploying an ML application with Ray Serve. Compared to building your own model servers with Flask or FastAPI, Ray Serve makes it seamless to build and scale out to multiple models and model-serving nodes in a Ray Cluster.

Ray Serve supports inference on CPUs, GPUs (even fractional GPUs!), and other accelerators – using just Python code. In addition to single-node serving, Serve enables seamless multi-model inference pipelines (also known as model composition); autoscaling in Kubernetes, both locally and in the cloud; and integrations between business logic and machine learning model code.

We will also share how to integrate your model serving system with feature stores and operationalize your end-to-end ML application on Ray.

Workshop: Operationalizing ML Features on Snowflake with Tecton

Many organizations have standardized on Snowflake as their cloud data platform. Tecton integrates with Snowflake and enables data teams to process ML features and serve them in production quickly and reliably, without building custom data pipelines. David and Miles will provide a demo of the Tecton and Snowflake integration along with coding examples. Attendees will learn how to:

– Build new features using Tecton’s declarative framework

– Automate the transformation of batch data directly on Snowflake

– Automate the transformation of real-time data using Snowpark

– Create training datasets from data stored in Snowflake

– Serve data online using DynamoDB or Redis

Intelligent Customer Preference Engine with Real-Time ML Systems

In an omni-commerce space such as Walmart, personalization is key to enabling customer journeys tailored to individual needs, preferences, and routines. Moreover, in e-commerce, customers’ needs and intent evolve over time as they navigate and engage with hundreds of millions of products. Real-time, session-aware ML systems are best suited to adapt to such changing dynamics and can power intelligent systems that provide 1:1 personalized customer experiences, from finding a product to delivering it to the customer. In this talk we will look at how we leverage session features to power customer preference engines in real-time applications at Walmart scale.

Training Large-Scale Recommendation Models with TPUs

At Snap, we train a large number of deep learning models every day to continuously improve ad recommendation quality for Snapchatters and provide more value to advertisers. These ad ranking models have hundreds of millions of parameters and are trained on billions of examples. Training an ad ranking model is a computation-intensive, memory-lookup-heavy task that requires a state-of-the-art distributed system and performant hardware to complete reliably and in a timely manner. This session will describe how we leveraged Google’s Tensor Processing Units (TPUs) for fast and efficient training.

Machine Learning, Meet SQL: When ML Comes to the Database

SQL has evolved beyond its relational origins to support non-relational abstractions like arrays, JSON, and geospatial data types, so it shouldn’t surprise us that SQL is now being used to build and serve machine learning models. In this presentation, we’ll review how Google Cloud BigQuery supports regression, classification, forecasting, dimensionality reduction, and collaborative filtering, and we’ll describe its feature processing, hyperparameter tuning, and evaluation functions. The talk concludes with a discussion of good practices for building and serving ML models in Google Cloud BigQuery.

Panel: Common Patterns of the World’s Most Successful ML Teams

There’s a lot we can learn simply by observing the most successful ML teams in the world: how they operate, which technology stacks they use, which skill sets they value, and which processes they implement. In this panel, MLOps thought leaders will come together to share what they’ve learned from speaking with hundreds of leading MLOps teams, and discuss the common patterns they’ve identified across them.

Workshop: Building Real-Time ML Features with Feast, Spark, Redis, and Kafka

This workshop will focus on the core concepts underlying Feast, the open source feature store. We’ll explain how Feast integrates with underlying data infrastructure including Spark, Redis, and Kafka, to provide an interface between models and data. We’ll provide coding examples to showcase how Feast can be used to:

– Curate features in online and offline storage

– Process features in real-time

– Ensure data consistency between training and serving environments

– Serve feature data online for real-time inference

– Quickly create training datasets

– Share and re-use features across models


© Tecton, Inc. All rights reserved. Various trademarks held by their respective owners.

