Deploy machine learning applications to production in minutes, rather than months. Automate the transformation of raw data, generate training data sets, and serve features for online inference at scale.
From the Creators of Uber Michelangelo, the ML Platform powering every model at Uber. Now Trusted by Leading ML Teams.
Without the right tooling, data teams have been stitching together disparate systems to deploy machine learning into production, spending months, if not years, in the process.
Tecton works best with structured data. Define features in Python files using a declarative framework, and manage those features through a Git repository.
from datetime import datetime, timedelta
from tecton import batch_feature_view

# Specify inputs, entities, and compute configuration
@batch_feature_view(
    description="Whether the user has a good credit score, updated daily",
    sources=[credit_scores],
    entities=[user],
    feature_start_time=datetime(2020, 10, 1),
    batch_schedule=timedelta(days=1),
)
# Define transformation logic
def user_credit_quality(credit_scores):
    return f'''
        SELECT
            USER_ID,
            IF (CREDIT_SCORE > 700, 1, 0) AS USER_HAS_GOOD_CREDIT,
            TIMESTAMP
        FROM
            {credit_scores}
        '''
from datetime import datetime, timedelta
from tecton import stream_feature_view, Aggregation

# Specify inputs, entities, and compute configuration
@stream_feature_view(
    description="Mean transaction amount of last hour, 24h and 72h, updated every 10 min",
    source=transactions,
    entities=[user],
    feature_start_time=datetime(2020, 10, 1),
    # Define logic for time-window aggregations
    aggregation_interval=timedelta(minutes=10),
    aggregations=[
        Aggregation(column='AMOUNT', function='mean', time_window=timedelta(hours=1)),
        Aggregation(column='AMOUNT', function='mean', time_window=timedelta(hours=24)),
        Aggregation(column='AMOUNT', function='mean', time_window=timedelta(hours=72)),
    ],
)
# Define query to use in the transformation
def mean_transaction_amount(transactions):
    return f'''
        SELECT
            USER_ID,
            AMOUNT,
            TIMESTAMP
        FROM
            {transactions}
        '''
from tecton import on_demand_feature_view, RequestSource
from tecton.types import Field, Float64, Bool

# Indicate schema of incoming request-time data (e.g. live transaction data)
transaction_request = RequestSource([Field('amount', Float64)])

# Specify inputs and schema of the output
@on_demand_feature_view(
    description="Whether the current transaction amount is higher than the user's weekly average.",
    sources=[transaction_request, historical_metrics],
    schema=[Field('transaction_amount_is_higher_than_average', Bool)],
)
# Define transformation logic
def transaction_amount_is_higher_than_average(transaction_request, historical_metrics):
    result = {}
    result['transaction_amount_is_higher_than_average'] = (
        transaction_request['amount'] >= historical_metrics['amount_7d_mean'])
    return result
Tecton automatically orchestrates data pipelines to continuously process and transform raw data into features.
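The pipelines follow from the declarations themselves. A minimal sketch, assuming Tecton's online/offline materialization flags (not shown in the snippets above): turning them on for the batch feature view tells Tecton to backfill the feature from feature_start_time and then run incremental jobs on the batch_schedule.

from datetime import datetime, timedelta
from tecton import batch_feature_view

@batch_feature_view(
    description="Whether the user has a good credit score, updated daily",
    sources=[credit_scores],
    entities=[user],
    online=True,    # materialize to the online store for low-latency serving
    offline=True,   # materialize to the offline store for training data
    feature_start_time=datetime(2020, 10, 1),  # backfill starts here
    batch_schedule=timedelta(days=1),          # incremental jobs run daily
)
def user_credit_quality(credit_scores):
    return f'''
        SELECT USER_ID, IF (CREDIT_SCORE > 700, 1, 0) AS USER_HAS_GOOD_CREDIT, TIMESTAMP
        FROM {credit_scores}
        '''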
Tecton stores feature values consistently across training and serving environments. Easily retrieve historical features to train models, or serve the latest features for online inference.
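A minimal sketch of both paths, assuming a hypothetical 'prod' workspace and a 'fraud_detection_feature_service' that serves the features defined above:

import pandas as pd
import tecton

# Hypothetical workspace and feature service names
ws = tecton.get_workspace('prod')
fs = ws.get_feature_service('fraud_detection_feature_service')

# Training: point-in-time correct historical features for a set of labeled events
training_events = pd.DataFrame({
    'USER_ID': ['user_268308151877'],
    'TIMESTAMP': [pd.Timestamp('2021-06-01 10:00:00')],
    'IS_FRAUD': [0],
})
training_data = fs.get_historical_features(training_events).to_pandas()

# Online inference: the latest feature values for one user, plus the request-time
# data consumed by the on-demand feature view above
feature_vector = fs.get_online_features(
    join_keys={'USER_ID': 'user_268308151877'},
    request_data={'amount': 72.67},
).to_dict()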
Continuously monitor data pipelines, serving latency, and processing costs. Automatically resolve issues and control the quality, cost and reliability of your machine learning applications.
Save months of work by replacing bespoke data pipelines with robust pipelines that are created, orchestrated, and maintained automatically.
Improve your model’s accuracy by using real-time features and minimize sources of error with guaranteed data consistency between training and serving.
Increase your team’s efficiency by sharing features across the organization and standardizing all of your machine learning data workflows in one platform.
Serve features in production at extreme scale with the confidence that systems will always be up and running. Tecton meets strict security and compliance standards.
Tecton makes it easy to deploy and operate machine learning with a managed, cloud-native service.
Tecton is built for scale, delivering median latencies of ~5ms and supporting over 100,000 requests per second.
Tecton is not a database or a processing engine. It plugs into and orchestrates on top of your existing storage and processing infrastructure.
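For example (an illustrative sketch; the database and table names are hypothetical), the credit_scores source used by the batch feature view above can point directly at a table you already maintain:

from tecton import BatchSource, HiveConfig

# Register an existing table as a Tecton data source; Tecton reads from it
# during materialization but does not copy or own the underlying data.
credit_scores = BatchSource(
    name='credit_scores',
    batch_config=HiveConfig(
        database='fraud',
        table='credit_scores',
        timestamp_field='TIMESTAMP',
    ),
)

Stream sources (for example Kinesis or Kafka) and warehouse tables are registered the same way through their respective configs.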
Tecton authenticates users via SSO and includes support for access control lists. It supports GDPR compliance in your ML applications and is SOC 2 Type 2 certified.
Try Tecton, the Fully Managed Feature Platform
Interested in trying Tecton? Leave us your information below and we’ll be in touch.