Deploy machine learning applications to production in minutes, rather than months. Automate the transformation of raw data, generate training data sets, and serve features for online inference at scale.
Without the right tooling, data teams have been stitching together disparate systems to deploy machine learning into production — spending months, if not years, in the process.
Tecton works best with structured data. Define features in Python files using a declarative framework and manage those features through a Git repository.
@batch_feature_view(
    description="Whether the user has a good credit score, updated daily",
    sources=[credit_scores],
    entities=[user],
    feature_start_time=datetime(2020, 10, 1),
    batch_schedule='1d'
)
def user_credit_quality(credit_scores):
    return f'''
        SELECT
            USER_ID,
            IF (CREDIT_SCORE > 690, 1, 0) AS USER_HAS_GOOD_CREDIT,
            TIMESTAMP
        FROM
            {credit_scores}
        '''
@stream_window_aggregate_feature_view(
    description="Mean transaction amount of last hour, 12h, 24h and 72h, updated every 10 min",
    sources=[transactions],
    entities=[user],
    aggregation_slide_period='10min',
    aggregations=[FeatureAggregation(
        column='amount',
        function='mean',
        time_windows=['1h', '12h', '24h', '72h'])],
    feature_start_time=datetime(2020, 10, 1)
)
def mean_transaction_amount(transactions):
    return f'''
        SELECT
            USER_ID,
            AMOUNT,
            TIMESTAMP
        FROM
            {transactions}
        '''
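Conceptually, each of these window aggregations is a trailing mean over the event stream, re-evaluated as the window slides. A minimal sketch in plain Python of what such a lookup computes — illustrative only, not Tecton's actual engine, which maintains these aggregates incrementally:

```python
from datetime import datetime, timedelta

def windowed_means(events, as_of, windows):
    """Mean transaction amount per trailing window, evaluated at `as_of`.

    `events` is a list of (timestamp, amount) tuples; `windows` maps a
    feature name to a trailing timedelta.
    """
    result = {}
    for label, span in windows.items():
        amounts = [amt for ts, amt in events if as_of - span <= ts <= as_of]
        result[label] = sum(amounts) / len(amounts) if amounts else None
    return result

now = datetime(2020, 10, 2, 12, 0)
events = [
    (now - timedelta(minutes=30), 20.0),  # inside every window
    (now - timedelta(hours=6), 40.0),     # outside the 1h window
    (now - timedelta(hours=48), 90.0),    # only inside the 72h window
]
features = windowed_means(events, now, {
    'mean_1h': timedelta(hours=1),
    'mean_12h': timedelta(hours=12),
    'mean_24h': timedelta(hours=24),
    'mean_72h': timedelta(hours=72),
})
# features == {'mean_1h': 20.0, 'mean_12h': 30.0, 'mean_24h': 30.0, 'mean_72h': 50.0}
```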
@on_demand_feature_view(
    description='Whether the transaction amount is considered high ($10,000 or more)',
    inputs={'transaction_request': Input(transaction_request)},
    output_schema=output_schema
)
def transaction_amount_is_high(transaction_request):
    result = {}
    result['transaction_amount_is_high'] = int(transaction_request['amount'] >= 10000)
    return result
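Because on-demand transformations are plain Python, the logic can be exercised locally before deployment. Stripped of the decorator, the function above behaves like any other function:

```python
def transaction_amount_is_high(transaction_request):
    result = {}
    result['transaction_amount_is_high'] = int(transaction_request['amount'] >= 10000)
    return result

transaction_amount_is_high({'amount': 12500})  # {'transaction_amount_is_high': 1}
transaction_amount_is_high({'amount': 9999})   # {'transaction_amount_is_high': 0}
```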
Tecton automatically orchestrates data pipelines to continuously process and transform raw data into features.
Tecton stores feature values consistently across training and serving environments. Easily retrieve historical features to train models, or serve the latest features for online inference.
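The key to training/serving consistency is a point-in-time lookup: training rows see only feature values that were effective at the label's timestamp, while online serving sees the latest value. A toy sketch of that lookup, using a hypothetical in-memory feature log (illustrative only — Tecton's actual retrieval API differs):

```python
from datetime import datetime

# Hypothetical feature log of (user_id, effective_time, value) rows,
# as a feature store might record them over time.
feature_log = [
    ('u1', datetime(2020, 10, 1), 0),
    ('u1', datetime(2020, 10, 5), 1),
]

def feature_as_of(log, user_id, as_of):
    """Return the most recent feature value at or before `as_of` --
    the point-in-time rule that keeps training data leakage-free."""
    rows = [(t, v) for uid, t, v in log if uid == user_id and t <= as_of]
    return max(rows)[1] if rows else None

feature_as_of(feature_log, 'u1', datetime(2020, 10, 3))  # 0 (no leakage from the Oct 5 update)
feature_as_of(feature_log, 'u1', datetime(2020, 10, 6))  # 1 (latest value, as served online)
```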
Continuously monitor data pipelines, serving latency, and processing costs. Automatically resolve issues and control the quality, cost and reliability of your machine learning applications.
Save months of work by replacing bespoke data pipelines with robust pipelines that are created, orchestrated and maintained automatically.
Improve your model’s accuracy by using real-time features, and minimize sources of error with guaranteed data consistency between training and serving.
Increase your team’s efficiency by sharing features across the organization and standardizing all of your machine learning data workflows in one platform.
Serve features in production at extreme scale with the confidence that systems will always be up and running. Tecton meets strict security and compliance standards.
Tecton makes it easy to deploy and operate machine learning with a managed, cloud native service.
Tecton supports best-in-class availability and can scale to serving over 100,000 predictions per second while maintaining <100 ms latency.
Tecton is not a database or a processing engine. It plugs into and orchestrates on top of your existing storage and processing infrastructure.
Tecton authenticates users via SSO and includes support for access control lists. Tecton supports GDPR compliance in your ML applications and is SOC 2 Type 2 certified.
Try Tecton, the Fully Managed Feature Platform
Interested in trying Tecton? Leave us your information below and we’ll be in touch.