Tecton

Workshop: How to Build a Real-Time Fraud Detection Application With a Feature Platform

apply(conf) - May '23 - 60 minutes

In this workshop, we’ll show how to build a real-time fraud detection system using Tecton’s Feature Platform. We’ll walk through the process of building, deploying, and serving real-time data pipelines.

We’ll present common architectural patterns and explore three categories of features you’ll typically need in your real-time fraud model:

  • Batch pre-computed features
  • Streaming-based features
  • Real-time features

You will learn how to:

  • Build new features
  • Automate the transformation of batch data
  • Automate the transformation of streaming and real-time data
  • Create training datasets
  • Serve data online using DynamoDB or Redis
  • Build a fraud detection system using Tecton

Vince Houdebine:

All right. Welcome everyone to this workshop to close off a great day of conferences and talks. I’m your host, Vince. I’m a solutions architect here at Tecton, based out of New York, originally from France, and my background’s in data science and machine learning. So today, this workshop is titled How to Build a Real-Time Fraud Detection System or Application with a Feature Platform. So we have 60 minutes ahead of us and what we’ll do in the next 60 minutes is we’ll walk through an example of building a real-time fraud detection system from scratch. So we’ll start from a blank slate and then I’ll highlight how feature platforms like Tecton can help speed up the process and also reduce the engineering effort that’s associated with standing up such a system.

So we’re going to walk through the creation of a real-time fraud detection application with Tecton. And the session’s objective is to eventually have a fraud inference service that can be called from our POS application backend. So our POS application backend to the right will call our inference service with some information about the current transaction that’s being done at the point of sale. It’s going to send over the user ID, the amount of the transaction, the merchant and so on and so forth. And then our inference service will return a probability of fraud or our model prediction to make some decision on whether or not the transaction should go through.

All right, so with this inference service, as you can see, the amount of information that we’re passing at request time from the point of sale isn’t going to inform much of the model’s predictions, right? So in general, these inference services rely on a second component, which I’m going to call a feature retrieval service. So in real time, this inference service is going to need to go and fetch some precomputed features from this feature retrieval service. Maybe it’s going to need to compute some features in real time about the current transaction and then return a feature vector that the model will then ingest to produce a prediction, right?

So we see the three main components here in our system. Obviously, depending on your organization and your engineering stack, you might break this down into even more microservices. But this is really just a high-level representation of what’s going to happen. And for the inference service, there are many solutions out there that people can leverage to just deploy a trained model object as a real-time API. I can leverage cloud-native services like SageMaker endpoints. I can build my own endpoints with FastAPI or Flask and run that in a container. I can leverage things like Databricks Serverless Real-Time Inference, which is what we’re going to use today just to simplify things a little bit and essentially have a model that is sitting behind an API endpoint.

Now for the feature retrieval service part, this is where I think a lot of the complexity is, right? So this feature retrieval service, it actually needs to retrieve features that are built from a variety of data sources, serve that in real time, perform any real-time computation and then return a feature vector within the right latency contract to that inference service. And there aren’t that many options out there for this feature retrieval service. So a lot of people end up building different bricks and assembling them and putting them together to achieve the right results.

So typically what we have here is we have this question mark. To the left, we have our data sources that we can use to build some features like our historical transactions that are stored on S3 or Delta Lake. We also have transactions that are being streamed as they happen through Kinesis. And so the question is, “How do we ingest these different data sources, do the right feature engineering, compute these features so that they can be retrieved at low latency in production?” And what we’ll use today to achieve this is Tecton. So feature platforms I think are starting to become the de facto solution to manage and orchestrate all of the data pipelines, to take some raw data from streaming sources, batch sources, real-time sources, compute feature values and then serve them at low latency to our model APIs.

And so today what we’ll see is how to use Tecton to essentially develop features, deploy them out to production and then feed a live inference service. So here’s what we’ll walk through today in this workshop, and if you have access to a Tecton instance, you can try to follow along. If not, you can just watch the screen, sit back and relax. The first thing we’ll do is develop and test features. So we will follow a typical workflow of a data scientist having to build a real-time fraud detection service.

So before I can even start thinking about production, I need to train a model. So we’ll develop and test some features directly in a notebook. We’ll train our model. We’ll deploy our features for real time, making sure that they are available at low latency in our production environment. We’ll deploy our model for real time as an inference API and then we’ll put all of it together into a single endpoint that can be called from our POS application, retrieve some features on the fly, feed that into a model and then return a prediction, all of that within a low-latency contract.

And before I jump into my notebook and actually start writing some code, it’s usually good practice to talk with the business and try to understand some of the features that I could build for my application. So before I even start building features, I’ll go and talk to my fraud analyst, and they tell me they found two main fraud patterns based on recent customer disputes. These are what I’m going to focus on to build my features. The first one is that transaction amounts outside of the user’s usual behavior might be indicative of fraud. So we’ll have to build some features to capture this pattern.

And then the second one is many small transactions on a card within a short time window. So if you think about common patterns for fraudsters, when they acquire credit card details, they’re going to try to do several small transactions very quickly on a card, or maybe they’re going to spread many small transactions over a certain window of time so that they go unnoticed. And we’ll need to make sure we can build some features to detect them. So this is the game plan. We’re going to build these features, train a model, deploy these feature pipelines out to production, deploy our model to production as a REST API, and then put all of this together into a single endpoint.

So let’s go ahead and jump into our model development interface, our notebook. In this case, I’m in a Jupyter notebook, but I could be in another development interface. This is really just for convenience. And what we’ll do first is start looking at our raw data. So what I’ll do here is I’ll just read in a file that’s sitting on S3 that contains some transactional data. So here, let’s just load this and display it in my notebook. All right, so this is what we’re working with. This is the historical data that we’re going to need to train our model on. As you can see, it’s a log of transactions that happened at different merchants on different user accounts. And then here, we have our label, the is_fraud column, which is just a binary indicator of fraud.

So we have a user ID column, a transaction ID, a category, the amount of the current transaction, the merchant, the location of the current transaction as well as the timestamp of the current transaction. So if I wanted to go ahead and train a model on this, chances are the model wouldn’t have really great performance. Typically, what I’m going to want to do is enrich this raw data frame with some features that I’m going to define and then compute in this notebook. To do that, we’ll leverage Tecton’s framework. Tecton is a feature platform that’s going to simplify and streamline the development and deployment of features to production.

One of the nice things with Tecton is we have a declarative framework that allows you to define features in a declarative manner, push these features out to Tecton, and then Tecton is going to set up all of the infrastructure and orchestrate all of the jobs to compute these features, update your feature store with fresh feature values and then serve these feature values out to your models. Another nice thing is that Tecton just comes as a Python package that I can import in my notebook, and let me just make this notebook slightly bigger for folks to see. And here what we’re going to do is start defining and then testing Tecton objects directly in our notebook.

So after this is done, we’ll walk through the actual deployment process for features defined in Tecton. But one of the nice things with Tecton is I can actually interactively develop and test features directly in my notebook before even impacting my production feature store or feature platform. So here, let’s just import a couple modules from Tecton.

Speaker 2:

Hey, dude, one thing, all we see is the slides. We don’t actually see the notebook.

Vince Houdebine:

Okay, good catch. Let me-

Speaker 2:

Yeah, we may have to reshare.

Vince Houdebine:

Sharing. Okay, I’m sharing my entire screen now, so I should be good. Can you guys see my notebook?

Speaker 2:

Let’s see. We can. We can. Let me see this. Yeah. Sweet. Maybe, can you make it a little bit bigger? There we go. Awesome. That’s great.

Vince Houdebine:

All right. Can you guys hear me too?

Speaker 2:

Yup, I hear you great.

Vince Houdebine:

All right. Awesome. Thanks for the catch, D. All right, so I knew there were sharing challenges. So obviously, the demo gods weren’t with me today, but let’s just start from scratch. What I did here in this notebook is I just read in a file from S3 and displayed it in my notebook. As you can see, I have just a very simple data frame with a user ID, an amount, a timestamp column to indicate the time of the transaction, and then an is_fraud column, which is going to be my label for my model. So as I mentioned, let’s import the Tecton SDK and start defining and testing Tecton objects in my notebook. So what I’ll do here is I’ll just use this utility function to automatically validate any Tecton object that I define in this notebook. And then the first thing we’re going to do is create what we call a data source. A data source is really just going to be an abstraction of our input data that we’ll then use to build some features on top of.

So here I’m creating a batch source with Tecton. It’s pointing to that same S3 bucket that I read the above file from. And this is the input data that we’re going to use to define some of our batch features. Now, when we define features in Tecton, these features are going to be linked to what we call entities. Entities are representations of business concepts in your organization. And from a technical perspective, your entity is going to become the primary key that indexes all of the features defined against it. So here I’m defining my user entity. It’s my credit card user, and then my join keys are going to be my user ID column that I’m seeing here. So all of my features that relate to my credit card user will have a primary join key that’s going to be my user ID.
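
For reference, here is a rough sketch of what those two definitions might look like with the Tecton SDK; the bucket path is a placeholder and exact parameter names can vary by SDK version:

    import tecton
    from tecton import BatchSource, FileConfig, Entity

    # Validate objects automatically as they are defined in the notebook.
    tecton.set_validation_mode("auto")

    # Batch source pointing at the historical transactions file on S3 (placeholder path).
    transactions_batch = BatchSource(
        name="transactions_batch",
        batch_config=FileConfig(
            uri="s3://your-bucket/transactions.parquet",
            file_format="parquet",
            timestamp_field="timestamp",
        ),
    )

    # Entity whose join key will index every user-level feature.
    user = Entity(name="user", join_keys=["user_id"])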

All right, so what I can do now is run this cell. I’ve defined two objects: one is a data source and one is an entity. So the next thing we’ll want to do is actually start building some of the features that our fraud analyst told us about. The first one, if you remember, was looking at a user’s long-term behavior on a card, for example, “What is the average transaction amount on a card in the last 90 days or in the last 60 days?”, and then comparing this long-term behavior with the current transaction. So what we’ll first do is define a batch feature in Tecton that will be refreshed on a daily basis and that will compute the average transaction amount on a card at different window sizes.

And then once we’ve done this, we’ll define an on-demand feature, which is going to be executed in real time and which is going to compare the current transaction amount with the average transaction amount that’s computed in batch. So here let’s just import some of our Tecton modules to define what we call a batch feature view, along with some utility functions, so we’ll import our datetime functions and we’ll see how these are being used in Tecton’s declarative framework. So here, I’m going to go ahead and define what we call a batch feature view. A batch feature view in Tecton is going to help me define a group of features that are going to be refreshed in batch on a schedule that I specify when defining my feature view.

So in Tecton, feature views are defined through a couple of things. The first one is the function that returns the execution logic defining how my feature will be created, so, “What’s the code that I’m going to apply here?” And then this decorator right here will tell Tecton how to construct the feature: how often I want to refresh these feature values, how far back to backfill these feature values in time, whether I want to apply any additional window aggregations using Tecton’s aggregation framework, and so on and so forth. So this decorator right here is really what’s going to tell Tecton how to spin up all of these jobs to populate my feature store with fresh feature values essentially.

So here, I’m adding in a couple of different attributes to our decorator. The first one is going to be the sources. These are the data sources that we’ll use as inputs to this feature group or feature view, then the entities that this feature view is going to relate to, and then the mode here is going to be set to Spark SQL. So any logic that I’m going to define in this user transaction aggregates function is going to be written in Spark SQL. So here, let’s just write a simple select statement in Spark SQL. And so what we have here is essentially just a select statement that’s going to select my user ID, my amount and then my timestamp columns from my transactions data source.

Now if you remember, what I wanted to do is actually compute some aggregates on top of this. So I want to compute some window aggregates, essentially, “What’s the mean of the amount column for a 30-day rolling window?” And Tecton has a declarative framework that allows you to simplify the definition of these aggregations. So here, what I can do is define a list of aggregations in my batch feature view that will be computed on top of this query right here. So in my aggregation list here, I only have one aggregation. It’s going to aggregate on the amount column, apply a mean function, and apply this function over a 30-day window.

So as you can see, I don’t really have to write any complex window function in my Spark SQL code. I only have to pass this information to Tecton and then Tecton will compute these aggregations for me in a way that’s actually optimized for production. So here, I’m just going to add in a few things. I’ll say, “This 30-day aggregation needs to be refreshed on a daily basis.” So the aggregation interval is set to one day. And then here I’ll say that this feature needs to be backfilled up until the 1st of January 2021. So in terms of computing historical versions of my feature values, all I have to do is provide Tecton with this feature start time and then Tecton is going to automatically spin up all of the backfill jobs to compute these feature values in the past, so that I can use them to train a model.

I can add some additional information right here. So what I’m going to say is that this feature needs to actually be written to my online store, which in this case is going to be a DynamoDB instance. It could be Redis as well. And here we’re also going to write these feature values to our offline store for training dataset generation. I can also add in some metadata like my feature name, a description and so on and so forth. But this is essentially what we’re working with here. We defined a simple Spark SQL query. And then using this batch feature view decorator, we’re telling Tecton how to compute the features that we wanted.
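
Putting those pieces together, a sketch of the batch feature view described above might look like this; the function and object names are illustrative rather than the exact ones used in the workshop:

    from datetime import datetime, timedelta
    from tecton import batch_feature_view, Aggregation

    @batch_feature_view(
        sources=[transactions_batch],
        entities=[user],
        mode="spark_sql",
        # Refresh the rolling aggregate once a day.
        aggregation_interval=timedelta(days=1),
        # Mean transaction amount over a 30-day rolling window.
        aggregations=[Aggregation(column="amount", function="mean", time_window=timedelta(days=30))],
        # Backfill feature values from this date and write to both the online and offline stores.
        feature_start_time=datetime(2021, 1, 1),
        online=True,
        offline=True,
        description="Rolling transaction amount aggregates per user",
    )
    def user_transaction_aggregates(transactions):
        return f"""
            SELECT user_id, amount, timestamp
            FROM {transactions}
        """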

So what I can do is go ahead and execute this cell. All right, my bad. I need to pass in the source right here. So here, I’ve defined my batch feature view. And what I want to do as a data scientist is try this out on my input data frame, right? “Can I see a sample of feature values that are computed using this logic?” So what I can do here is take my batch feature view and use Tecton’s get_historical_features function to compute these feature values and enrich our spine data frame, that is, the initial data frame that I showed you, with these new feature values. Then we’re going to call to_spark() on this to get the output as a Spark data frame and call display to show a sample of that data directly in my notebook.
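
A minimal sketch of that test call, assuming spine_df is the raw transactions data frame loaded earlier:

    # Enrich the spine with point-in-time correct values of the new feature,
    # computing directly from the raw source since nothing is materialized yet.
    features_df = user_transaction_aggregates.get_historical_features(spine_df, from_source=True)
    display(features_df.to_spark())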

Okay. So when I run this, Tecton is essentially just going to validate every single object that I defined. It’s going to validate the input data source, the schema of my feature and so on and so forth, then compute my feature on the fly and return this enriched data frame. So this is exactly the same data frame we had previously, except that Tecton has now computed this feature, the mean transaction amount per user over a 30-day window, refreshed on a daily basis. Okay, so just with this simple feature definition, I can essentially tell Tecton how to compute my aggregations on my dataset and then enrich an existing data frame with some feature values.

So actually, right now, this is only computing one feature, so I think we can probably do better. So instead of having just one aggregation right here, what we’ll do is compute several kinds of aggregations with several window sizes. So here we’ll do one-day, 30-day, 60-day, 90-day and 120-day windows, and then we will compute different aggregation functions like the min, the max, the mean amount, the sum, the standard deviation and so on and so forth. And so now what we can do is define a list of aggregations like this that we’re going to pass to Tecton, as shown in the sketch below. So we’re going to change our feature definition and pass in our aggregation list.
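
One way to build that aggregation list is to pair every window size with every function; the exact function identifiers accepted by the SDK (for example "stddev_samp") may differ slightly by version:

    from datetime import timedelta
    from tecton import Aggregation

    window_sizes = [1, 30, 60, 90, 120]                       # in days
    functions = ["min", "max", "mean", "sum", "stddev_samp"]  # aggregation functions

    aggregations = [
        Aggregation(column="amount", function=fn, time_window=timedelta(days=days))
        for days in window_sizes
        for fn in functions
    ]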

Okay, so here essentially for each value pair in these two lists, we’re going to compute an aggregation in Tecton. So that’s going to compute a good amount of features that we think can help our prediction model. All right, so now we’ve defined our new aggregations and our new batch feature view. So let’s go ahead and compute these updated feature values. So here we’re going to let this Spark job run, but essentially what we’ve done is, in our notebook, we’ve computed a variety of different lagged aggregations on top of this existing training dataset.

So you’ll see here we have many different features now that have been populated by Tecton with different aggregation functions, different window sizes and so on and so forth. So we’re about 20 minutes in. I already have a good amount of features that are defined with Tecton. And the amount of effort that was required to do that was pretty minimal. Now, this is obviously not enough, right? These features are computed on a daily basis. They’re refreshed every day. They’re catching maybe some long-term behavior of the user, but our fraudsters are smarter than this. What they do actually needs to be measured in real time or in near real time.

And so we’re going to need to compute more features than just these batch features. So what we want here is to understand how far from a user’s usual behavior a certain transaction, the current transaction, is. And a measure of how far a certain value is from its mean is what we call the Z-score. The Z-score essentially takes the current transaction and computes how many standard deviations this current value is away from the mean, so how far a certain feature value is from the normal feature values that we expect for this user.

So here, what we’ll use is an on-demand feature view in Tecton, and on-demand feature views are computed from data that’s passed in the API call, so for example, the current transaction amount, the current transaction timestamp, the current merchant and so on and so forth. And then these on-demand feature views can also depend on some pre-computed data. So our on-demand feature view is actually going to depend on the batch feature view that we just defined before, because what we need to compute our Z-score is the mean and the standard deviation over a certain time window.

So here, let me just import a couple of modules from Tecton. And what I’ll do here is I’ll define a request schema or a request source in Tecton. So basically, this is what our on-demand feature view is going to expect as an input coming in from the payload of the API call to our feature retrieval service. And as you can see, what we’re going to expect is a user ID and then an amount. So based on the user that’s making the transaction and the amount of the transaction, we’re going to compute the Z-score. So here, let’s just add in these two definitions right here. We’re creating a request source object in Tecton. That’s going to be the input of our feature view. And then here, we’re defining the output of our feature view, which is just going to be our Z-score transaction amount feature.

So I’m going to go ahead and paste my feature definition right here for simplicity and then walk you through the definition. So here, as you can see, the decorator is an on-demand feature view decorator, meaning that we’re telling Tecton that these specific features are going to be computed in real time. The description gives some information about our feature, and then if we look at the sources for our on-demand feature view, we’ll have our transaction request, which is essentially the request source object that we defined here. So this is what we’re expecting to get from the API call. And then user transaction aggregates is actually the batch feature view that we defined a little earlier.

So there are two sources here. One that’s coming in real time through the API call and one that’s fetched from the online store. And so here in my logic definition, I’m actually just applying some pandas functions to subtract the mean amount over a 60-day window from the current amount and then divide this by the standard deviation. And what this is going to return is the Z-score of the current transaction based on the user’s behavior in the 60 days prior to the current transaction. So again, what I can do is run this, and then, similarly to what I’ve done before, call my get_historical_features function and compute these features directly in my notebook.
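
A sketch of what that on-demand feature view could look like; the aggregate column names referenced here (amount_mean_60d_1d, amount_stddev_samp_60d_1d) are assumptions about how Tecton names the batch aggregates, so adjust them to the actual output columns:

    from tecton import on_demand_feature_view, RequestSource
    from tecton.types import Field, Float64, String

    # What the caller sends at request time.
    transaction_request = RequestSource(schema=[Field("user_id", String), Field("amount", Float64)])
    output_schema = [Field("transaction_amount_z_score", Float64)]

    @on_demand_feature_view(
        sources=[transaction_request, user_transaction_aggregates],
        mode="pandas",
        schema=output_schema,
        description="Z-score of the current amount vs. the user's 60-day history",
    )
    def transaction_amount_z_score(transaction_request, user_transaction_aggregates):
        import pandas as pd

        df = pd.DataFrame()
        # (current amount - 60-day mean) / 60-day standard deviation
        df["transaction_amount_z_score"] = (
            transaction_request["amount"] - user_transaction_aggregates["amount_mean_60d_1d"]
        ) / user_transaction_aggregates["amount_stddev_samp_60d_1d"]
        return df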

So while this is going to get executed, I just want to recap what we’ve done here. We’ve computed some features in batch. These features are some long-term aggregations that summarize a user’s behavior on an account. For example, the mean amount over the 90-day window. And then we’ve also defined an on-demand feature that will compute these feature values in real time and then merge real-time data with some precomputed feature values. Here, if we go to our last column, we’ll see that we have our Z-score of the current transaction that’s been computed and then that’s been populated for these historical values.

So quick time check here. We’re 30 minutes in. We have a couple features that have been defined. We’ll just define one more feature and then we will actually just train our model, deploy our model, deploy our features and then look at some real-time API calls to these services. So what we’re going to want to do now is tackle that second fraud pattern that our fraud analyst mentioned. Many small transactions on a card in a short time window before the current transaction. So here, what we’re going to want to do is, for a given transaction, we want to return the number of transactions on this card in the 30 minutes prior to the current transaction.

So these are features that can’t be computed in batch. We’re going to need to compute them from our streaming events. And so Tecton has another kind of feature view, the stream feature view, that essentially allows you to define features on event streams similarly to what you’ve done in batch. So here what we’re going to do is point to a Kinesis stream and create a stream source that we’re going to use as an input to our features. I’m not going to walk you through the entire code, but essentially, this stream source is going to point directly to this Kinesis stream. And this is what’s going to allow us to very easily define streaming aggregations on top of these events.

So here, running this, what we’ll do is define a streaming feature view. Same thing as the batch feature view, except, as you can see here, we’re actually applying some PySpark code. So in our feature view definition, we will define a PySpark function. And here, we’re really just selecting three different columns from our stream source: user ID, transaction ID and then timestamp. And what we’re going to do is apply an aggregation, still using Tecton’s aggregation framework, to count the number of transactions on this stream in the 30 minutes prior to the current transaction.

And here, we’re going to recompute these aggregations every five minutes, but we could also decide that we actually have higher freshness requirements and want to compute these in a continuous way, so that every time a new event is pushed to the stream, these aggregations are updated. So here, very simple, we’re still using the same framework that we’ve used before. And I can define my stream feature view right here. Same as before, I can compute historical values for this stream, and I can also compute streaming values for this new feature definition and then watch these new feature values being computed live.
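
As a rough sketch only: the stream source and stream feature view could look something like the following, where the stream name, region, raw-event parser and file path are placeholders, and the exact KinesisConfig parameters may differ by SDK version:

    from datetime import datetime, timedelta
    from tecton import stream_feature_view, StreamSource, KinesisConfig, FileConfig, Aggregation

    def parse_transaction_events(df):
        # Placeholder parser: assume the stream already delivers structured records
        # with user_id, transaction_id, amount and timestamp columns.
        return df

    transactions_stream = StreamSource(
        name="transactions_stream",
        stream_config=KinesisConfig(
            stream_name="transaction-events",
            region="us-west-2",
            post_processor=parse_transaction_events,
            timestamp_field="timestamp",
            initial_stream_position="latest",
        ),
        # Historical log of the same events, used for backfills and training data.
        batch_config=FileConfig(
            uri="s3://your-bucket/transactions.parquet",
            file_format="parquet",
            timestamp_field="timestamp",
        ),
    )

    @stream_feature_view(
        source=transactions_stream,
        entities=[user],
        mode="pyspark",
        # Count transactions per user over a 30-minute window, refreshed every 5 minutes.
        aggregation_interval=timedelta(minutes=5),
        aggregations=[Aggregation(column="transaction_id", function="count", time_window=timedelta(minutes=30))],
        online=True,
        offline=True,
        feature_start_time=datetime(2021, 1, 1),
        description="Number of transactions per user in the last 30 minutes",
    )
    def user_transaction_counts(transactions):
        return transactions.select("user_id", "transaction_id", "timestamp")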

So here again, what we have is still our spine data frame with our training events. And then here we have our transaction counts in the last 30 minutes on these cards. So you’ll see that there are actually a couple that have one transaction on the card in the 30 minutes prior to the current transaction. All right, so what we’ve done here is we’ve defined three main features. One batch feature, one on-demand feature that’s going to be executed in real time and then one streaming feature that’s computing an aggregation on top of streaming events.

So I tested them interactively in my notebook. They seem to be computing the right feature values. So what I want to do now is generate a training dataset that contains all of these features in a point-in-time accurate way, so that I can then train a machine learning model. So what I’m going to do is create what we call a feature service, which is a Tecton object that allows me to expose different groups of features to my model. So here I’ve got my user transaction aggregates, my Z-score for the current transaction and then my user transaction counts, which are going to be returned by this feature service.

So let’s go ahead and run this. And what I can do now is the same as before. I’m going to call the get_historical_features function so that Tecton will enrich my spine data frame with all the different features that I defined. And then I can use this to train a model. So this is going to take a little bit of time here. We’ll see that Tecton is computing three feature views directly from the raw data sources. So these are the batch and stream feature views and then the on-demand feature view, the one that’s going to be executed in real time, which is being computed ad hoc using a Python UDF in Spark.
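
A sketch of the feature service and the training dataset call, reusing the illustrative object names from the earlier sketches:

    from tecton import FeatureService

    # Expose the three groups of features behind a single retrieval service.
    fraud_detection_feature_service = FeatureService(
        name="fraud_detection_feature_service",
        features=[
            user_transaction_aggregates,
            transaction_amount_z_score,
            user_transaction_counts,
        ],
    )

    # Point-in-time correct training dataset: every row of the spine is enriched
    # with the feature values as of that row's timestamp.
    training_data = fraud_detection_feature_service.get_historical_features(
        spine_df, from_source=True
    ).to_spark()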

All right, so now we’ve got our training dataset that’s been enriched with all these different features, whether these are going to be streaming-based features, batch-based features, real-time features. All of these are actually in our training data frame. So quick time check here and I’m going to go ahead and check the questions in the Slack. If you have any questions, please feel free to post them in the Slack. I’m going to be monitoring this and I’ll try to answer some of these questions. All right, I see a question from James. “An on-demand feature view can also take a feature from the offline feature store as an argument, right?”

Typically, when serving values online, on-demand feature views are going to go and fetch the freshest feature values from the online store only. Now when you’re actually leveraging an on-demand feature view offline to generate a training dataset, this is going to point to the offline store. All right, so now we’ve got our training dataset, and I’m not going to walk through the next steps, which are essentially just using Scikit-learn and training a random forest on this input dataset. So I’ll just go ahead and copy and paste my code, but I’m essentially just leveraging Scikit-learn pipelines to do my feature preprocessing, then defining a random forest classifier, training that random forest classifier and tracking all of that using MLflow.

So here I’ve got my train test split. What I’m going to do is use MLflow to track my model training experiment and log my model artifact so that I can then deploy it pretty easily to production. So here I’ve got an experiment that’s already set up. I’m going to create a new run here. And then here, I’m really just fitting my pipeline to my training dataset, doing my cross-validation and computing my performance metrics. All right, so let’s go ahead and run this. It might take a little bit of time. We’ll see that we’ve got one run that’s been added to an experiment in MLflow. If you’re familiar with MLflow, it’s a pretty helpful MLOps framework for experiment tracking, model deployment and so on and so forth.
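
A condensed sketch of that training step, assuming training_data is the enriched Spark data frame returned by the feature service; the experiment path is a placeholder:

    import mlflow
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline

    training_pdf = training_data.toPandas()
    # Use the numeric feature columns; the label is the is_fraud column.
    feature_cols = [c for c in training_pdf.select_dtypes("number").columns if c != "is_fraud"]
    X_train, X_test, y_train, y_test = train_test_split(
        training_pdf[feature_cols], training_pdf["is_fraud"], test_size=0.2, random_state=42
    )

    pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="constant", fill_value=0)),
        ("clf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ])

    mlflow.set_experiment("/Shared/apply_fraud_workshop")  # placeholder experiment path
    with mlflow.start_run():
        pipeline.fit(X_train, y_train)
        auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])
        mlflow.log_metric("test_auc", auc)
        mlflow.sklearn.log_model(pipeline, artifact_path="model")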

So this is what I’m leveraging here. So now we’re going to train our model and the output of this is just going to be a trained model object, right? So what we’ll want is we’ll want to take this model object that we have, our pipeline and then deploy that as an API, but before we do that, we actually have to make sure that the features that I defined earlier in my notebook are going to be available to this model endpoint in production, right? We need to make sure that these feature values are being computed, that they’re fresh, that they’re written to a low-latency key value store, so that I can retrieve them. I need to make sure that any real-time transformation can be applied when trying to retrieve my features and so on and so forth.

And while we’ve done all of the development and testing directly in our Databricks notebook in an interactive way, this is actually not the way that you deploy features with Tecton. Tecton has more of a DevOps or Git-based workflow for defining and deploying features. So now what we’re going to do is switch to our code editor. Here, I’m using VS Code and I created this folder, apply workshop. This is what is going to contain the feature definitions that we’re going to push to Tecton. So here, because we’re starting from scratch, this is an empty repository. I’m going to create a Python file to define my Tecton objects. We’re going to call this apply_definitions.py. And then what I’m going to do inside this file is paste all the feature definitions and all the object definitions that I created in my notebook. So here, essentially I’ve pasted all of my definitions from my notebook into this apply_definitions.py file. And this is what I’m going to push to Tecton, so that Tecton creates all of the right resources and jobs to make all of these features available in production.

So obviously here, we only have one single Python file that has about 200 lines. You’re more than welcome to split your feature definition files as you please. Typically, you could split them by use case or by type of object and so on and so forth. Here, in the interest of time, we’re just keeping everything inside a single file. So now, what we’ll need to do is actually push this feature definition to Tecton, so that Tecton can spin up all of the data pipelines, start the Spark Structured Streaming job to compute my streaming features and so on and so forth.

So what I’ll do here is, I’m connected to my Tecton instance and I have a command-line client that I’m using to interact with Tecton. And what I’m going to do is run tecton workspace create, and we’re going to call the workspace apply-risk. This is going to be a live workspace because we’re going to need to pull features from this workspace in production. So here, I’m creating a new live workspace and this workspace is actually empty right now. So what I want to do is push all these feature definitions to this workspace. And for those of you who are familiar with Terraform, the Tecton CLI has a very similar behavior to Terraform, where I can use tecton plan to have Tecton scan the repository, test any object definition and then create what we call an apply plan. That’s essentially just going to be a plan that summarizes any new object to create, any object to delete, any object to update and so on and so forth.

So here, let’s go ahead and call tecton plan. Okay, we didn’t run tecton init. So here, let me just do pwd. Okay, let’s call tecton init here. So this is going to initialize our repository. And so now I can call tecton plan, and then if everything goes right, you can see that here, Tecton ran some tests. If I had any unit tests on my new feature definitions, Tecton would run them as well as part of this workflow. And here you see that Tecton has created my apply plan. For example, for any batch feature view, you’ll see that Tecton will launch some backfill jobs based on that feature start time that I defined. But right now, it hasn’t done anything. It hasn’t actually pushed these to Tecton.

So what I can do is call tecton apply, which is going to run these tests again, but this time, it’s actually going to apply these changes to my new workspace. So here, because I’m impacting production, Tecton outputs a warning, and as you can see here, all done. So we have our new features that have been pushed to Tecton and we have our feature service that is live. So if I go to my Tecton UI right here, and if I search for my apply workspace, you’ll see that I have my apply-risk workspace that’s been created. And then here, I can explore the UI. What we’ll look at is just this real-time fraud service, which is what we’re going to use to expose these features out to our model.
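
To recap, the CLI flow just described boils down to four commands run from the feature repo directory (workspace name as used in the session):

    tecton workspace create apply-risk --live   # create a live workspace
    tecton init                                 # initialize the feature repo
    tecton plan                                 # run tests and preview the apply plan
    tecton apply                                # apply the plan to the workspace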

So here, just looking at our pipeline, what we can see is just a visual representation of everything we’ve done so far. We’ve got to the left our batch data source, our streaming data source and then here our real time request data source. And then in the green, you have the different features that we’ve defined. And as you can see, these features are going to be made available for my offline models through our Tecton SDK and then they’re going to be made available through an API endpoint for my model inference service. So here, what we have now is we actually have a live API endpoint that we can call to retrieve some features from Tecton.

So what we’ll do here is we’ll just show a quick example, so hopefully everyone can see my terminal here, but I’m just going to send a quick curl POST to my Tecton API just to retrieve some features directly from Tecton. Okay, so I essentially just sent an API call to my feature service endpoint and Tecton returned my feature vector with all the different feature values that had been precomputed and stored in the online store, plus any on-demand features that were computed in real time. So we have about 10 minutes left, and what we have right now is essentially our feature retrieval service. We have an API that we can hit from a model inference service to retrieve feature vectors at low latency.
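
An equivalent call from Python might look like the following; the instance URL, key values and response shape follow Tecton's documented get-features endpoint but should be treated as a sketch:

    import os
    import requests

    url = "https://yourinstance.tecton.ai/api/v1/feature-service/get-features"  # placeholder instance
    payload = {
        "params": {
            "workspace_name": "apply-risk",
            "feature_service_name": "fraud_detection_feature_service",
            "join_key_map": {"user_id": "user_12345"},                          # example user ID
            "request_context_map": {"user_id": "user_12345", "amount": 142.34},
        }
    }
    resp = requests.post(
        url,
        json=payload,
        headers={"Authorization": f"Tecton-key {os.environ['TECTON_API_KEY']}"},
    )
    feature_vector = resp.json()["result"]["features"]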

So what we can do now is try to put everything together and actually deploy our model, making sure that within our deployment code, we add in the call to Tecton to retrieve features. So what I did here is I essentially created a custom MLflow model. So let me make this slightly bigger, but we created an MLflow pyfunc model, which essentially allows you to customize your own MLflow model. And within this new class that I created, my Tecton model, I have a predict method where I can define my model inference code.

So within my model inference code, what I’m doing here is I’m actually calling Tecton to retrieve features and enrich my input query, and then calling my pipeline’s predict_proba function to retrieve an output prediction. So here, essentially within my MLflow model, I’m adding in a call to Tecton to retrieve some features on the fly and then feed these to my Scikit-learn pipeline to predict my probability of fraud. Now obviously, you don’t have to package the model and the feature retrieval code together. You can have the call to Tecton happen in a separate microservice and have the model inference API call that microservice. It’s really up to you.
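
A hedged sketch of such a pyfunc wrapper; get_feature_vector is a hypothetical helper that wraps the feature-service request shown earlier, and the feature ordering is assumed to match the training columns:

    import os

    import mlflow
    import numpy as np
    import requests

    def get_feature_vector(user_id, amount):
        # Hypothetical helper: calls the Tecton feature service and returns the
        # feature values as a list (order must match the model's training columns).
        payload = {
            "params": {
                "workspace_name": "apply-risk",
                "feature_service_name": "fraud_detection_feature_service",
                "join_key_map": {"user_id": user_id},
                "request_context_map": {"user_id": user_id, "amount": amount},
            }
        }
        resp = requests.post(
            "https://yourinstance.tecton.ai/api/v1/feature-service/get-features",
            json=payload,
            headers={"Authorization": f"Tecton-key {os.environ['TECTON_API_KEY']}"},
        )
        return resp.json()["result"]["features"]

    class TectonFraudModel(mlflow.pyfunc.PythonModel):
        """Wraps the trained sklearn pipeline and enriches each request with Tecton features."""

        def __init__(self, pipeline):
            self.pipeline = pipeline  # the sklearn pipeline trained earlier, pickled with the model

        def predict(self, context, model_input):
            # model_input carries the raw request fields sent by the POS backend.
            rows = [
                get_feature_vector(row["user_id"], float(row["amount"]))
                for _, row in model_input.iterrows()
            ]
            return self.pipeline.predict_proba(np.array(rows, dtype=float))[:, 1]

    with mlflow.start_run():
        mlflow.pyfunc.log_model(artifact_path="model", python_model=TectonFraudModel(pipeline))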

So here, we’re essentially just saving all of that. So we’re running everything, we’re saving all of that and we created another MLflow run. So if I click on my run, we’ll see that I have my new custom MLflow model that’s been logged. So we have our model, [inaudible 00:52:26] file and so on and so forth. And what I want to do now is deploy this model out to production, right? I want to make some live REST API calls to this model. And Databricks makes it pretty easy to get a real-time endpoint from a model that you just logged to MLflow. So what I can do here is click on register model and then here we’re going to register this as a new model version.

So I had already created a model in Databricks, and we’re going to register this as a V2 of our model. So our model’s being registered, and once that’s done, what we can do is click on our model version. So we’ve got our model that’s been logged, and then here, we can use Databricks’ deployment capabilities to actually stand up an API endpoint that will serve the predictions from this model. So here, let’s click use model for inference. In this case, we want to do real-time inference. We’re going to select version two of our model. Our endpoint is going to be apply risk workshop. And then here we’re really just experimenting. We don’t want to burn too many DBUs for this workshop, so we’ll just use a small machine.

So let’s just go ahead and create the endpoint. So what Databricks is actually doing here is it’s creating the Docker images and standing up the API endpoint for me. And once this API endpoint is live, I’ll be able to hit the endpoint using this URL right here. So to put everything together, I’ve actually already deployed another endpoint, because this might take a few minutes and I know we’re running out of time. So what we’ll do is we’ll just go ahead and make a curl call here inside my terminal, just to show you that this endpoint is live.

So we’re passing in some inputs, my user ID and my amount, and we’re calling our live Databricks model API. Because my API endpoint scales to zero nodes, it’s going to take a little bit of time to return my prediction, but we’ll see that I’ll be able to send in some more requests. All right, so essentially what we have here is a live Databricks model inference endpoint that fetches features from Tecton in real time to enrich a query with some features, pass that into our model and return a prediction.
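
The same request from Python, with a placeholder workspace URL and endpoint name; the dataframe_records payload is the format Databricks model serving expects for MLflow pyfunc models:

    import os
    import requests

    endpoint_url = (
        "https://your-workspace.cloud.databricks.com/"
        "serving-endpoints/apply-risk-workshop/invocations"
    )
    resp = requests.post(
        endpoint_url,
        json={"dataframe_records": [{"user_id": "user_12345", "amount": 142.34}]},
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    )
    print(resp.json())  # probability of fraud plus the returned feature vector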

Oh, there you go. So here, we’ve got our predictions. I can call this again and see if it will be faster. Yes, it is faster. So now we’ve got our prediction, so we’ll see our probabilities. It looks like this might not be fraud. And then here, as additional context, I’m returning the feature vector that Tecton returns. So these are the features that we fetched from Tecton, including the ones that Tecton computed in real time. All right, so we have about four minutes left. And essentially what we’ve done today in this quick workshop is we’ve gone through a full example of developing features, training a model with these features, and deploying these features out to production so that they can be consumed in real time. We’ve actually created a model that can do lookups to Tecton in real time. And we’ve exposed this model as a live API.

So now we pretty much have a live fraud detection system that can be connected to our point-of-sale application. So for those of you who stuck around up until this point, thanks for joining. I’m going to be looking at the Slack to see if there are any questions, but feel free to ping me with any questions you have. Hopefully, I showed you how Tecton can really simplify this workflow and make it a lot faster. Especially for fraud detection use cases, it’s pretty important that you’re able to iterate very quickly on your model. As fraudsters come up with new fraud tactics very frequently, it’s pretty critical that you can adjust as fast as they do.

So having a feature platform like Tecton that really allows you to speed up the iteration time for defining and then deploying new features can be really helpful, specifically for these fraud and risk use cases. All right, I’m going to stop sharing my screen. Oh, D, you’re still around.

Speaker 2:

I wouldn’t miss it for the world. Man, are you kidding me?

Vince Houdebine:

All right, should we take a few questions?

Speaker 2:

Well, I haven’t seen anything come through, but I mean we can chit-chat. It looks like people are being … They’re being active. I see some people typing, so might as well hang out for a minute and wait for the questions to roll in. Now we’ve got … Do we got anything coming through here? All right. I think it’s just people asking if they can get the links to go through this notebook, all that good stuff.

Vince Houdebine:

Yeah, and Nick brought up a really good point in the chat. Obviously, this was a very short session. If you guys want to get your hands on Tecton and actually run through this example and have a live real-time fraud detection system, we’re going to be running some interactive hands-on workshops in the coming weeks, so be on the lookout for the invites.

Speaker 2:

Awesome, dude. Sweet. Well, I think that this has been very special and I appreciate you teaching us all about Tecton and the strengths that it has. And with that, it is super late where I’m at, so I’m going to have to sign off and say goodnight.

 

Vincent Houdebine

Senior Solutions Architect

Tecton

Vince is a solutions architect at Tecton, where he is helping customers improve their Operational ML data stack and deliver production ML applications with Tecton. Prior to Tecton, Vince was a senior data scientist at Dataiku where he delivered a variety of production ML applications for Dataiku customers, including a real-time warranty fraud detection model for a leading personal computer and printer manufacturer.
