In 2018 we launched an experiment adding machine learning to the ranking algorithms for the social feed of the Cookpad application. The results of this experiment were encouraging for our users; however, the architecture we built for it could not scale beyond a limited number of users. In our next iteration we therefore focused on redesigning the architecture to scale to our global user base, applying the lessons learned from the first experiment.
In this talk we will discuss why a feature store is essential for serving machine learning at scale. We will describe the feature store solution we built, its architecture, and the pipelines that populate it. Finally, we will cover the optimisations made to the feature store so it can serve data for online inference in our production environment.