Tecton

Help us build the future of enterprise ML

Software Engineer, Data Infrastructure

United States, Remote

About Tecton

At Tecton, we are building an enterprise Feature Store that is transforming the way companies solve real-world problems with machine learning at scale. Our founding team created Uber's Michelangelo ML Platform, which has become the blueprint for modern ML platforms in large organizations. We recently received Series B funding from Sequoia Capital and Andreessen Horowitz, have paying enterprise customers, and have growing teams in SF and NYC. The team has years of experience building and operating business-critical machine learning systems at scale at places like Uber, Google, Facebook, Airbnb, Twitter, and Quora.

Tecton's ability to process high volumes of data at scale while remaining performant and resilient to failures is a key component of the product and central to our design decisions. Our team's data culture is driven by engineers who have worked on major projects such as Google Search and Indexing, Apache Airflow, and Instagram's ML platform.

As an early member of Tecton's Data Infrastructure team, you will help lay the foundation for scaling Tecton. We are looking for exceptional software engineers with a systematic problem-solving approach who are driven to find simple solutions to complex challenges.

This position is open to candidates based anywhere in the United States. You can work from one of our hub offices in San Francisco or New York City, or fully remote from elsewhere in the US.

Role & Responsibilities

  • Design, build, and maintain our real-time, streaming, and batch data pipelines
  • Optimize the end-to-end performance of our distributed systems
  • Improve our real-time stream compute capabilities
  • Build out our Spark/Kafka/Flink ecosystem
  • Pioneer new approaches to data pipelines and workflow orchestration
  • Build and maintain scalable, reliable storage and compute services to serve our growing customer list
  • Automate capacity management and tracking

Preferred Qualifications

  • BS/MS/PhD in Computer Science or a related field
  • 4+ years of professional software engineering experience
  • Experience with building large scale, distributed data pipelines and data applications
  • Experience with building batch or streaming machine learning inference pipelines
  • Experience with Spark, Kafka, Flink, and similar tools
  • Experience with cloud technologies, e.g., AWS, GCP, Kubernetes
  • Experience with open-source and commercial products in the data, MLOps and cloud infrastructure space
Apply Now