ML Engineer IC3

Location

San Francisco Bay Area, California, United States

Salary

$57,500 - $92,500 a year (USD)

Description

Location

🌎 While we are an all-remote company and hire almost anywhere in the world, we prefer candidates for this role to reside in the locations listed below. However, if you feel qualified, we welcome you to apply regardless of location. In any case, your working hours must overlap with PST for at least 20 hours/week.

Preferred locations:

  • Hybrid - San Francisco

Why this job is exciting

We recently created a machine learning team at Sourcegraph, aimed at creating the most powerful coding assistant in the world. Many companies are trying, but Sourcegraph is uniquely differentiated by our rich code intelligence data and powerful code search platform. In the world of prompting LLMs, context is everything, and Sourcegraph’s context is simply the best you can get: IDE-quality, global-scale, and served lightning fast. Our code intelligence, married with modern AI, is already providing a remarkable alpha experience, and you can help us unlock its full potential.

We are looking for an experienced full-stack ML engineer with demonstrated industry experience productionizing large-scale ML models in industrial settings. And if you happen to have an entrepreneurial streak, you’re in luck: we have an enterprise distribution pipeline, so whatever you build can be deployed straight to enterprise customers with some of the largest code bases in the world, without the go-to-market hassle you’d encounter at a startup.

As an IC on our ML team, you will be an engineer at Sourcegraph doing R&D and pushing the boundaries of what AI can do. You will have the full power of Sourcegraph’s Code Intelligence Platform at your disposal, and you’ll be working on a coding assistant that multiplies developer productivity to unprecedented levels.

📅 Within one month, you will…

  • Start building trusting relationships with your peers and learning the company structure.
  • Be set up to do local development, and be actively prototyping.
  • Dive deep into how AI and ML are already used at Sourcegraph and identify ways to improve going forward.
  • Develop simulated datasets using Gym-style frameworks across a number of Cody use cases.
  • Experiment with changes to Cody prompts and context sources, and evaluate those changes against offline experimentation datasets (see the sketch after this list).
  • Ship a substantial new feature to end users.
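Purely as an illustration of the kind of offline prompt experimentation described above, and not Sourcegraph’s actual tooling, here is a minimal sketch of a harness that compares a baseline prompt against a context-enriched prompt on a tiny offline dataset. The `Example` dataclass, the stubbed `call_llm`, and the exact-match metric are assumptions made for the sake of the example.

```python
# Minimal sketch of an offline prompt-experiment harness (illustrative only).
from dataclasses import dataclass
from typing import Callable


@dataclass
class Example:
    question: str  # user request, e.g. "write a function that reverses a list"
    context: str   # code context, e.g. from a code intelligence source
    expected: str  # reference answer used for scoring


def call_llm(prompt: str) -> str:
    """Stub for an LLM call; a real harness would hit a model endpoint."""
    # Pretend the model only gets the answer right when it is given code context.
    return "def reverse(xs): return xs[::-1]" if "Context:" in prompt else "not sure"


def exact_match(prediction: str, expected: str) -> float:
    return 1.0 if prediction.strip() == expected.strip() else 0.0


def evaluate(template: Callable[[Example], str], dataset: list[Example]) -> float:
    """Average score of one prompt template over the whole offline dataset."""
    scores = [exact_match(call_llm(template(ex)), ex.expected) for ex in dataset]
    return sum(scores) / len(scores)


baseline = lambda ex: ex.question
with_context = lambda ex: f"Context:\n{ex.context}\n\nTask: {ex.question}"

dataset = [
    Example(
        question="write a function that reverses a list",
        context="# utils.py\n...",
        expected="def reverse(xs): return xs[::-1]",
    )
]

print("baseline:     ", evaluate(baseline, dataset))      # 0.0 in this toy setup
print("with context: ", evaluate(with_context, dataset))  # 1.0 in this toy setup
```

Comparing template variants against a fixed offline dataset like this is what lets prompt and context changes be evaluated before they ever reach users.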

📅 Within three months, you will…

  • Be building out feature computation, storage, monitoring, analysis, and serving systems for the features required across our Cody LLM stack (see the feature-pipeline sketch after this list).
  • Be contributing actively to the world’s best coding assistant.
  • Be developing distributed training and experiment infrastructure over Code AI datasets, and scaling distributed backend services to reliably support high-QPS, low-latency use cases.
  • Be following all the relevant research, and conducting research of your own.
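As a hedged sketch of what a feature computation, storage, and serving loop can look like in miniature (the in-memory store, the `num_python_files` feature, and the repository names are illustrative assumptions, not a description of Sourcegraph’s stack):

```python
# Minimal sketch of a feature pipeline: compute, store, and serve (illustrative only).
import time


class InMemoryFeatureStore:
    """Toy stand-in for a real feature store with batch compute and monitoring."""

    def __init__(self) -> None:
        self._features: dict[str, dict[str, dict[str, object]]] = {}

    def write(self, entity_id: str, name: str, value: object) -> None:
        self._features.setdefault(entity_id, {})[name] = {
            "value": value,
            "updated_at": time.time(),  # timestamp enables basic freshness monitoring
        }

    def read(self, entity_id: str, name: str) -> object:
        return self._features[entity_id][name]["value"]


def compute_repo_features(store: InMemoryFeatureStore, repo: str, files: list[str]) -> None:
    """Toy batch job: derive a feature from raw data and persist it for serving."""
    store.write(repo, "num_python_files", sum(f.endswith(".py") for f in files))


store = InMemoryFeatureStore()
compute_repo_features(store, "example/repo", ["main.py", "README.md", "eval.py"])
print(store.read("example/repo", "num_python_files"))  # -> 2
```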

📅 Within six months, you will…

  • Be fully ramped up and owning key pieces of the assistant.
  • Be ramped up on other relevant parts of the Sourcegraph product.
  • Be helping design and build what might become the biggest dev accelerator in 20 years.
  • Be owning a number of ML systems, and building the core data and model-metadata systems that power the end-to-end ML lifecycle.
  • Be developing a highly scalable, high-QPS inference service that delivers low-latency performance on a mix of CPU and GPU hardware to use resources as efficiently as possible (see the batching sketch after this list).
  • Be driving the technical vision and owning a couple of major ML components, including their modeling and ML infrastructure roadmaps.
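For a flavor of the high-QPS, low-latency serving work, here is a minimal, assumption-laden sketch of request micro-batching with asyncio. The `batch_infer` stub stands in for a real model call, and a production service would add GPU scheduling, timeouts, back-pressure, and monitoring on top of this.

```python
# Minimal sketch of micro-batching for a low-latency inference service (illustrative only).
import asyncio


async def batch_infer(inputs: list[str]) -> list[str]:
    """Stub for a batched model call; batching amortizes per-request overhead."""
    await asyncio.sleep(0.01)  # pretend this is the model forward pass
    return [f"completion for: {x}" for x in inputs]


class BatchingServer:
    """Collects concurrent requests into micro-batches before calling the model."""

    def __init__(self, max_batch: int = 8, window_s: float = 0.005) -> None:
        self.queue: asyncio.Queue = asyncio.Queue()
        self.max_batch = max_batch
        self.window_s = window_s

    async def infer(self, prompt: str) -> str:
        future = asyncio.get_running_loop().create_future()
        await self.queue.put((prompt, future))
        return await future

    async def worker(self) -> None:
        while True:
            batch = [await self.queue.get()]
            deadline = asyncio.get_running_loop().time() + self.window_s
            while len(batch) < self.max_batch:
                remaining = deadline - asyncio.get_running_loop().time()
                if remaining <= 0:
                    break
                try:
                    batch.append(await asyncio.wait_for(self.queue.get(), remaining))
                except asyncio.TimeoutError:
                    break
            outputs = await batch_infer([prompt for prompt, _ in batch])
            for (_, future), output in zip(batch, outputs):
                future.set_result(output)


async def main() -> None:
    server = BatchingServer()
    worker = asyncio.create_task(server.worker())
    print(await asyncio.gather(*(server.infer(f"request {i}") for i in range(4))))
    worker.cancel()


asyncio.run(main())
```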

About you

You are an experienced full-stack ML engineer with demonstrated industry experience formulating ML solutions, developing end-to-end data orchestration pipelines, deploying large-scale ML models, and experimenting offline and online to drive business impact for Cody users. You want to be part of a world-class team pushing the boundaries of AI, with a particular focus on leveraging Sourcegraph’s code intelligence to leapfrog competitors.

  • You have 5-8 years of industry experience
  • You are a backend focused ML engineer who has worked on the entire ML lifecycle
  • You have deployed ML models to production to users and have developed feature pipelines
  • You understand the nuances of building ML for end users and how to move product metrics forward

Your working hours overlap with 8am-4pm PT for at least 20 hours per week so we have time to collaborate synchronously when necessary.

Level

📊 This job is an IC3. You can read more about our job leveling philosophy in our Handbook.

Compensation

💸 We pay you an above-average salary because we want to hire the best people who are fully focused on helping Sourcegraph succeed, not worried about paying bills. As an open and transparent company that values competitive compensation, our compensation ranges are visible to every single Sourcegraph teammate.

To determine your salary, we use a number of market and data-driven salary sources, along with your location zone, and we target the high end of the range to ensure we’re always paying above market regardless of where you live in the world. Both U.S. and international locations are divided into one of four zones, determined by the cost-of-labor index for each area. The starting salary for a successful candidate will be based on level, job-related skills, experience, qualifications, and location zone. Please note that these salary ranges may be adjusted in the future.

💰The target compensation for this role is $185,000 USD base.

Please speak with a recruiter for additional information regarding zone locations.

📈 In addition to our cash compensation, we offer equity (because when we succeed as a company, we want you to succeed, too) and generous perks & benefits.

Interview process

Below is the interview process you can expect for this role (you can read more about the types of interviews in our Handbook). It may look like a lot of steps, but rest assured that we move quickly and the steps are designed to help you get the information needed to determine if we’re the right fit for you… Interviewing is a two-way street, after all! 

We expect the interview process to take 5.5 hours in total.

👋 Introduction Stage - we have initial conversations to get to know you better…

🧑‍💻 Team Interview Stage - we then delve into your experience in more depth and introduce you to members of the team, including cross-functional partners…

  • [60m] ML Depth Interview
  • [60m] ML Breadth & ML Systems
  • [15m + async] Pairing Exercise

🎉 Final Interview Stage - we move you to our final round, where you gain a better understanding of our business and values holistically…

  • [30m] Values
  • [30m] Leadership with co-founder 
  • We check references and conduct your background check

Please note - you are welcome to request additional conversations with anyone you would like to meet but didn’t get to meet during the interview process.




Job type:

Remote job

Tags

  • design
  • training
  • full-stack
  • technical
  • recruiter
  • support
  • code
  • assistant
  • engineer
  • backend
  • digital nomad