Graduate 2026 PhD Software Engineer II (AV Labs), United States
Working at Uber as a Graduate PhD Software Engineer II means taking deep technical expertise in AI, machine learning, and robotics and applying it to high-stakes, real-world autonomous systems. This is not a theoretical exercise; you will be building and deploying production-grade ML systems that operate in complex physical environments, where safety, reliability, and performance directly shape the future of autonomous mobility.
You’ll join AV Labs, a new initiative focused on accelerating the autonomous technology ecosystem by transforming real-world operations into high-quality data and intelligent systems. Our team tackles one of the hardest challenges in autonomy today: unlocking long-tail, real-world driving scenarios. Autonomy is fundamentally a data and systems problem—and Uber brings a unique advantage through its ability to collect rare, high-value data at scale and convert it into actionable intelligence.
As part of this team, you will work at the intersection of machine learning, robotics, and large-scale data systems to develop core components of the autonomous driving stack. This includes perception, prediction, and decision-making systems, as well as the data pipelines and infrastructure that power them. Your work will directly contribute to building safer, more robust autonomous systems capable of operating in the real world.
The pace here is fast, and the problems are deeply complex and multidisciplinary. We are looking for researchers who want to be builders—individuals who can translate cutting-edge research into scalable, production-ready systems. If you are energized by deploying Physical AI in real-world environments and want to own outcomes end-to-end in a high-impact space, this is where you’ll grow.
What you’ll do
- Design, build, and deploy production-grade machine learning systems for autonomous driving applications, including perception, prediction, and decision-making
- Develop and apply advanced techniques in computer vision, deep learning, robotics, and sequential decision-making to handle complex, real-world driving scenarios
- Translate state-of-the-art research into scalable, high-impact solutions for autonomy systems operating in dynamic urban environments
- Build and optimize large-scale data pipelines for sensor data ingestion, processing, and auto-labeling to accelerate model development
- Architect and improve infrastructure for high-throughput training and low-latency inference in safety-critical, real-time systems
- Own your work end-to-end: from problem formulation and modeling to offline evaluation, simulation, production deployment, and continuous iteration
- Identify and solve edge cases and long-tail scenarios to improve system robustness and safety
- Collaborate cross-functionally with engineers across platform, infrastructure, and product teams to deliver integrated autonomy solutions
- Champion engineering excellence through code quality, rigorous testing, reproducibility, and system reliability in safety-critical environments
Basic Qualifications
- Completing or recently completed a PhD in Computer Science, Robotics, Machine Learning, Computer Vision, Electrical Engineering, or a related technical field
Preferred Qualifications
- Strong publication record in top-tier AI, ML, robotics, or computer vision conferences
- Deep knowledge of machine learning for robotics, computer vision, or autonomous systems
- Experience working with large-scale sensor data (e.g., camera, LiDAR) and building data pipelines for ML applications
- Strong proficiency in Python and experience with modern ML frameworks such as PyTorch
- Experience developing or deploying ML models in real-world or safety-critical systems
- Familiarity with C++ and high-performance or real-time systems
- Proven ability to translate research into production-grade systems
- Excellent communication skills, with the ability to explain complex technical concepts to cross-functional stakeholders