(#6335684003) Intern, Architecture Research Engineer
What You’ll Learn
- Project Overview: H/W, S/W co-design of AI accelerators
- Skills You’ll Learn:
  - Microarchitecture of AI accelerators
  - Design space exploration of AI accelerators
What You’ll Do
The AGI (Artificial General Intelligence) Computing Lab is dedicated to solving the complex system-level challenges posed by the growing demands of future AI/ML workloads. Our team is committed to designing and developing scalable platforms that can effectively handle the computational and memory requirements of these workloads while minimizing energy consumption and maximizing performance. To achieve this goal, we collaborate closely with both hardware and software engineers to identify and address the unique challenges posed by AI/ML workloads and to explore new computing abstractions that can provide a better balance between the hardware and software components of our systems. Additionally, we continuously conduct research and development in emerging technologies and trends across memory, computing, interconnect, and AI/ML, ensuring that our platforms are always equipped to handle the most demanding workloads of the future. By working together as a dedicated and passionate team, we aim to revolutionize the way AI/ML applications are deployed and executed, ultimately contributing to the advancement of AGI in an affordable and sustainable manner. Join us as we shape the future of computing!
Location: Hybrid, working onsite at our San Jose, CA headquarters 3 days per week, with the flexibility to work remotely the remainder of your time
Reports to: Senior Principal Engineer, AI/ML Computer Architecture
- Research novel architectures for high-performance deep learning applications.
- Research design space exploration methodologies and pruning techniques for efficient architecture search.
- Design algorithms and microarchitectural features that optimize data locality and minimize energy consumption.
- Work closely with the compiler team to integrate new ML techniques and algorithms into the compiler.
- Collaborate with cross-functional teams to define, implement, and deliver microarchitecture features and improvements.
- Contribute to ML architecture research and write research papers.
- Stay up-to-date with the latest trends and advancements in the field of ML architectures.
- Complete other responsibilities as assigned.
What You Bring
- Currently pursuing a Master's or PhD in Computer Science or Electrical Engineering preferred.
- Research experience with hardware architectures such as CPUs, GPUs, TPUs, and NPUs.
- Published papers at microarchitecture or computer architecture conferences preferred.
- Experience developing simulators for high-performance computing systems.
- Understanding of the system-level characteristics of LLM, DLRM, CNN, and other ML workloads.
- Familiarity with PyTorch, TensorFlow, or JAX.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- You’re inclusive, adapting your style to the situation and diverse global norms of our people.
- An avid learner, you approach challenges with curiosity and resilience, seeking data to help build understanding.
- You’re collaborative, building relationships, humbly offering support and openly welcoming approaches.
- Innovative and creative, you proactively explore new ideas and adapt quickly to change.