The AWS Neuron Compiler team is actively seeking skilled compiler engineers to join our efforts in developing a state-of-the-art deep learning compiler stack.
This stack is designed to optimize application models across diverse domains, including large language models and computer vision, originating from leading frameworks such as PyTorch, TensorFlow, and JAX.
Your role will involve working closely with our custom-built Machine Learning accelerators, including Inferentia and Trainium, which represent the forefront of AWS innovation for advanced ML capabilities, powering solutions like Generative AI.
Key job responsibilities
As a Sr. ML Compiler Engineer III on the Neuron Compiler Automated Reasoning Group, you will develop and maintain best-in-class tooling that raises the bar for the Neuron Compiler's accuracy and reliability.
You will help lead efforts to build fuzzers and specification-synthesis tooling for our LLVM-based compiler. You will work on a science-focused team and strive to push what we do to the edge of what is known, to best deliver for our customers.
Strong software development skills using C++ / Python are critical to this role.
A science background in compiler development is strongly preferred. A background in Machine Learning and AI accelerators is preferred, but not required.
To be considered for this role, candidates must currently be located in, or be willing to relocate to, Seattle (preferred), Cupertino, Austin, or Toronto.
BASIC QUALIFICATIONS
- 6+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- 5+ years of experience in developing compiler features and optimizations
- Proficiency in C++ and Python programming, applied to compiler or verification projects
- Familiarity with LLVM, including knowledge of abstract interpretation and polyhedral domains
- Demonstrated scientific approach to software engineering problems
PREFERRED QUALIFICATIONS
- Master's degree or PhD in computer science or equivalent
- Experience with deep learning frameworks like TensorFlow or PyTorch
- Understanding of large language model (LLM) training processes
- Knowledge of CUDA programming for GPU acceleration
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $151,300/year in our lowest geographic market up to $261,500/year in our highest geographic market.
Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience.
Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits.
For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled.
Applicants should apply via our internal or external career site.