
Senior Deep Learning Compiler Engineer (NPU)

Advanced Micro Devices, Inc
San Jose, California, United States
Full-time

WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world.

Our mission is to build great products that accelerate next-generation computing experiences: the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded.

Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges.

We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance.

The Role

IREE is an open-source, MLIR-based compilation stack that supports compilation of ML models on multiple target architectures.

For many of these architectures, such as x86, ARM, and RISC-V, as well as some NPUs, LLVM compilation is the last layer of the stack.

In this role, you will enhance LLVM compilation for current and future AMD NPU devices. The role is central to achieving good performance on these devices using IREE and will have a direct impact on the effective deployment of ML models on such hardware form factors.
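For context (this sketch is not part of the posting): a minimal illustration of the top of this stack, using IREE's Python compiler API to lower a small MLIR function through the LLVM-based CPU backend. The pip package name, backend name, and file layout are assumptions for illustration; AMD NPU targets use different backend configurations that are not shown here.

```python
# Illustrative only: a tiny end-to-end IREE compile, assuming the
# `iree-compiler` pip package is installed. The llvm-cpu backend stands in
# for the LLVM-based final compilation layer described above; AMD NPU
# backends use different target names.
import iree.compiler as ireec

MLIR_MODULE = """
func.func @add(%lhs: tensor<4xf32>, %rhs: tensor<4xf32>) -> tensor<4xf32> {
  %sum = arith.addf %lhs, %rhs : tensor<4xf32>
  return %sum : tensor<4xf32>
}
"""

# Lower the MLIR module through IREE's pipelines; LLVM code generation
# runs as the last layer for the llvm-cpu target.
vmfb = ireec.compile_str(MLIR_MODULE, target_backends=["llvm-cpu"])

# The result is an IREE VM FlatBuffer that iree-run-module or the
# iree.runtime Python bindings can load and execute.
with open("add.vmfb", "wb") as f:
    f.write(vmfb)
```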

The person

This role is ideal for someone who has experience with LLVM and knows, or is interested in learning, the best way to achieve good performance on a given architecture.

This person must be able to understand the current MLIR/LLVM-based compilation flow in order to effectively identify opportunities for optimization at various levels of the stack.

They must be able to design and implement these optimizations in either LLVM or MLIR to improve the binary generated by the compiler.

The person must also enjoy working on open-source projects such as MLIR, LLVM, and IREE and be able to engage with their communities effectively.

This role is ideal for someone who might be new to MLIR but is interested in contributing to it.

Key responsibilities

- Support and contribute to AMD NPU backend compilation in LLVM.
- Understand current and upcoming architecture features of AMD NPUs and help design the compiler strategy to target these features effectively within IREE.
- Plan and design the compiler transformations in MLIR or LLVM needed to generate efficient code.
- Contribute to and engage with the open-source LLVM, MLIR, and IREE communities.
- Maintain a high level of code quality and testing.

Preferred experience

- Bachelor’s, Master’s, or PhD in computer science or a related field.
- Multiple years of experience working with an LLVM-based compiler; MLIR experience is optional.
- A known history of contributions to open-source projects is preferred.
- Prior experience with ML compilers is optional but preferred.
- Experience with fuzzers and reducers is a plus.

Location

San Jose, Seattle

At AMD, your base pay is one part of your total rewards package.

Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position.

You may be eligible for incentives based on your role, such as an annual bonus or a sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD’s Employee Stock Purchase Plan.

You’ll also be eligible for competitive benefits described in more detail here. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.

AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and / or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.

We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.

