
Principal AI/ML Software Architect

Advanced Micro Devices, Inc.
San Jose, California, United States
Full-time

WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world.

Our mission is to build great products that accelerate next-generation computing experiences: the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded.

Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges.

We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance.

THE ROLE:

AMD is looking for an AI/ML software architect who is passionate about improving the performance of key machine learning applications and benchmarks on NPUs.

You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:

We are looking for a dynamic, energetic software architect to join our growing team in the AI group. As an ML software stack architect, you will be responsible for architecting the runtime stack and defining operator mapping, dataflow, and scheduling on AMD's XDNA Neural Processing Units, which power cutting-edge generative models such as Stable Diffusion, SDXL-Turbo, and Llama 2.

Your work will directly impact the efficiency, scalability, and reliability of our ML applications. If you thrive in a fast-paced environment and love working on cutting-edge machine learning inference, this role is for you.

Communicate effectively and work optimally with different teams across AMD.

KEY RESPONSIBILITIES:

- Define the software stack that interfaces with open-source runtime environments such as ONNX and PyTorch, as well as the NPU compiler
- Define runtime operator scheduling, memory management, and operator dataflow based on tensor residency
- Propose algorithmic optimizations in operators that are mapped to the CPU using AVX-512
- Interface with ONNX/PyTorch runtime engines to deploy models on CPUs
- Develop efficient model-loading mechanisms to minimize startup latency
- Collaborate with kernel developers to integrate ML operators seamlessly into high-level ML frameworks
- Design and implement C++ runtime wrappers, APIs, and frameworks for ML model execution
- Architect optimized CPU alternative implementations for ML operators that are not supported on NPUs

PREFERRED EXPERIENCE:

- Detailed and thorough understanding of the ONNX and PyTorch runtime stacks and open-source frameworks
- Strong experience scheduling operators across NPU, GPU, and CPU
- Experience with graph parsing and operator fusion
- Strong experience with the AVX and AVX-512 instruction sets and CPU cache behavior
- Strong experience managing system memory
- Detailed understanding of compiler interfacing with the runtime stack and the JIT compilation flow
- Strong programming skills in C++ and Python

- Experience with ML frameworks (e.g., TensorFlow, PyTorch) is required
- Experience with ML models such as CNNs, LSTMs, LLMs, and diffusion models is a must
- Experience with the ONNX and PyTorch runtime stacks is a must
- Knowledge of parallel computing is a bonus
- Familiarity with containerization (Docker, Anaconda, etc.) is good to have
- Motivating leader with good interpersonal skills

ACADEMIC CREDENTIALS:

PhD degree in Computer Science, Computer Engineering, or Electrical Engineering

Location: San Jose, CA

#LI-JT1

At AMD, your base pay is one part of your total rewards package.

Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position.

You may be eligible for incentives based upon your role, such as an annual bonus or a sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD's Employee Stock Purchase Plan.

You’ll also be eligible for competitive benefits described in more detail here. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.

AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.

We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
