Join our pioneering team as an AI/ML Performance Engineer for LLM Acceleration, where your expertise will help shape the future of artificial general intelligence (AGI) through cutting-edge hardware solutions.
Our mission is to develop the premier compute platform for AGI by creating hardware that enhances AI's speed, efficiency, and affordability.
Unlike traditional approaches that generalize across all ML models, our focus is exclusively on large language models (LLMs), enabling our hardware and software to achieve unparalleled simplicity and performance.
About Our Mission:
We are dedicated to advancing AGI by focusing solely on LLMs, recognizing that their requirements differ from those of other ML models.
This focus allows us to streamline our hardware and software, making AI technologies faster, more capable, and more cost-effective.
By concentrating on LLMs, we deliver optimized solutions that outperform generic platforms, setting new standards in the AI field.
About the Role:
Your mission will be to drive optimizations that enable LLMs to operate seamlessly and efficiently on cutting-edge hardware.
You'll tackle challenges spanning quality evaluation, the development of distributed infrastructure for scalable training and inference, and advising on hardware optimizations for ML applications.
What You'll Get:
- A highly competitive salary
- An inclusive, supportive workplace culture
- The chance to impact the future of AI and ML technology
Key Responsibilities:
- Enhance LLM performance through hardware optimizations
- Lead quality evaluations and infrastructure development for ML scalability
- Advise on hardware design from an ML optimization perspective
We're Looking For:
- Expertise in software engineering and ML model optimization
- Experience with neural network training, particularly LLMs
- Insight into optimizing neural networks for hardware efficiency
This role is based in Mountain View with a hybrid work model, and is ideal for those ready to influence the integration of AI and ML with hardware acceleration.