About Foundation:
Foundation is developing the future of general-purpose robotics and electric mobility, with the goal of saving lives and augmenting human labor.
Our mission is to create an autonomous, crewless ATV built to be truly all-terrain. Designed for commercial and defense use, it can save lives and take on heavy transport across mining, agriculture, construction, and more.
We are on the lookout for extraordinary engineers and scientists to join our team. Previous experience in robotics isn't a prerequisite; it's your talent and determination that truly count.
We expect that many of our team members will bring diverse perspectives from various industries and fields. We are looking for individuals with a proven record of exceptional ability and a history of creating things that work.
Our Culture:
We like to be frank and honest about who we are, so that people can decide for themselves if this is a culture they resonate with. Please read more about our culture here: https://foundation.bot/culture.
Who should join:
- You deeply believe that this is the most important mission for humanity and needs to happen yesterday.
- You are highly technical - regardless of the role you are in. We are building technology; you need to understand technology well.
- You care about aesthetics and design inside out. If it's not the best product ever, it bothers you, and you need to fix it.
- You don't need someone to motivate you; you get things done.
Overview:
We're seeking a Perception Engineer to develop the perception stack for our autonomous, crewless ATV.
Why We Are Hiring for This Role:
- Develop perception pipelines using LiDAR, cameras, and GPS/IMU for off-road environments.
- Implement algorithms for localization, mapping (SLAM), and terrain/obstacle detection.
- Build systems that allow the ATV to recognize trails, drivable areas, and hazards.
- Work closely with the planning engineer to provide reliable world models for navigation.
- Validate the perception stack in both simulation and real-world field tests.
- Calibrate and synchronize multiple sensors (LiDAR, cameras, IMU, GPS) for reliable perception.
- Optimize perception algorithms for real-time performance on embedded compute hardware.
- Develop tools for visualizing and debugging sensor data during field testing.
- Integrate learning-based perception models (e.g., segmentation, object detection, drivable area inference) into the autonomy stack.
- Establish metrics and regression frameworks to quantify perception accuracy, latency, and robustness over diverse terrains.
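As a purely illustrative sketch of the kind of sensor-fusion and calibration work listed above (not Foundation's actual stack), here is a minimal NumPy example that projects LiDAR points into a camera image using an assumed extrinsic transform and intrinsic matrix; all values are hypothetical placeholders.

```python
# Illustrative only: project LiDAR points into a camera image frame.
# The extrinsic and intrinsic matrices below are made-up examples.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project (N, 3) LiDAR points into pixel coordinates.

    points_lidar : (N, 3) points in the LiDAR frame
    T_cam_lidar  : (4, 4) extrinsic transform, LiDAR -> camera
    K            : (3, 3) camera intrinsic matrix
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points with positive depth (in front of the camera).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

if __name__ == "__main__":
    # Toy example with random points and a hypothetical calibration.
    points = np.random.uniform(-10, 10, size=(1000, 3))
    T = np.eye(4)
    T[:3, 3] = [0.0, -0.2, -0.5]          # hypothetical LiDAR-to-camera offset
    K = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])
    pixels = project_lidar_to_image(points, T, K)
    print(f"{len(pixels)} of {len(points)} points project in front of the camera")
```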
What Kind of Person We Are Looking For:
- Strong background in computer vision, robotics, or related fields.
- Proficiency in ROS2, C++, and Python.
- Experience with LiDAR and camera-based perception, 3D point cloud processing, and SLAM.
- Familiarity with probabilistic sensor modeling and uncertainty quantification in perception outputs.
- Experience integrating perception outputs with downstream planning and control (e.g., costmaps, obstacle grids).
- Proficiency using modern visual-inertial odometry frameworks (e.g., VINS-Fusion, ORB-SLAM3).
- Comfort deploying and debugging algorithms in outdoor, off-road conditions.
- Experience fusing data from multiple sensors (camera + LiDAR + IMU).
- Familiarity with point-cloud libraries (e.g., PCL, Open3D) or vision libraries (OpenCV).
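For flavor, a trivial ROS2 (rclpy) node of the sort this role touches day to day; the /lidar/points topic name is hypothetical and this is an illustration, not part of Foundation's codebase.

```python
# Illustrative only: a minimal ROS2 node that subscribes to a LiDAR
# point cloud topic and logs the point count of each incoming message.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2

class CloudListener(Node):
    def __init__(self):
        super().__init__('cloud_listener')
        # Topic name is an assumption for the example.
        self.sub = self.create_subscription(
            PointCloud2, '/lidar/points', self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2):
        self.get_logger().info(f'cloud with {msg.width * msg.height} points')

def main():
    rclpy.init()
    node = CloudListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```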
Benefits:
We provide market-standard benefits (health, vision, dental, 401k, etc.). Join us for the culture and the mission, not for the benefits.
Salary:
The annual compensation is expected to be between $100,000 and $1,000,000. Exact compensation may vary based on skills, experience, and location.