Senior Software Engineer - Backend / Distributed Systems
A Senior Software Engineer with a backend / distributed systems background centred on large-scale databases, data processing or data streaming, and coding skills in at least one of Go (Golang), Python, Java or C++, is required by a respected and well-funded start-up in San Francisco's cloud data security space to work in product engineering and deliver a massive-scale platform reliably and securely.
You will work alongside founding engineers from Amazon, Uber, Google, Stripe, Datadog and other leading companies.
After reading this description, submit your CV and any additional required information by clicking the application button.
Salary: Circa $170-200k + Equity + Benefits (Health, Dental, Vision, 401K, Paid Leave, Fitness, Mental Health, Paternity Leave); H-1B visa transfer considered
Location: Hybrid role, 2 days per week in the San Francisco office
This Senior Software Engineer will apply their skills to complex platform architecture, coding and reliability on a platform of low-latency, real-time microservices and streaming data that processes and protects substantial data volumes for millions of users.
Primary technologies used include: Go, Python, Linux, Kafka, AWS, Redis, Docker, Kubernetes, Cassandra, Terraform, Envoy, Node.js
Minimum Requirements:
- Strong coding ability in at least one of Go (Golang), Python, Java or C++, with a polyglot mindset
- Experience working in a fast-paced start-up environment
- Strong skills in backend systems at scale, such as distributed or event-driven systems, low-latency services, multi-threading or machine learning
- Proven experience running systems at scale, handling thousands of transactions per second (TPS) / requests per second (RPS), and in reliability engineering
- Experience developing complex software systems scaling to substantial data volumes or millions of users, with production-quality deployment, monitoring and reliability
- Experience integrating with third-party APIs
- Data processing - experience building and maintaining large-scale and/or real-time complex data processing pipelines using Kafka, Hadoop, Hive, Storm or ZooKeeper
- Experience with large-scale distributed storage and database systems (SQL or NoSQL, e.g. MySQL, Cassandra)
- Ability to solve complex business problems and take a senior role within the team in solving them
Key Responsibilities:
- Working in product-centric engineering to architect low-latency distributed systems and real-time microservices
- Building highly available and secure authentication and API services
- Optimizing and operating high-volume auto-scaling streaming data services
- Maintaining and evolving mission-critical internal databases and services
- Instrumenting streaming data services for visibility into utilization per customer
- Writing and maintaining documentation about internal and public services
- Working with product and other stakeholders to define features
This is an outstanding opportunity to flex your skills and develop your career within a fast-growing, well-funded start-up of around 200 people, backed by VCs and major tech companies.
Opus Resourcing acts as a recruitment agency and helps a number of world-leading brands to source the best possible talent.