Position Description
Our Staff Engineer is a lead member of the engineering staff, working across the organization to provide a frictionless experience to our customers and maintain the highest standards of protection and availability.
Our team thrives and succeeds in building and delivering high-quality technology products and services while influencing best practices in a hyper-growth environment as priorities evolve.
The ideal candidate has broad and deep technical knowledge, typically ranging from managing backend resources to system reliability and all points in between.
They will have advanced experience and deep expertise in platform and data engineering to build and manage a finance data lake, with multiple edge source integrations, from the ground up.
Position Responsibilities
- Take ownership and proactively drive execution and management of end-to-end Data Lakehouse for Finance Data
- Focus on multiple areas and provide leadership to the engineering teams
- Own complete solution across its entire life cycle
- Influence and build vision with engineering leadership, team members, customers, and other engineering teams to solve complex problems for building enterprise-class business applications
- Accountable for the automation, quality, usability, and performance of the solutions
- Lead in design sessions and code reviews to elevate the quality of engineering across the organization
- Utilize programming languages such as Python and SQL, NoSQL databases, container orchestration and infrastructure tooling including Terraform, Docker, and Kubernetes, and a variety of Azure tools and services to build an event-driven big data streaming platform for an ELT data pipeline
- Mentor junior team members professionally to help them realize their full potential
- Consistently share best practices and improve processes within and across teams
Qualifications
- Fluency and specialization in at least two modern languages, such as Java or Python, including object-oriented design, as well as PowerShell scripting
- Experience architecting and designing new and existing systems (architecture, design patterns, reliability, and scaling)
- Experience with event-driven big data streaming infrastructure and ETL/ELT frameworks (e.g., Spark Streaming, Flink, Kafka, Hive, Hadoop, Airflow)
- Experience with deploying highly robust and scalable data pipelines processing petabytes of data
- Experience working with Hadoop, SQL, and NoSQL platforms
- Experience with various table and file formats such as Iceberg, Avro, JSON, and Parquet
- Fluency with DevOps concepts, including containerization, test automation, CI/CD, and infrastructure as code, using tools such as GitHub, Kubernetes, Docker, Terraform, Helm, Ansible, and Chef
- Experience with the Azure ecosystem: Azure DevOps, Azure Data Lake, Azure Data Factory, Azure Databricks, and Azure Storage
- Experience with observability and monitoring platforms for telemetry, alerts, monitoring, SLAs, and SLOs, using Grafana, Azure Monitor, Application Insights, Dynatrace, or equivalents
- Experience with performance tuning for applications processing large volumes of data
- Experience with Load Testing and Quality Assurance
- Strong verbal and written communication skills
Experience
- 6+ years of professional experience in data software development, programming languages, and big data technologies
- 4+ years of experience in open-source frameworks
- 3+ years of experience with architecture and design
- 3+ years of experience with AWS, GCP, Azure, or another cloud service
Education
Bachelor’s degree in Computer Science, Information Systems, or equivalent education or work experience
Remote working / work at home options are available for this role.