Technical / Functional Skills
Staff Data Engineering role: we are seeking someone with hands-on experience in cloud infrastructure, strong PySpark / Spark skills, CI/CD, software engineering fundamentals, and data engineering.
This role will be responsible for multiple products and multiple stakeholders, so we are looking for a candidate with extensive hands-on experience who enjoys learning and sharing the latest trends in the data engineering space.
Tech Stack: Snowflake, Databricks, Python, PySpark, SQL, and Azure.
Experience Required
As a Staff Data Engineer, the candidate will be part of a Data Engineering team that is focused on making critical data available to our business teams.
Using the agile framework, the candidate will build end-to-end pipelines based on rigorous engineering standards and coding practices to deliver data that is accessible and of the highest quality.
A Staff Data Engineer will also contribute to the modernization of our architecture and tools to help increase our output, scalability, and speed.
Roles & Responsibilities
- Lead the Data Engineering Team to develop, test, document, and support scalable data pipelines.
- Build out new data integrations including APIs to support continuing increases in data volume and complexity.
- Establish and follow data governance processes and guidelines to ensure data availability, usability, consistency, integrity, and security.
- Build and implement scalable solutions that align to our data governance standards and architectural road maps for data integrations, data storage, reporting, and analytic solutions.
- Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility, and fostering data-driven decision-making across the organization.
- Design and develop data integrations and a data quality framework. Write unit, integration, and functional tests and document work.
- Design, implement, and automate deployment of our distributed system for collecting and processing streaming events from multiple sources.
- Perform data analysis to troubleshoot and help resolve data-related issues.
- Guide and mentor junior engineers on coding best practices and optimization.
Qualifications
- Education: 4-year college degree or equivalent combination of education and experience. An academic background in Computer Science, Mathematics, Statistics, or a related technical field is preferred.
- 8 years of relevant work experience in analytics, data engineering, business intelligence, or related field.
- Skilled in object-oriented programming (Python in particular).
- Strong experience in Python, PySpark, and SQL.
- Strong experience in Databricks & Snowflake.
- Experience developing integrations across multiple systems and APIs.
- Experience with or knowledge of Agile software development methodologies.
- Experience with cloud-based databases, specifically Azure technologies (e.g., Azure Data Lake, Azure Data Factory (ADF), Azure DevOps, and Azure Functions).
- Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets.
- Experience with data warehouse technologies and creating ETL and/or ELT jobs.
- Excellent problem-solving and troubleshooting skills.
- Process-oriented with great documentation skills.
- Experience designing data schemas and operating SQL/NoSQL database systems is a plus.