Job Description
Senior Data Engineer
Provo, Utah (Hybrid)
$160-180K / year
No Corp-to-Corp or sponsorship
As a Senior Data Engineer focusing on data automation and quality, you will be expected to take a highly programmatic approach to solving the company’s data needs.
This includes leveraging Python and ANSI SQL to build data pipelines and quality tools using Spark, Databricks DLT pipelines, and other methods on the Databricks platform.
This job is focused less on data architecture and structure and more on building resilient software. As such, you should expect to be developing Python code daily.
In this role, a successful data engineer is one who wants to build and maintain environments that run millions of lines of code with few or no failures while maintaining a high degree of accuracy.
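For context, the sketch below is a purely illustrative example of the kind of pipeline-plus-quality work described above: a minimal Databricks Delta Live Tables (DLT) pipeline in Python with declarative data quality expectations. The table names, columns, and storage path (raw_orders, orders_clean, order_id, amount, /mnt/landing/orders) are hypothetical placeholders, not details from this posting.

import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders ingested from cloud storage via Auto Loader.")
def raw_orders():
    # 'spark' is provided by the DLT runtime; the landing path is hypothetical.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders")
    )

@dlt.table(comment="Orders that passed basic quality expectations.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount > 0")
def orders_clean():
    # The ANSI SQL expressions in the expectations above drop failing rows
    # and record pass/fail counts in the pipeline's quality metrics.
    return dlt.read_stream("raw_orders").select(
        col("order_id").cast("long"),
        col("amount").cast("double"),
    )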
Education
Bachelor’s degree (BS) in Computer Science, Statistics, Informatics, Information Systems, or another math-based technical field that requires significant programming coursework to complete the degree (e.g., EcEn, Econ, Stats), or equivalent industry experience.
Desired: a two-year master’s degree (MS) in Computer Science or another math-based technical field that requires significant programming.
Experience
- 3-5 years of technical data engineering experience, specifically focused on automation and quality, using Python and Spark-based technologies to process and automate data.
- 3-5 years of experience designing cloud-based data pipelines using a variety of tools (e.g., Databricks, Fivetran, or similar).
- 3-5 years of experience with cloud data technologies (e.g., Snowflake, AWS, Azure) and industry best practices for data in cloud platforms.
Knowledge, Skills, and Essential Abilities
- Prior experience as a Data Engineer or similar technical data professional
- Deep knowledge of Databricks and experience processing both streaming (Kinesis) and batch data at scale using Python.
- A strong desire to be a team player and build a collaborative environment.
- A very strong understanding of Python and ANSI SQL, including accessing remote data outside the base environment using HTTP, REST APIs, etc.
- Demonstrated ability to learn new platforms and technologies.
- Build and maintain the data infrastructure required to transform OLTP data sources into usable data structures optimized for business intelligence and data engineering.
- Design, implement, and maintain next-generation data pipelines using industry-leading technologies such as Databricks.
- In collaboration with IT and other business stakeholders, consolidate multiple complex data silos into a single enterprise data warehouse used for companywide data engineering and analytics.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Engineering, IT, and other Data Services teams to assist with data-related technical issues and support their data infrastructure needs.
- Ensure compliance with security, data retention, and privacy policies to keep our data secure across multiple data centers and cloud vendors.
- Create data tools for Data Engineering team members that assist them in building and optimizing our services.
- Maintain current data pipelines to support business needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Other duties as assigned by management.
- Experience using multiple programming languages (e.g., SQL, Spark, Python) to create reproducible, automated processes that deliver data infrastructure and products for business users.
- Advanced SQL knowledge and query optimization.
- Experience building, maintaining, and optimizing (balancing speed and cost) cloud data warehouses.
- Deep knowledge of OLAP vs. OLTP database architectures.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience working with APIs and command-line interface tools to automate data extraction, loading, and transformation.
- Experience working with business users to develop automated quality assurance error checks.
- Strong project management and organizational skills.
- Ability to apply analytical and critical thinking to solve complex problems.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Demonstrate outstanding ethics, integrity, and judgment.
- Exceptional written communication skills, including the ability to communicate concepts and ideas concisely and defend their validity.
- Interact with others in a professional and mature manner.
- Advanced understanding of modern data warehouse practices.
- Experience with relational SQL and NoSQL databases, including MS SQL Server.
- Experience building AWS infrastructure.
PrincePerelson & Associates is an Equal Opportunity Employer and does not discriminate against applicants on the basis of race, color, religion, sex, national origin, age, disability, genetics, veteran status, or any other federal, state, or local protected class.
All applicants applying for U.S. job openings must be authorized to work in the United States.