Lead Data Engineer - Snowflake - Kafka - SQL - Python

ZEEKTEK - St Louis, MO, United States

We have a 15-month contract, with the opportunity to extend or convert, for a highly skilled data engineering professional with 5-7 years of experience leading the development and operationalization of data pipelines in modern cloud environments. The ideal candidate is an expert in Snowflake, with strong proficiency in Kafka / streaming data architectures, SQL, and Python, and hands-on experience building scalable, production-ready data solutions. This individual excels at sourcing data from diverse systems, including relational sources such as Oracle and SQL Server and JSON-based sources such as MongoDB and Kafka, leveraging AWS ecosystem tools and vendor platforms such as Upsolver and Informatica IDMC. As a team leader, they drive architectural discussions, mentor junior engineers, enforce SDLC best practices, and ensure high-quality data ingestion, transformation, and validation processes. With strong analytical, problem-solving, and communication skills, they thrive in agile, sprint-based environments, collaborating with cross-functional teams to deliver clean, optimized data for reporting and advanced analytics. 100% Remote.

MUST HAVES:

  • 5-7+ years of experience
  • Snowflake
  • SQL / Python
  • Kafka / streaming data comprehension

NICE TO HAVES:

  • Healthcare
  • Git

Disqualifiers:

  • Missing Snowflake or SQL experience

About the Role:

The Data Sourcing team was established to help source data into our Snowflake environment. We source from multiple applications rooted in relational sources such as Oracle / SQL Server or JSON-based sources from Mongo / Kafka. We leverage different utilities to write the data to Snowflake, including, but not limited to, Upsolver and Informatica IDMC. Once the data is in Snowflake, we write it to different layers of our Snowflake applications using stored procedures written in Python.

We work with different source teams, who are SMEs on the data sets, and with downstream teams, who help further refine the data into different datasets for reporting needs. Our team works in an agile fashion in two-week sprints, leveraging agile best practices.
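The ingestion flow described above (JSON records from Mongo / Kafka landed into Snowflake, then reshaped by Python stored procedures) can be sketched in miniature. The example below is an illustrative, simplified flattening step only; the column names, event fields, and `flatten_record` helper are hypothetical and not taken from the team's actual pipeline.

```python
import json

def flatten_record(raw: str) -> dict:
    """Flatten one JSON event (e.g., from a Kafka topic or a Mongo
    change stream) into a flat dict of columns suitable for loading
    into a Snowflake landing table. Field names are hypothetical."""
    doc = json.loads(raw)
    return {
        "EVENT_ID": doc.get("id"),
        "EVENT_TYPE": doc.get("type"),
        # Nested attributes are promoted to top-level columns.
        "MEMBER_ID": doc.get("payload", {}).get("member_id"),
        # Unmapped content is kept as a raw JSON string, mirroring the
        # common pattern of loading into a Snowflake VARIANT column.
        "RAW_JSON": raw,
    }

if __name__ == "__main__":
    event = '{"id": 1, "type": "claim", "payload": {"member_id": "M42"}}'
    row = flatten_record(event)
    print(row["EVENT_ID"], row["MEMBER_ID"])  # prints "1 M42"
```

In a real deployment this logic would typically live inside a Python stored procedure or an Upsolver / Informatica IDMC transformation rather than a standalone script.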
Responsibilities:

  • This specific role is focused on our JSON-based sourcing into Snowflake, leveraging different tools in the AWS ecosystem and different vendor tools. The engineer would also work within the Snowflake platform to write data to different Snowflake tables via stored procedures.
  • This is a lead role, so the engineer would be expected to carry a heavier workload and step in to help other engineers on the team when deadlines are approaching and projects are in jeopardy.
  • As a lead, you are also expected to participate in and lead team discussions about projects and architecture. You should ensure that all aspects of the SDLC process are respected, with the Definition of Done met and all testing and validation accounted for before work is considered complete. You should also highlight areas of improvement for the team from a technical and process perspective.
  • Additionally, you would help mentor junior engineers on how to follow best practices in their role and career.
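The Definition-of-Done expectation above (testing and validation accounted for before work is complete) often translates, in practice, into a row-level validation gate before data is promoted between Snowflake layers. The sketch below is a generic, hedged illustration of that idea; the rule (required non-null columns) and any column names used by callers are assumptions, not the team's actual validation suite.

```python
def validate_rows(rows: list[dict], required: list[str]) -> tuple[list[dict], list[dict]]:
    """Split rows into (valid, rejected) lists based on required,
    non-null columns -- a minimal stand-in for a pre-promotion data
    validation gate. Column names are supplied by the caller."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(col) is not None for col in required):
            valid.append(row)
        else:
            # Rejected rows would typically be routed to an error
            # table for review rather than silently dropped.
            rejected.append(row)
    return valid, rejected

if __name__ == "__main__":
    rows = [
        {"EVENT_ID": 1, "MEMBER_ID": "M42"},
        {"EVENT_ID": 2, "MEMBER_ID": None},
    ]
    valid, rejected = validate_rows(rows, ["EVENT_ID", "MEMBER_ID"])
    print(len(valid), len(rejected))  # prints "1 1"
```

A real pipeline would also assert counts and referential checks in SQL, but the shape of the gate is the same: nothing is promoted until validation has been accounted for.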
Job Description:

Position Purpose:

    Uses advanced expertise and knowledge to lead the development and operationalization of data pipelines to make data available for consumption (reports and advanced analytics), including data ingestion, data transformation, data validation / quality, data pipeline optimization, and orchestration. Works closely with the DevSecOps Engineer during continuous integration and continuous deployment.

    Education / Experience:

    Requires a Bachelor's degree in a quantitative or business field (e.g., statistics, mathematics, engineering, computer science) and 5-7 years of related experience.

    Or equivalent experience acquired through applicable knowledge, duties, scope, and skill reflective of the level of this position.

    Technical Skills:

    One or more of the following skills are desired:

  • Experience with Big Data and data processing
  • Experience diagnosing system issues, engaging in data validation, and providing quality assurance testing
  • Experience with data manipulation and data mining
  • Experience working in a production cloud infrastructure
  • Experience with one or more of the following: programming concepts, programming tools, Python, SQL
  • Knowledge of Microsoft SQL Server and SQL
  • Experience with Kafka
  • Experience with Python
  • Experience with scheduling utilities
  • Strong comprehension of testing, streaming architectures, process monitoring, and agile workflows

    Soft Skills:

  • Intermediate - Seeks to acquire knowledge in area of specialty
  • Intermediate - Ability to identify basic problems and procedural irregularities, collect data, establish facts, and draw valid conclusions
  • Intermediate - Ability to work independently
  • Intermediate - Demonstrated analytical skills
  • Intermediate - Demonstrated project management skills
  • Intermediate - Demonstrates a high level of accuracy, even under pressure
  • Intermediate - Demonstrates excellent judgment and decision-making skills
  • Intermediate - Ability to communicate and make recommendations to upper management
  • Intermediate - Ability to drive multiple projects to successful completion
  • Intermediate - Possesses technical aptitude

