Data Engineer

RIT Solutions, Inc. | Minneapolis, MN, United States
Job Type : Full-time

Data Engineer (ADF, Orchestration, Data Flows, Snowflake) | REMOTE

Minneapolis, MN - Remote

Job Description

  • 10+ years of IT experience
  • Candidates should ideally have a background in SSIS / SQL and be transitioning to Azure Data Factory (data integration) with Snowflake as the source and target endpoint
  • Azure comprises many components, such as Fabric, Databricks, ADF, and PySpark
  • Candidates must be able to use ADF for both orchestration and integration
  • Many candidates claim ADF experience but, for orchestration, really just use PySpark to trigger jobs through ADF
  • Candidates must also be advanced at building the integration through data flows logic

100% Telecommute

Work Hours : 9am-5pm CST

Project :

  • As a member of the Data Management team, the Data Engineer supports the Alabama EDS by developing and maintaining workflows, identifying and resolving data quality issues, and optimizing processes to improve performance.
  • The Data Engineer will also support intrastate agencies by monitoring automated data extracts and working directly with state partners to create new extracts based on business specifications.
Responsibilities :

  • Develop and manage effective working relationships with other departments, groups, and personnel with whom work must be coordinated or interfaced
  • Communicate efficiently with the ETL architect, understanding the requirements and business process knowledge in order to transform the data in a way that is geared toward the needs of end users
  • Assist in the overall architecture of the ETL design, and proactively provide input on designing, implementing, and automating the ETL flows
  • Investigate and mine data to identify potential issues within ETL pipelines, notify end users, and propose adequate solutions
  • Develop ETL pipelines and data flows in and out of the data warehouse using a combination of Azure Data Factory and Snowflake toolsets
  • Develop idempotent ETL process designs so that interrupted, incomplete, or failed processes can be rerun without errors, using ADF data flows and pipelines (see the rerunnable-load sketch after this list)
  • Work in Snowflake virtual warehouses as needed and automate data pipelines using Snowpipe for repetitive ETL problems (see the Snowpipe sketch after this list)
  • Capture changes in data dimensions and maintain versions of them using Streams in Snowflake, scheduling the processing with Tasks (see the change-capture sketch after this list)
  • Optimize every step of the data movement, not only at the source and in transit but also at rest in the database, for accelerated responses
  • Build a highly efficient orchestrator that can schedule jobs, execute workflows, perform data quality checks, and coordinate dependencies among tasks (see the task-graph sketch after this list)
  • Test ETL system code, data design, pipelines, and data flows; perform root cause analysis on all processes, resolve production issues, and run routine tests on databases, data flows, and pipelines
  • Document the implementations and test cases, and build the deployment documents needed for CI / CD
  • Ideal background : Data Engineer with healthcare (Medicaid) experience and Microsoft Azure-based experience with Snowflake and Azure Data Factory
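
To illustrate the rerunnable-load item above, here is a minimal sketch in Snowflake SQL of an idempotent load step. The object names (stg_claims, dw_claims, claim_id, load_batch_id) are hypothetical, and the batch key would be supplied by the calling ADF pipeline:

    -- Minimal idempotent load sketch : rerunning the same batch yields the
    -- same end state, because MERGE updates existing rows instead of
    -- inserting duplicates. All object names here are hypothetical.
    MERGE INTO dw_claims AS tgt
    USING (
        SELECT claim_id, member_id, claim_amount, updated_at
        FROM stg_claims
        WHERE load_batch_id = '2024-01-15'  -- batch key passed in by the ADF pipeline
    ) AS src
        ON tgt.claim_id = src.claim_id
    WHEN MATCHED THEN UPDATE SET
        tgt.member_id    = src.member_id,
        tgt.claim_amount = src.claim_amount,
        tgt.updated_at   = src.updated_at
    WHEN NOT MATCHED THEN INSERT (claim_id, member_id, claim_amount, updated_at)
        VALUES (src.claim_id, src.member_id, src.claim_amount, src.updated_at);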
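
For the Snowpipe item, a minimal sketch of an auto-ingesting pipe; the stage and table names are hypothetical, and AUTO_INGEST assumes an external stage with cloud event notifications already configured:

    -- Hypothetical Snowpipe : loads files automatically as they land in the
    -- external stage, replacing a hand-run COPY for repetitive ingestion.
    CREATE OR REPLACE PIPE claims_pipe
      AUTO_INGEST = TRUE
    AS
      COPY INTO raw_claims
      FROM @claims_stage
      FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);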
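
For the change-capture item, a minimal Streams-plus-Tasks sketch: the Stream records row-level changes on a source table, and a scheduled Task versions them into a history table. Object names (raw_members, dim_member_history, etl_wh) are hypothetical:

    -- A Stream tracks inserts / updates / deletes on the source table.
    CREATE OR REPLACE STREAM member_changes ON TABLE raw_members;

    -- A Task drains the stream on a schedule and appends versioned rows.
    CREATE OR REPLACE TASK apply_member_changes
      WAREHOUSE = etl_wh
      SCHEDULE  = '15 MINUTE'
    AS
      INSERT INTO dim_member_history (member_id, attrs, valid_from, change_type)
      SELECT member_id, attrs, CURRENT_TIMESTAMP(), METADATA$ACTION
      FROM member_changes;

    ALTER TASK apply_member_changes RESUME;  -- tasks are created suspended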
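
And for the orchestration item, a sketch of a small Snowflake task graph: a data quality check runs only after the load task succeeds, covering scheduling, dependency coordination, and quality gating. The stored procedure, tables, and quality rule are hypothetical:

    -- Root task : runs the (hypothetical) load procedure on a cron schedule.
    CREATE OR REPLACE TASK load_claims
      WAREHOUSE = etl_wh
      SCHEDULE  = 'USING CRON 0 6 * * * America/Chicago'
    AS
      CALL sp_load_claims();

    -- Child task : runs only after load_claims succeeds, and records any
    -- rows that violate an example quality rule (no negative amounts).
    CREATE OR REPLACE TASK check_claims_quality
      WAREHOUSE = etl_wh
      AFTER load_claims
    AS
      INSERT INTO dq_failures
      SELECT 'dw_claims', COUNT(*), CURRENT_TIMESTAMP()
      FROM dw_claims
      WHERE claim_amount < 0
      HAVING COUNT(*) > 0;

    -- Resume child tasks before the root so the whole graph is runnable.
    ALTER TASK check_claims_quality RESUME;
    ALTER TASK load_claims RESUME;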

    TOP REQUIREMENTS :

  • 5+ years of Data engineering experience with a focus on Data Warehousing
  • 2+ years of experience creating pipelines in Azure Data Factory (ADF)
  • 3+ years of experience creating stored procedures with Oracle PL / SQL, SQL Server T-SQL, or Snowflake SQL
    Required :

  • 5+ years of Data engineering experience with a focus on Data Warehousing
  • 2+ years of experience creating pipelines in Azure Data Factory (ADF)
  • 5+ years developing ETL using Informatica PowerCenter, SSIS, Azure Data Factory, or similar tools.
  • 5+ years of experience with Relational Databases, such as Oracle, Snowflake, SQL Server, etc.
  • 3+ years of experience creating stored procedures with Oracle PL / SQL, SQL Server T-SQL, or Snowflake SQL
  • 2+ years of experience with GitHub, SVN, or similar source control systems
  • 2+ years of experience processing structured and unstructured data.
  • Experience with HL7 and FHIR standards, and processing files in these formats.
  • 3+ years analyzing project requirements and developing detailed specifications for ETL requirements.
  • Excellent problem-solving and analytical skills, with the ability to troubleshoot and optimize data pipelines.
  • Ability to adapt to evolving technologies and changing business requirements.
  • Bachelor's or advanced degree in a related field such as Information Technology / Computer Science, Mathematics / Statistics, Analytics, or Business
    Preferred :

  • 2+ years of batch or PowerShell scripting
  • 2+ years of experience with Python scripting.
  • 3+ years of data modeling experience in a data warehouse environment
  • Experience or familiarity with Informatica Intelligent Cloud Services (specifically Data Integration)
  • Experience designing and building APIs in Snowflake and ADF (e.g. REST, RPC)
  • Experience with State Medicaid / Medicare / Healthcare applications
  • Azure certifications related to data engineering or data analytics.
