Job Description
- 3+ Years of Development / Data Engineering Experience
- 2+ Years Developing with AWS Cloud solutions
- 2+ Years Developing in Spark / PySpark
- 2+ Years Developing AWS Glue ETL
- 2+ Years with AWS Storage models (S3 and DynamoDB)
- Some hands-on development with on-prem ETL tooling (Ab Initio preferred, or Informatica)
Requirements
- Bachelor's degree in Computer Science or a related field
- Minimum of 12+ years' experience in a similar role
- Strong experience with AWS Glue, EMR, and Hudi; experience extracting data from multiple sources and loading data into data lakes and AWS Redshift
- Experience with AWS Elasticsearch, RDS, and PostgreSQL preferred
- Experience with AWS services such as Lambda, EMR, SNS/SQS, EventBridge, Lake Formation, and Athena
- Experience integrating applications/systems (data producers) with enterprise Kafka topics (Confluent Kafka integration with AWS S3 and Redshift)
- Experience in requirements analysis, data analysis, application design, application development, and integration testing
- Working knowledge of on-prem extraction, transformation, cleansing, and loading methodology and principles
- Experience implementing and contributing to DevOps practices (GitLab, maintaining CI/CD pipelines)
- Experience with Java services is nice to have
Please note that this role is for USC and GC visa holders only.
Interested and qualified candidates should send their resume to uunyime@modus-lights.com using the job title as the subject of the email.