Role - Data Engineer
Location - Dallas, Texas
Hybrid - 2 weeks onsite per month
MUST HAVE: ETL (SnapLogic or Informatica), relational databases (Oracle, PostgreSQL, etc.), NoSQL databases (DynamoDB, Elasticsearch), Python, AWS, Kafka.
The Expertise and Skills You Bring
- Bachelor’s or Master’s degree in a technology-related field (e.g., Computer Engineering, Computer Science, etc.)
- Proven track record as a data engineer, crafting new solutions and re-platforming legacy data products
- Knowledge of how to effectively use multiple types of databases, such as relational databases (Oracle, PostgreSQL, etc.), NoSQL databases (DynamoDB, Elasticsearch), and graph databases (Neptune, Neo4j, etc.)
- Hands-on experience building data pipelines and ETL/ELT processes on AWS using Python, Java, etc.
- Demonstrated ability to synthesize and analyze data from multiple sources and derive insights from it
- Demonstrated experience developing, debugging, and tuning complex SQL statements
- Experience with Kafka data streaming (or other streaming/messaging services such as Kinesis, SNS, SQS)
- Experience with DevOps or CI/CD pipelines using Git, Maven, Jenkins, uDeploy, Stash, Ansible, etc.
- Ability to validate, monitor, and resolve issues during development, testing, or in production
- Proven knowledge of AWS via Associate, Professional, or Specialty Certification(s) a big plus
- Knowledge of RESTful API development and container processing is a plus
- Excellent written and verbal communication skills
- Ability to work effectively in global teams distributed across geographic locations, following Agile practices
- Excellent facilitation, influencing, and negotiation skills
- Desire and ability to learn and implement new technologies
- Keen ability to see complex challenges from multiple perspectives, and the initiative to solve them independently or with others