A financial firm is looking for a Data Engineer to join their team in Alpharetta, GA.
Pay: $69/hr
Hybrid - 3 days per week onsite
Responsibilities:
- Able to establish, modify, or maintain data structures and associated components according to design
- Understands and documents business data requirements
- Able to develop conceptual and logical data models at the enterprise and business unit/domain levels
- Understands XML/JSON and schema development/reuse, database concepts, database design, and open-source and NoSQL concepts
- Works with Sr. Data Engineers and Sr. Data Architects to create platform-level data models and database designs
- Takes part in reviews of own work and reviews of colleagues' work
- Able to participate in the assigned team's software delivery methodology (Agile, Scrum, Test-Driven Development, Waterfall, etc.) in support of data engineering pipeline development
- Understands infrastructure technologies and components like servers, databases, and networking concepts
- Writes code to develop, maintain, and optimize batch and event-driven pipelines for storing, managing, and analyzing large volumes of both structured and unstructured data
- Integrates metadata into data pipelines
- Automates build and deployment processes using Jenkins across all environments to enable faster, higher-quality releases
Qualifications:
Up to 4 years of software development experience in a professional environment and/or comparable experience, such as:
- Understanding of Agile or other rapid application development methods
- Exposure to design and development across one or more database management systems (DB2, SybaseIQ, Snowflake) as appropriate
- Exposure to methods relating to application and database design, development, and automated testing
- Understanding of big data technology and NoSQL design and development with a variety of data stores (document, column family, graph, etc.)
- Knowledge of distributed (multi-tiered) systems, algorithms, and relational & non-relational databases
- Experience with Linux and Python scripting, as well as large-scale data processing technology such as Spark
- Experience with cloud technologies such as AWS and Azure, including deployment, management, and optimization of data analytics and data science pipelines
- Nice to have: Collibra, Terraform, Java, Golang, Ruby, Machine Learning Operations (MLOps) deployment
- Bachelor’s degree in computer science, computer science engineering, or related field required