Description:
- As a Hadoop ETL Developer supporting production (L) in Treasury ADS, the individual will be responsible for understanding the design, proposing high-level and detailed design solutions, and proposing out-of-the-box technical solutions to business and technical problems that arise in real time in production.
- Engage in discussions with the Information Architecture team to arrive at design solutions, propose new technology adoption ideas, attend project meetings, partner with near-shore and offshore teammates in a matrix environment, and coordinate with other support teams such as L, development, testing, and upstream and downstream partners.
- Perform client support activities such as analysis and coding, propose improvement ideas, and drive development activities at offshore.
- Work with Business Analysts to understand requirements and work on high-level and detailed design to address real-time issues in production.
- Work with the Information Architecture team to propose technical solutions to business problems.
- Identify gaps in technology and propose viable solutions
- Take accountability for the technical deliveries from offshore.
- Understand Hadoop, Spark, Python, and other ecosystem components such as Impala, Hive, Oozie, and Pig, as well as Autosys and UNIX shell scripting.
- Work with the development teams and QA during the post code development phase
- Identify improvement areas within the application and work with the respective teams to implement them.
- Ensure adherence to defined process and quality standards, best practices, and high quality levels in all deliverables.
- Adhere to team’s governing principles and policies
- Strong working knowledge of ETL, database technologies, big data, and data processing.
- years of experience developing ETL solutions using an ETL tool such as Informatica or SSIS.
- years of experience developing applications using Hadoop, Spark, Impala, Hive, and Python.
- years of experience running, using, and troubleshooting ETL in the Cloudera Hadoop ecosystem: Hadoop FS (HDFS), Hive, Impala, Spark, Kafka, Hue, Oozie, YARN, Sqoop, and Flume.
- Experience with Autosys JIL scripting.
- Proficient scripting skills in Unix shell and Perl.
- Experience troubleshooting data-related issues.
- Experience processing large amounts of structured and unstructured data with MapReduce.
- Experience in SQL and relational databases, developing data extraction applications.
- Experience with data movement and transformation technologies.
- Good to have: experience in Python / Scala programming.
- Have a good understanding of the end-to-end process of the application.
- Have a good understanding of all aspects of the application, including the upstream, database model, data processing, and data distribution layers.
Ideal candidate for this position:
Informatica, Hadoop, and Oracle skills, with good communication skills.