Job Description
Responsibilities
- Write ETL packages and database code based on business requirements or user stories, architectural requirements, and existing code.
- Assemble large, complex sets of data using AWS and SQL technologies.
- Modify and improve data engineering processes to handle ever-larger, more complex, and more varied data sources and pipelines.
- Work as part of the data processing team on performance tuning and optimization: query optimization, index tuning, caching, buffer tuning, and data archiving strategies.
- Estimate and plan development work, track and report on task progress, and deliver work on schedule.
- Ensure all deliverables are high quality by setting development standards, adhering to those standards, and participating in code reviews.
- Mentor, support, and manage junior team members.
- Keep up to speed on new and emerging technologies and products that will be of interest to our clients.
Qualifications
- Bachelor's degree in information technology, computer science, or a related field.
- 5+ years of advanced SQL coding, tuning, and query optimization, especially within cloud-based data warehouses such as Amazon Redshift.
- At least two years of experience with Python.
- Solid experience with various data processing tools such as Databricks, Alteryx, and AWS Glue.
- Excellent communication skills. Must have an advanced proficiency level in English. Comfortable presenting work by phone or to small groups.
- Excellent customer service skills. Ability to serve clients, partners, and internal stakeholders by understanding their needs, translating those needs into creative solutions, and delivering those solutions with diligence and a sense of urgency.
- Self-motivated self-starter with a strong ability to manage multiple projects and tasks effectively.
- Digital media experience, particularly in a data-centric role in the ad tech industry, is a plus.