Location: USA – Santa Monica / Los Angeles / New Jersey / Seattle, WA
Work Type: Full-Time (Hybrid – 4 Days On-site)
Experience Required: 5–14 Years in Data Engineering
Compensation: USD 100,000 – 140,000 Annual
Eligibility: US Citizen or Green Card Holder
Job Overview
A global broadcasting and entertainment technology leader is seeking a Senior Data Engineer to design and implement data solutions supporting key business KPIs and analytics. The role involves building scalable data pipelines, optimizing data models, and automating quality checks in a cloud-based ecosystem. You'll collaborate with business, analytics, and infrastructure teams to deliver high-performance, reliable, and data-driven systems across the enterprise.
Key Responsibilities
- Partner with cross-functional teams to define, design, and implement end-to-end data pipelines.
- Architect scalable data solutions leveraging Snowflake, Spark, Databricks, and cloud services (AWS / Azure / GCP).
- Build and maintain robust ETL workflows using orchestration tools such as Airflow or Prefect.
- Develop automated data quality checks and ensure system reliability.
- Design and manage data models using dimensional modeling and normalization principles.
- Optimize SQL queries and ETL performance for analytical workloads.
- Implement CI/CD for data pipelines and database deployments using schemachange and related tools.
- Conduct ad hoc analyses and ensure high-quality data for reporting and analytics.
- Collaborate with business stakeholders to translate data into actionable insights.
Must-Have Requirements
- 5+ years of experience in Data Engineering with strong hands-on technical expertise.
- Advanced proficiency in SQL, data modeling (dimensional, normalized), and performance tuning.
- Experience with Spark, Snowflake, Python, and orchestration tools (Airflow or Prefect).
- Exposure to at least one cloud platform (AWS, Azure, or GCP).
- Strong analytical reasoning and strategic problem-solving abilities.
- Excellent communication skills and experience working in agile, cross-functional teams.
Nice-to-Have Skills
- Experience with Kafka and data integration tools such as Datorama, Improvado, or Fivetran.
- Familiarity with CI/CD pipelines (Jenkins, GitHub Actions) and containerization using Docker.
- Knowledge of monitoring tools such as Datadog.
- Exposure to AI or data-driven automation frameworks.
Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience in the Media or Entertainment industry or similar high-volume data environments.
- Strong understanding of market and consumer data analytics.
Seniority level
Mid-Senior level
Employment type
Full-time
Job function
Information Technology
Industries: Software Development