Job Description
Title : Data Engineer II / III
Location : Chicago, IL
Salary : $100,000 - $150,000
As a Data Engineer II / III, you will play a key role in designing, building, and maintaining the company’s modern data platform. You’ll own complex data pipelines and integrations that support strategic decision-making and business operations. As a mid-level engineer, you’ll collaborate closely with product, analytics, and engineering teams to improve data quality, performance, and accessibility. You’ll also contribute to architectural decisions, mentor junior engineers, and help raise the bar for data engineering across the organization.
This position is ideal for someone who has already built robust pipelines, thrives on solving data challenges at scale, and wants to deepen their impact in a growing, mission-driven company.
This role reports to the Executive Director, Technical Strategy and Operations, and is located in Chicago. It offers a hybrid work environment with a minimum of 3 days per week required in the office, plus additional days as business needs arise.
Responsibilities :
- Design and implement scalable, maintainable ETL / ELT pipelines for a variety of use cases (analytics, operations, product enablement)
- Build and optimize integrations with cloud services, databases, APIs, and third-party platforms
- Own production data workflows end-to-end, including testing, deployment, monitoring, and troubleshooting
- Collaborate with cross-functional stakeholders to understand business needs and translate them into technical data solutions
- Lead technical discussions and participate in architecture reviews to shape our evolving data platform
- Write clean, well-documented, production-grade code in Python and SQL
- Improve data model design and data warehouse performance (e.g., partitioning, indexing, denormalization strategies)
- Champion best practices around testing, observability, CI / CD, and data governance
- Mentor junior team members and contribute to peer code reviews
Qualifications :
- 3+ years of experience in a data engineering or software engineering role, with a strong track record of delivering robust data solutions
- Proficiency in Python and advanced SQL for complex data transformations and performance tuning
- Experience building and maintaining production pipelines using tools like Airflow, dbt, or similar workflow / orchestration tools
- Strong understanding of cloud-based data infrastructure (e.g., AWS, GCP, or Azure)
- Knowledge of data modeling techniques and data warehouse design (e.g., star / snowflake schemas)
- Experience working with structured and semi-structured data from APIs, SaaS tools, and databases
- Familiarity with version control (Git), CI / CD, and Agile development methodologies
- Strong communication and collaboration skills
Preferred :
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related technical field
- Experience with modern data warehouses like Redshift, BigQuery, or Snowflake
- Exposure to modern DevOps / DataOps practices (e.g., Terraform, Docker, dbt Cloud)
- Experience integrating with Salesforce or other CRM / marketing platforms
- Knowledge of data privacy and compliance considerations (e.g., FERPA, GDPR)
Benefits :
- Hybrid work arrangement
- Paid parental leave
- Medical, dental, and vision insurance
- Flexible Spending Account (FSA) and Health Savings Account (HSA)
- Employer-paid short-term disability insurance; optional long-term disability insurance
- 401(k) with immediate employer match vesting
- Generous PTO plan with accrual increasing by tenure
- Tuition reimbursement program
- Discounted onsite gym access
- Optional pet insurance
- Additional perks and benefits