This position is no longer accepting applications.

Data Engineer (Washington)

Givzey · Washington, DC, United States
Job type: Full-time

Data Engineer

We're looking for a Data Engineer to architect and scale the data backbone that powers our AI-driven donor engagement platform. You'll design and own modern, cloud-native data pipelines and infrastructure that deliver clean, trusted, and timely data to our ML and product teams - fueling innovation that revolutionizes the nonprofit industry.

About Givzey:

Givzey is a Boston-based, rapidly growing digital fundraising solutions company, built by fundraisers for nonprofit organizations.

Join a fast-growing, mission-driven team working across two innovative platforms: Givzey, the first donor commitment management platform revolutionizing nonprofit fundraising, and Version2.ai, a cutting-edge AI platform helping individuals and organizations create their most authentic, effective digital presence. As an engineer at the intersection of philanthropy and artificial intelligence, you'll build scalable, high-impact solutions that empower nonprofit fundraisers and redefine how people tell their stories online. We're a collaborative, agile team that values curiosity, autonomy, and purpose. Whether you're refining AI-driven experiences or architecting tools for the future of giving, your work will help shape meaningful technology that makes a difference.

Responsibilities:

  • Design & build data pipelines (batch and real-time) that ingest, transform, and deliver high-quality data from diverse internal and third-party sources
  • Develop and maintain scalable data infrastructure (data lakes, warehouses, and lakehouses) in AWS, ensuring performance, reliability, and cost-efficiency
  • Model data for analytics & ML: create well-governed schemas, dimensional models, and feature stores that power dashboards, experimentation, and ML applications
  • Implement data quality & observability frameworks: automated testing, lineage tracking, data validation, and alerting
  • Collaborate cross-functionally with ML engineers, backend engineers, and product teams to integrate data solutions into production systems
  • Automate infrastructure using IaC and CI/CD best practices for repeatable, auditable deployments
  • Stay current with emerging data technologies and advocate for continuous improvement across tooling, security, and best practices

Requirements:

  • US Citizenship
  • Bachelor's or Master's in Computer Science, Data Engineering, or a related field
  • 2+ years of hands-on experience building and maintaining modern data pipelines using Python-based ETL/ELT frameworks
  • Strong Python skills, including deep familiarity with pandas and comfort writing production-grade code for data transformation
  • Fluent in SQL, with a practical understanding of data modeling, query optimization, and warehouse performance trade-offs
  • Experience orchestrating data workflows using modern orchestration frameworks (e.g., Dagster, Airflow, or Prefect)
  • Cloud proficiency (AWS preferred): S3, Glue, Redshift or Snowflake, Lambda, Step Functions, or similar services on other clouds
  • Proven track record of building performant ETL/ELT pipelines from scratch and optimizing them for cost and scalability
  • Experience with distributed computing and containerized environments (Docker, ECS/EKS)
  • Solid data modeling and database design skills across SQL and NoSQL systems
  • Strong communication & collaboration abilities within cross-functional, agile teams
Nice-to-Haves:

  • Dagster experience for orchestrating complex, modular data pipelines
  • Pulumi experience for cloud infrastructure-as-code and automated deployments
  • Hands-on with dbt for analytics engineering and transformation-in-warehouse
  • Familiarity with modern data ingestion tools like dlt, Sling, Fivetran, Airbyte, or Stitch
  • Apache Spark experience, especially useful for working with large-scale batch data or bridging into heavier data science workflows
  • Exposure to real-time/event-driven architectures, including Kafka, Kinesis, or similar stream-processing tools
  • AWS data & analytics certifications (e.g., AWS Certified Data Analytics - Specialty)
  • Exposure to serverless data stacks and cost-optimization strategies
  • Knowledge of data privacy and security best practices (GDPR, SOC 2, HIPAA, etc.)
What You'll Do Day-to-Day:

  • Be part of a world-class team focused on inventing solutions that can transform philanthropy
  • Build & refine data pipelines that feed our Sense (AI) and Go (engagement) layers, ensuring tight feedback loops for continuous learning
  • Own the full stack of data work - from ingestion to transformation to serving - contributing daily to our codebase and infrastructure
  • Partner closely with customers, founders, and teammates to understand data pain points, prototype solutions, iterate rapidly, and deploy to production on regular cycles
  • Help craft a beautiful, intuitive product that delights nonprofits and elevates donor impact