
Data Engineer

Credinvestments • Brazil, IN, United States
Full time

Data Engineer (Pipelines, ETL, BigQuery, Python)

Location: Hybrid in San Francisco or London; open to exceptional candidates worldwide

Team: Engineering

Experience: 5+ years

Compensation: Competitive salary + Metrics-Based Bonus + Equity

Other perks: Unlimited holiday


Power the Data Infrastructure Behind AI-Driven Decision Making

CRED is building the AI Native Command Center for modern businesses: a platform that centralizes internal and external data and transforms it into actionable intelligence. From powering sales agents to automating strategic decisions, we're building the future of how companies operate.

Backed by a $10M+ seed round and trusted by partners like the PGA and Golden State Warriors, we're scaling fast. Our platform processes massive datasets across customers, companies, and behavioral signals, and we're looking for an experienced Data Engineer to help us build the pipelines and transformations that make our intelligence engine run.

What You'll Do

Design, build, and maintain robust data pipelines for ingesting, cleaning, and transforming structured and semi-structured data

Own and scale ETL / ELT processes using tools like dbt, BigQuery, and Python

Build modular data models that power analytics, product features, and LLM agents

Integrate with external data providers and internal sources to create unified, enriched datasets

Collaborate across AI, product, and engineering to define data contracts and optimize pipelines for downstream use

Monitor pipeline performance, implement observability tools, and ensure data quality and integrity

Operate within fast-paced weekly sprints, delivering real value to end users quickly and iteratively

How We Work

We run on weekly sprints, fast feedback loops, and a bias for shipping. You'll have autonomy and ownership from day one, working cross-functionally to design systems that scale and adapt with our platform and customer needs.

What We're Looking For

5+ years of experience as a data engineer, working on ETL / ELT and large-scale data systems

Strong experience with SQL, BigQuery, dbt, and Python

Deep understanding of data modeling, warehousing, and transformation best practices

Experience building data infrastructure from scratch in cloud-native environments (GCP, AWS, etc.)

Familiarity with modern orchestration tools (Airflow, Dagster, Prefect, etc.)

Strong collaboration skills: you can align stakeholders across product, AI, and engineering

Clear communicator and fast problem solver, especially in ambiguous, fast-moving settings

Bonus Points

Experience working with data for AI / ML systems, LLMs, or agent-based tools

Built or scaled internal data platforms or customer data products

Exposure to event-driven architectures or real-time data ingestion

Passion for clean, testable code and reproducible workflows

Previous startup or early-stage experience

Why Join CRED?

At CRED, you'll be part of a small, high-performance team building foundational systems that power the next generation of AI-native business tools. You'll work with massive datasets, cutting-edge tools, and people who care deeply about speed, impact, and integrity. If you're excited to turn raw data into insight, automation, and intelligence, this role is for you.

Apply now or reach out to careers@ with any questions.
