Data Engineer – Contract to Hire role

Cloudious LLC • United States

Data Engineer – Remote, Contract to Hire

Overview:

We are looking for a highly skilled Data Engineer with deep expertise in Databricks and a strong understanding of AI/ML workflows. This role is central to building and optimizing scalable data platforms that support advanced analytics and machine learning initiatives. You will work closely with data engineers, data scientists, ML engineers, and business stakeholders to enable intelligent, data-driven solutions.

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks (a brief sketch follows this list).
  • Build and manage Delta Lake architectures for efficient data storage and retrieval.
  • Implement robust ETL/ELT workflows using Databricks notebooks, SQL, and Python.
  • Collaborate with AI/ML teams to operationalize models within the Databricks environment.
  • Optimize data workflows for performance, reliability, and cost-efficiency on cloud platforms (AWS, Azure, or GCP).
  • Ensure data quality, lineage, and governance using tools such as Unity Catalog and MLflow.
  • Develop CI/CD pipelines for data and ML workflows using Databricks Repos and Git integrations.
  • Monitor and troubleshoot production data pipelines and model deployments.
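
To make the pipeline responsibilities concrete, here is a minimal sketch of a batch job on Databricks that reads raw files and writes a Delta table. The paths, column names, and table name are hypothetical placeholders for illustration, not part of this role's actual environment.

    # Minimal sketch: raw JSON -> cleaned Delta table (paths/names are hypothetical).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()  # preconfigured in Databricks notebooks

    # Extract: read raw landing-zone files
    raw = spark.read.json("/mnt/landing/events/")

    # Transform: deduplicate and enforce a usable timestamp
    cleaned = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .filter(F.col("event_ts").isNotNull())
    )

    # Load: persist as a Delta table for efficient storage and retrieval
    cleaned.write.format("delta").mode("overwrite").saveAsTable("analytics.events_clean")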

Primary Skills / Experience:

  • Strong hands-on experience with Databricks, including Spark, Delta Lake, and MLflow.
  • Proficiency in Python, SQL, and distributed data processing.
  • Experience with cloud-native data services (e.g., AWS Glue, Azure Data Factory, GCP Dataflow).
  • Familiarity with machine learning lifecycle and integration of models into data pipelines.
  • Understanding of data warehousing, data lakehouse architecture, and real-time streaming (Kafka, Spark Structured Streaming).
  • Experience with version control, CI/CD, and infrastructure-as-code tools.
  • Excellent communication and collaboration skills.

Secondary Skills / Experience:

  • Certifications in Databricks (e.g., Databricks Certified Data Engineer Associate or Professional).
  • Experience with feature engineering and feature stores in Databricks.
  • Exposure to MLOps practices and tools.
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
  • Experience leveraging Databricks for scalable AI and BI solutions, integrating well-known large language models (Anthropic, LLaMA, Gemini) to enhance data-driven insights, and developing agentic AI workflows to automate complex decision-making.

Tech Stack:

  Core Tools
  • Databricks (Spark, Delta Lake, MLflow, Notebooks)
  • Python & SQL
  • Apache Spark (via Databricks)
  • Delta Lake (for lakehouse architecture)

  Cloud Platforms
  • Azure, AWS, or GCP
  • Cloud storage (ADLS, S3, GCS)

  Data Integration
  • Kafka or Event Hubs (streaming)
  • Auto Loader (Databricks file ingestion; a brief sketch follows this list)
  • REST APIs

  AI/ML
  • MLflow (model tracking/deployment)
  • Hugging Face Transformers
  • LangChain/LlamaIndex (LLM integration)
  • LLMs: Anthropic Claude, Meta LLaMA, Google Gemini

  DevOps
  • Git (GitHub, GitLab, Azure Repos)
  • Databricks Repos
  • CI/CD: GitHub Actions, Azure DevOps

  Security & Governance
  • Unity Catalog
  • RBAC
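
As a concrete reference for the ingestion tooling above, here is a minimal Auto Loader sketch that incrementally loads newly arrived files into a Delta table. The paths, checkpoint locations, and table name are hypothetical, and the spark session is the one Databricks notebooks provide.

    # Minimal Auto Loader sketch: incremental JSON ingestion into a Delta table.
    # All paths and the table name are hypothetical placeholders.
    stream = (
        spark.readStream.format("cloudFiles")          # Auto Loader source
             .option("cloudFiles.format", "json")
             .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events_schema")
             .load("/mnt/landing/events/")
    )

    (stream.writeStream
           .option("checkpointLocation", "/mnt/checkpoints/events")
           .trigger(availableNow=True)                 # process available files, then stop
           .toTable("analytics.events_bronze"))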

Education:

  Bachelor's degree in Computer Science or a similar field
