Data Engineer (Remote, Contract to Hire)
Overview :
We are looking for a highly skilled Data Engineer with deep expertise in Databricks and a strong understanding of AI / ML workflows. This role is central to building and optimizing scalable data platforms that support advanced analytics and machine learning initiatives. You will work closely with data engineers, data scientists, ML engineers, and business stakeholders to enable intelligent, data-driven solutions.
Key Responsibilities :
- Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks (a sketch of such a pipeline follows this list).
- Build and manage Delta Lake architectures for efficient data storage and retrieval.
- Implement robust ETL / ELT workflows using Databricks notebooks, SQL, and Python.
- Collaborate with AI / ML teams to operationalize models within the Databricks environment.
- Optimize data workflows for performance, reliability, and cost-efficiency in cloud platforms (AWS, Azure, or GCP).
- Ensure data quality, lineage, and governance using tools like Unity Catalog and MLflow.
- Develop CI / CD pipelines for data and ML workflows using Databricks Repos and Git integrations.
- Monitor and troubleshoot production data pipelines and model deployments.
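To make the first few responsibilities concrete, here is a minimal sketch of an ingestion pipeline on Databricks: Auto Loader incrementally picks up raw JSON files and appends them to a bronze Delta table. The bucket paths and table name are hypothetical placeholders, and the snippet assumes a Databricks runtime (where `spark` already exists; the builder call just keeps the sketch self-contained).

```python
# Minimal sketch: Auto Loader ingestion into a bronze Delta table.
# All paths and the table name are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # predefined in Databricks notebooks

RAW_PATH = "s3://example-bucket/raw/events/"                  # hypothetical source
CHECKPOINT = "s3://example-bucket/_checkpoints/bronze_events" # hypothetical path
BRONZE_TABLE = "main.analytics.bronze_events"                 # hypothetical table

# Auto Loader (the cloudFiles source) discovers new files incrementally.
bronze_stream = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", CHECKPOINT)  # schema inference state
    .load(RAW_PATH)
    .withColumn("ingested_at", F.current_timestamp())
)

# The checkpoint makes the stream restartable with exactly-once file tracking.
(
    bronze_stream.writeStream
    .format("delta")
    .option("checkpointLocation", CHECKPOINT)
    .trigger(availableNow=True)  # process all available files, then stop
    .toTable(BRONZE_TABLE)
)
```

Using `trigger(availableNow=True)` lets the same code run as a scheduled batch job while keeping Auto Loader's incremental file tracking.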
Primary Skill / Experience :
- Strong hands-on experience with Databricks, including Spark, Delta Lake, and MLflow.
- Proficiency in Python, SQL, and distributed data processing.
- Experience with cloud-native data services (e.g., AWS Glue, Azure Data Factory, GCP Dataflow).
- Familiarity with the machine learning lifecycle and with integrating models into data pipelines.
- Understanding of data warehousing, data lakehouse architecture, and real-time streaming with Kafka and Spark Structured Streaming (a streaming sketch follows this list).
- Experience with version control, CI / CD, and infrastructure-as-code tools.
- Excellent communication and collaboration skills.
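As a hedged illustration of the streaming skills above, the sketch below consumes a Kafka topic with Spark Structured Streaming and appends parsed events to a Delta table. The broker address, topic, event schema, checkpoint path, and table name are all assumptions made up for the example; on Databricks the Kafka connector is available out of the box.

```python
# Minimal sketch: Kafka -> Structured Streaming -> Delta table.
# Broker, topic, schema, and table name are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "payments")                    # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers raw bytes; cast the value and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

(
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/payments")  # hypothetical
    .outputMode("append")
    .toTable("main.analytics.payments_stream")                  # hypothetical
)
```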
Secondary Skill / Experience :
- Certifications in Databricks (e.g., Databricks Certified Data Engineer Associate / Professional).
- Experience with feature engineering and feature stores in Databricks.
- Exposure to MLOps practices and tools.
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience using Databricks for scalable AI and BI solutions, integrating large language models (Anthropic Claude, Meta LLaMA, Google Gemini) to enhance data-driven insights, and building agentic AI workflows that automate complex decision-making (a sketch of a simple LLM integration follows this list).
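Purely as an illustration of the LLM-integration experience mentioned above, the sketch below sends a small aggregated query result to Anthropic Claude for a plain-language summary via Anthropic's Python SDK. The model id, prompt, and CSV data are illustrative assumptions; the client reads ANTHROPIC_API_KEY from the environment.

```python
# Minimal sketch: summarize a small aggregate with an LLM.
# The model id and the CSV payload are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

daily_metrics = "date,orders,revenue\n2024-01-01,120,8400\n2024-01-02,95,6650"

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model id; pin whichever you use
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": f"Summarize the notable trends in this CSV for a business audience:\n{daily_metrics}",
    }],
)

print(message.content[0].text)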
Tech Stack :
- Core Tools : Databricks (Spark, Delta Lake, MLflow, Notebooks), Python & SQL, Apache Spark (via Databricks), Delta Lake (for lakehouse architecture)
- Cloud Platforms : Azure, AWS, or GCP; cloud storage (ADLS, S3, GCS)
- Data Integration : Kafka or Event Hubs (streaming), Auto Loader (Databricks file ingestion), REST APIs
- AI / ML : MLflow (model tracking / deployment; a tracking sketch follows this list), Hugging Face Transformers, LangChain / LlamaIndex (LLM integration), LLMs (Anthropic Claude, Meta LLaMA, Google Gemini)
- DevOps : Git (GitHub, GitLab, Azure Repos), Databricks Repos, CI / CD (GitHub Actions, Azure DevOps)
- Security & Governance : Unity Catalog, RBAC
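To illustrate the MLflow entry in the stack, here is a minimal tracking sketch: train a small scikit-learn model and log its parameter, metric, and model artifact so the run appears in the Databricks experiment UI. The experiment path and the synthetic dataset are hypothetical placeholders.

```python
# Minimal sketch: MLflow experiment tracking with a scikit-learn model.
# The experiment path and dataset are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical workspace path

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Everything logged here shows up in the experiment UI for comparison.
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```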
Education :
Bachelor's degree in Computer Science or a similar field.