Talent.com

Databricks Data Engineer

Momento USA • Austin, TX, United States

Position: Databricks Data Engineer

Duration: 9+ Month Contract

Location: Austin, Texas

A public sector client of ours in Austin, Texas, is looking for a Databricks Data Engineer for a 9-month contract that is likely to extend.

The Databricks Data Engineer is responsible for developing, maintaining, and optimizing big data solutions using the Databricks Unified Analytics Platform. This role supports key data engineering, machine learning, and analytics initiatives within an organization that relies heavily on large-scale data processing.

The worker will design and implement scalable data pipelines, build efficient ETL/ELT workflows, optimize Apache Spark jobs, and ensure seamless integration with Azure Data Factory. Additional responsibilities include automating deployments, maintaining strong data governance and security standards, and collaborating with cross-functional teams across the organization.

Key Responsibilities

  • Design and develop scalable data pipelines using Apache Spark on Databricks.
  • Implement and maintain ETL/ELT workflows for structured and unstructured data.
  • Optimize Spark jobs for performance and cost efficiency.
  • Integrate Databricks solutions with Azure Data Factory and other cloud services.
  • Design and maintain data models, schemas, and database structures to support both analytical and operational workloads.
  • Implement data validation and quality checks, and contribute to data governance initiatives including metadata management, data lineage, and cataloging.
  • Apply data security best practices (encryption, access control, auditing) and ensure compliance with relevant regulations.
  • Automate deployments using CI/CD tools and collaborate with DevOps and development teams.
  • Work in agile, cross-functional teams and coordinate with data scientists, analysts, and stakeholders to align on business and technical requirements.
  • Troubleshoot and debug performance and data issues to maintain data pipeline reliability and efficiency.

Note: Candidates who do not meet the minimum requirements will not be considered for this opportunity.

Required Experience:

  • 8 years of experience implementing ETL/ELT workflows for structured and unstructured data.
  • 8 years of experience automating deployments using CI/CD tools.
  • 8 years of experience collaborating with cross-functional teams, including data scientists, analysts, and stakeholders.
  • 8 years of experience designing and maintaining data models, schemas, and database structures for analytical and operational use cases.
  • 8 years of experience evaluating and implementing data storage solutions including Azure Data Lake Storage and data warehouses.
  • 8 years of experience implementing data validation and quality assurance processes.
  • 8 years of experience contributing to data governance efforts such as metadata management, data lineage, and cataloging.
  • 8 years of experience implementing data security measures, including encryption, access controls, and auditing.
  • 8 years of experience programming in Python and R.
  • 8 years of experience using SQL for querying and data manipulation.
  • 8 years of experience with the Azure cloud platform.
  • 8 years of experience working with DevOps, CI/CD pipelines, and version control systems.
  • 8 years of experience in agile and multicultural environments.
  • 8 years of experience troubleshooting and debugging complex systems.
  • 5 years of experience designing and developing scalable data pipelines using Apache Spark on Databricks.
  • 5 years of experience optimizing Spark jobs for performance and cost efficiency.
  • 5 years of experience integrating Databricks with Azure Data Factory.
  • 5 years of experience ensuring data quality, governance, and security using Unity Catalog or Delta Lake.
  • 5 years of experience with Apache Spark architecture, including RDDs, DataFrames, and Spark SQL.
  • 5 years of experience working with Databricks notebooks, clusters, jobs, and Delta Lake.
Preferred Qualifications:

  • 1 year of experience with machine learning libraries such as MLflow, Scikit-learn, or TensorFlow.
  • 1 year of experience holding a Databricks Certified Associate Developer for Apache Spark certification.
  • 1 year of experience holding an Azure Data Engineer Associate certification.
    Thanks,

    Majid M.

    Momento USA | Exceeding Customer Expectations

    440 Benigno Blvd, Unit#A 2nd Floor. Bellmawr, NJ 08031

    Interstate Business Park

    Direct: 856-432-2053 / Desk: 856-456-1805 x1008 / Fax: (866) 605-1171

    Email: majid@MomentoUsa.com / Web: www.MomentoUSA.com

    Minority Certified by SWAM

    One of the fastest-growing companies in NJ

    Awarded fastest growing Asian American business by Diversitybusiness.com

    E-verified Company

    Note: Momento USA is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
