Big Data Engineer
Exciting opportunity to work with an enterprise organization seeking an experienced, hands-on Lead Data Engineer.
You will support this company's first-ever move to the cloud. Excellent room for growth!
RESPONSIBILITIES:
- Sharing project solutions and outcomes with colleagues to improve delivery on future projects.
- Analyzing and translating business needs into long-term solution data pipelines.
- Evaluating existing data systems.
- Working with the development team to create conceptual data flows.
- Developing best practices for data coding to ensure consistency within the system.
- Reviewing modifications of existing systems for cross-compatibility.
- Implementing data strategies and developing data integration points.
- Evaluating implemented data systems for variances, discrepancies, and efficiency.
- Troubleshooting and optimizing data systems.
- Interpreting and delivering impactful strategic plans for improving data integration, data quality, and data delivery in support of business initiatives and roadmaps.
- Formulating and articulating architectural trade-offs across solution options before recommending an optimal solution ensuring technical requirements are met.
- Motivating and developing staff through teaching, empowering, and influencing technical and consulting soft skills.
- Driving innovative technology solutions through leadership on emerging trends.
This is a 12-month contract opportunity that will auto-renew. The position is 100% remote; you must be willing to work EST hours.
Excellent opportunity for a long term contract!
Visionaire Partners offers all full-time W2 contractors a comprehensive benefits package for the contractor, their spouses/domestic partners, and dependents.
Options include a 401k with up to 4% match, medical, dental, vision, life insurance, short- and long-term disability, critical illness, hospital indemnity, accident coverage, and Medical and Dependent Care Flexible Spending Accounts.
REQUIRED SKILLS:
- 2+ years of hands-on experience with Python or Scala development (i.e., PySpark / Scala Spark)
- Must have hands-on experience with Databricks
- 3+ years of hands-on experience with high-velocity, high-volume stream processing using Apache Kafka and Spark Streaming (version 3.0, 2.1, 3.2, or 3.3), including real-time data processing and streaming techniques using Spark Structured Streaming and Kafka, and deep knowledge of troubleshooting and tuning Spark applications
- 3+ years of hands-on experience building, testing, and optimizing big data ingestion pipelines, architectures, and data sets
- 3+ years of experience successfully building and deploying a new cloud data platform on Azure, AWS, or GCP
- 3+ years of experience with database solutions such as Kudu / Impala, Delta Lake, Snowflake, or BigQuery
- Experience with AWS serverless technologies such as S3, Kinesis / MSK, Lambda, and Glue
- 3+ years of experience with data ingestion from message queues (TIBCO, IBM, etc.) and from file formats such as JSON, XML, and CSV across different platforms
- Experience with the Databricks UI, managing Databricks notebooks, Delta Lake with Python, Delta Lake with Spark SQL, Delta Live Tables, and Unity Catalog
- 2+ years of experience with NoSQL databases, including HBase and/or Cassandra
- Knowledge of the Unix/Linux platform and shell scripting is a must
- 3+ years of experience with cloud platforms (e.g., AWS, GCP)
PREFERRED SKILLS:
- Strong SQL skills with the ability to write queries of intermediate complexity
- Strong understanding of relational and dimensional modeling
- Experience with Git version control software
- Experience with REST APIs and web services
- Strong business analysis and requirements gathering/writing skills
Must be authorized to work in the U.S.; sponsorship is not available.