Spark driver jobs in Santa Cruz, CA
Spark Developer
Role: Spark Developer with Scala. Location: SFO, CA (100% onsite from Day 1, no flexibility). Try to submit.. Hands-on developer with a strong understanding of Apache Spark, preferably proficient in Scala (if..
Azure Data Engineer
Big Data Engineer, Dallas, TX. Candidate must be proficient in Spark, Scala, and Python. Please send me the resumes directly so that I can shortlist and get back to you. Experience with Data..
Machine Learning Engineer
You have also built large-scale models in Spark and are able to make architecture and technical.. Deep understanding of tree-based models such as XGBoost; experience with XGBoost on Spark is a plus..
Software Engineer, Machine Learning
Deploy the services to new accounts, working closely with partners and engineers. Requirements and Qualifications: Thorough knowledge of Python, Spark, Java, and PostgreSQL. Experience working..
Data Engineer
Experience with distributed processing technologies and frameworks, such as Hadoop, Spark, Kafka, and.. Ensure data quality, consistency, and accuracy. Build scalable data pipelines (SparkSQL & Scala..
Data Engineer with OMS experience
Proven experience with cloud platforms (GCP) and big data technologies (e.g., Airflow, Spark, dbt, Databricks, BigQuery, GCP services). Hands-on experience with ETL tools and processes. Strong..
Senior Data Scientist & Engineer - W2 Role
Years of experience with big data technologies (e.g., Spark, Hadoop) and database technologies (e.g., SQL, NoSQL). Good understanding of machine learning applications. Strong programming skills..
Data Engineering Manager
techniques. Proficiency in SQL and Oracle, Python, Kafka, Apache Spark, and Hadoop. Programming skills, particularly Java, XML, JavaScript, or ETL frameworks. Detail-oriented, with excellent..
Big Data Hadoop Engineer
Strong experience working in real-time analytics like Spark, Kafka, Storm. Experience with Jenkins, JIRA. Expertise in Unix/Linux environments, writing scripts and scheduling/executing jobs. A reasonable..
Data Engineer
PySpark) and big data technologies such as Hadoop, Spark, and Hive. Extensive experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS) for data processing..
Senior Data Scientist
Utilize large scale data tools like Spark and Hive. Perform analyses to guide product, engineering, and operations, and clearly communicate results to stakeholders. Develop and apply machine..
Sr Scala/Java Software Developer
Experience with real-time processing technologies like Spark, Storm, Flink. Experience with Hadoop, Hive.. Technologies such as Kafka, Spark on Hadoop, Hive, S3, AWS Lambda, Terraform, Kubernetes, etc. Participate..
Lead Data Architect
Technical Skills: Proficiency in data modeling, database design, and data warehousing technologies (e.g., SQL, NoSQL, Hadoop, Spark). Experience with cloud data platforms (e.g., AWS, Azure, Google Cloud..
Data Analyst
AWS S3 strongly preferred. Jira and Excel for data analysis. Knowledge of data mapping and models. Table extraction. Nice to have: Spark experience, other data..
Driver
If you have previous employment experience in transportation (such as a delivery driver, driver.. We also welcome drivers who have worked with other peer-to-peer ridesharing or driving networks. Drivers..
Tech Lead - Machine Learning Engineering
Python, TensorFlow, SQL, Spark, Docker, GCP / Azure / AWS. THE BENEFITS: As a Tech Lead, you can expect a base salary between $230,000 and $275,000 (based on experience) plus competitive benefits..
CDL A DRIVERS 1500 Weekly Recent Graduates Welcome
Class A drivers (including recent graduates who need training) run regionally. Drivers are home every.. 500. Pay is based on experience and ranges from $0.43 to $0.61 cpm. Drivers cover about 1,800 miles/week. 100..
Driver
Amazon needs drivers ASAP! Drive an Amazon-branded vehicle delivering packages to your community. Work 4.. Delivery Driver Partners must have a valid driver's license and minimum auto insurance, and complete a..
Software Engineer - Pretraining Data
Design and implement multimodal web crawlers for large-scale data collection. Develop and maintain large-scale data processing pipelines using tools like Ray, Apache Spark, and Google BigQuery..