Advanced SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
Experience with cloud services such as AWS EMR, EC2, and EKS, as well as Jupyter notebooks.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytical skills for working with unstructured datasets.
Experience building processes that support data transformation, data structures, metadata management, and dependency and workload management.
A successful history of manipulating, processing, and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
Strong project management and organizational skills. Knowledge of agile methods is a plus.
Experience supporting and working with cross-functional teams in a dynamic environment.
Candidates should also have experience using the following software and tools:
Strong experience with cloud services and data platforms: AWS EC2, EMR, EKS, Snowflake, and Elasticsearch.
Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc.
Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with DevOps, data pipeline, and workflow management tools: Concourse, Terraform, Luigi, Airflow, etc.
Bachelor’s degree in computer science or a related field with at least 10 years of relevant experience is desired.