Data Engineer • Charleston, South Carolina, United States
A company is looking for a Data Engineer with expertise in AWS data services, Spark, Kafka, and Python.
Key Responsibilities
Develop and maintain batch and streaming data pipelines using AWS, Spark, and Kafka (a minimal sketch of such a pipeline follows this list)
Implement Medallion Architecture layers (bronze, silver, gold) to structure and transform data
Support real-time data processing for trading and market data
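To give a concrete sense of the day-to-day work, here is a minimal PySpark Structured Streaming sketch of the kind of Kafka-to-bronze ingest these responsibilities describe. The broker address, topic name, and S3 paths are hypothetical placeholders, not details from the posting, and the job itself would need the Spark Kafka connector package on the classpath.

```python
# Minimal sketch: stream market-data events from Kafka into an S3 "bronze"
# layer with PySpark Structured Streaming. Broker, topic, and bucket names
# below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("kafka-to-bronze")
    .getOrCreate()
)

# Read the raw event stream from Kafka; key and value arrive as bytes.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "market-data")                 # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Bronze layer: persist events as-is (cast to string) plus Kafka metadata,
# deferring parsing and cleansing to the silver layer.
bronze = raw.selectExpr(
    "CAST(key AS STRING) AS key",
    "CAST(value AS STRING) AS value",
    "topic", "partition", "offset", "timestamp",
)

# Append raw events to Parquet on S3, with a checkpoint for exactly-once
# recovery on restart.
query = (
    bronze.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/bronze/market_data/")  # hypothetical
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/market_data/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```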
Required Qualifications
Hands-on experience with AWS (S3, Glue, Redshift, EMR, Kinesis)
Strong knowledge of Apache Spark, Kafka, and Python
Familiarity with Parquet, Iceberg, and Medallion Architecture
Understanding of financial data flows and risk/compliance reporting
3-6 years of experience in Data Engineering, preferably in Finance/Capital Markets