AI TalentFlow: Databricks Engineer

About AI TalentFlow:
AI TalentFlow specializes in accelerating startup growth, scaling enterprise solutions, and nurturing the career growth of the individuals who power those organizations.
Our success is intertwined with your success, and we are committed to fueling the innovative spirit of our clients with top-tier talent in AI, Data, Analytics, Product, and Agile.
The Opportunity:
Play a critical role in leveraging data to drive innovation within the financial services sector! You will collaborate with business stakeholders, data scientists, and software engineers to design, develop, and implement efficient and scalable data pipelines on Databricks.
You will also be responsible for identifying and building valuable use cases for data within the financial services domain (e.g., trading, broker-dealer, asset management, investment management). This role offers the chance to make a real impact by contributing to the development of robust and repeatable data-driven solutions that deliver tangible business value.
Responsibilities:
- Partner with business stakeholders to understand their data needs and identify high-impact use cases for data within the financial services industry.
- Design, develop, and implement efficient and scalable data pipelines using Apache Spark on Databricks (see the sketch after this list).
- Develop and maintain data pipelines for financial services domains such as trading, risk management, portfolio analysis, and regulatory compliance.
- Perform data cleaning and transformation tasks to ensure data quality and consistency.
- Build robust and reusable data pipelines that adhere to best practices and coding standards.
- Collaborate with data scientists to prepare data for machine learning models.
- Develop and integrate Python scripts and SQL queries for data manipulation and analysis.
- Leverage cloud platforms like AWS and Snowflake for data storage and integration.
- Monitor and maintain data pipelines to ensure smooth operation and identify potential issues.
- Stay up to date on the latest advancements in data engineering technologies and financial services trends.
- Document data pipelines and processes for clear communication and knowledge sharing.
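To give a concrete flavor of the day-to-day work, here is a minimal PySpark sketch of an ingest, clean, transform, and load flow on Databricks. It is illustrative only: the table names (raw.trades, curated.trades) and columns (trade_id, symbol, quantity, price, executed_at) are hypothetical placeholders, not any actual client schema.

# Minimal PySpark pipeline sketch. All table and column names below
# are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a notebook-scoped session already exists;
# getOrCreate() simply returns it there.
spark = SparkSession.builder.appName("trade-pipeline-sketch").getOrCreate()

# Ingest: read a raw Delta table of trade events.
raw = spark.read.table("raw.trades")

# Clean: drop records missing required fields, then deduplicate on trade_id.
cleaned = (
    raw.dropna(subset=["trade_id", "symbol", "executed_at"])
    .dropDuplicates(["trade_id"])
)

# Transform: normalize the timestamp, derive notional value and trade date.
curated = (
    cleaned.withColumn("executed_at", F.to_timestamp("executed_at"))
    .withColumn("notional", F.col("quantity") * F.col("price"))
    .withColumn("trade_date", F.to_date("executed_at"))
)

# Load: write the curated Delta table, partitioned by trade date.
(
    curated.write.format("delta")
    .mode("overwrite")
    .partitionBy("trade_date")
    .saveAsTable("curated.trades")
)

Where SQL is the better fit for a given transformation, the same frame could be registered as a temp view and queried with spark.sql(...), which reflects the Python-plus-SQL mix described above.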
Qualifications:
- 3+ years of experience as a Data Engineer or in a related role.
- Strong understanding of data warehousing concepts and data modeling principles.
- Extensive experience with Apache Spark and Databricks (or similar data processing frameworks).
- Proficiency in Python and SQL, with experience in data manipulation and analysis libraries.
- Experience building data pipelines for the financial services industry (e.g., trading, broker-dealer, asset management, investment management) is a strong plus.
- Familiarity with cloud platforms like AWS and Snowflake for data storage and integration.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- A passion for learning and staying up to date on the latest advancements in data engineering technologies.