Overview
Cobre is Latin America's leading instant B2B payments platform. We solve the region's most complex money-movement challenges by building advanced financial infrastructure that enables companies to move money faster, more safely, and more efficiently.
We enable instant business payments—local or international, direct or via API—all from a single platform.
Built for fintechs, PSPs, banks, and finance teams that demand speed, control, and efficiency. From real-time payments to automated treasury, we turn complex financial processes into simple experiences.
Cobre is the first platform in Colombia to enable companies to pay both banked and unbanked beneficiaries within the same payment cycle and through a single interface.
We are building the enterprise payments infrastructure of Latin America!
What we are looking for
We are looking for a Data Engineer to join our elite data team at Cobre. This pivotal role sits at the intersection of data technology and financial innovation. You will architect and optimize our data infrastructure, enabling real-time analytics and insights that meet the diverse needs of our clients and business.
You will gain hands-on experience designing, optimizing, and maintaining enterprise-scale data solutions aligned with the AWS Well-Architected Framework. You will primarily work with AWS, Snowflake, and Confluent Cloud, while also staying current with emerging technologies to support migrations and new implementations. Collaborating closely with senior data engineers, you'll sharpen your technical expertise and contribute to building robust, scalable, and efficient data platforms.
As a key member of Cobre's data team, you will join a group of tech visionaries. You will collaborate with leaders across departments to align strategies and shape a unified, ambitious roadmap for Cobre's data capabilities, weaving technological and business threads into a coherent vision for the company's growth and innovation in data engineering within the fintech industry.
Responsibilities
- Data Pipelines: Implement, maintain, and optimize the infrastructure required to support the real-time, event-driven, and batch ETL/ELT processes of our platform. Ensure seamless operation of all processes and develop a comprehensive monitoring solution to proactively address potential issues (a minimal illustrative sketch follows this list).
- Data Warehouse: Maintain, monitor, and enhance our data model across the different stages of our medallion architecture, while implementing and reinforcing data-quality processes. Monitor the cost and usage of our data warehouse to identify suboptimal processes and queries for improvement. Advocate for and promote best practices across teams.
- Data Governance: Assist in defining and implementing essential data governance policies and services on our platform to ensure secure scaling in compliance with the highest standards and regulations of the financial industry.
- Technical Mastery and Oversight: Maintain an in-depth understanding of the latest trends in data models, data pipelines, and data tools.
- Cross-functional Collaboration: Work closely with product, engineering, and analytics teams to ensure that the data model supports and enhances product development and customer experience.
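To give a concrete sense of the event-driven pipeline work described above, here is a minimal, purely illustrative sketch (not Cobre's actual code): an AWS Lambda consumer that lands records from a Kinesis stream into an S3 "bronze" landing zone. The bucket name, key layout, and event field names are hypothetical.

```python
# Illustrative sketch only: Lambda consumer landing Kinesis records into S3.
# Bucket name, prefix, and partitioning scheme are hypothetical.
import base64
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
LANDING_BUCKET = "example-data-lake-bronze"  # hypothetical bucket name


def handler(event, context):
    """Decode Kinesis records and write them as newline-delimited JSON to S3."""
    rows = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        rows.append(json.loads(payload))

    if not rows:
        return {"written": 0}

    now = datetime.now(timezone.utc)
    key = f"payments/ingest_date={now:%Y-%m-%d}/batch-{now:%H%M%S%f}.jsonl"
    body = "\n".join(json.dumps(row) for row in rows)
    s3.put_object(Bucket=LANDING_BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"written": len(rows), "key": key}
```

In practice a step like this would sit behind monitoring and retry logic (for example CloudWatch alarms and dead-letter queues), which is part of the monitoring responsibility listed above.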
Qualifications
- Experience: Minimum of 2 years in data engineering, with a focus on scalable data pipelines and data models. Proven ability to handle, process, and secure large data sets.
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Data Pipelines & Data Models: Proficient in building event-driven, real-time, and batch data pipelines using Python and SQL. Skilled in designing and implementing scalable, well-structured data models within modern data warehouses.
- Cloud Infrastructure: Experienced in developing and automating data pipelines with cloud-native services. Adept at maintaining and optimizing infrastructure for scalability and cost efficiency. Cloud certifications are a strong advantage.
- Data Management: Desired knowledge of data governance processes, including the development and implementation of information access policies, data privacy protocols, information retention strategies, and more.
- Data Architecture Patterns: Desired experience designing and implementing scalable, resilient, and cost-efficient architectures for event-driven, real-time, and batch processing pipelines.
- Relevant Technologies: A wide variety of AWS services, including but not limited to DynamoDB, Elasticsearch, MWAA, Lambda, Glue, MSK, Kinesis, SQS, SNS, EventBridge, CloudWatch, and S3. Snowflake or other data warehouse experience with stored procedures, views, materialized views, external tables, streams, data models, and file formats such as Parquet and Iceberg (a short illustrative sketch follows this list). Infrastructure-as-code knowledge is nice to have, preferably Terraform and Terragrunt; GitHub and GitHub Actions; Python; SQL.
- Background in High-Volume Data Management: Desired experience handling, processing, and securing large data sets, with a keen understanding of the challenges and solutions in data-intensive environments.
- Collaborative Spirit: The ability to work seamlessly across departments, fostering a collaborative environment that encourages innovation and efficiency.
- Industry Knowledge: Fintech experience, especially in payments and LatAm markets, is a plus.
- Language: An advanced level of English is a must.
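For candidates curious about the warehouse side of the stack, the following is a small illustrative sketch (not a Cobre artifact) of a data-quality freshness check against a Snowflake table using snowflake-connector-python. The connection parameters, warehouse, and table and column names are placeholders.

```python
# Illustrative sketch: freshness check against a hypothetical Snowflake
# "silver" table, of the kind used to reinforce data-quality processes.
# All identifiers and credentials below are placeholders.
import os

import snowflake.connector

MAX_LAG_MINUTES = 15  # hypothetical freshness SLA for a near-real-time table


def check_freshness() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="SILVER",
        schema="PAYMENTS",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT DATEDIFF(minute, MAX(loaded_at), CURRENT_TIMESTAMP()) "
            "FROM transactions"
        )
        lag_minutes = cur.fetchone()[0]
        if lag_minutes is None or lag_minutes > MAX_LAG_MINUTES:
            raise RuntimeError(f"transactions table is stale: lag={lag_minutes} min")
        print(f"Freshness OK: {lag_minutes} min behind")
    finally:
        conn.close()


if __name__ == "__main__":
    check_freshness()
```

A check like this would typically be scheduled by an orchestrator (for example MWAA) and wired into alerting rather than run by hand.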