Team Introduction
The TikTok Data Ecosystem Team designs and implements the offline data storage solution for TikTok's recommendation system, which serves more than a billion users.
Their primary objectives are to guarantee system reliability, uninterrupted service, and seamless performance. They aim to build a storage and computing infrastructure that adapts to the various data sources within the recommendation system and accommodates diverse storage needs.
Their ultimate goal is to deliver efficient, affordable data storage with easy-to-use data management tools for recommendation, search, and advertising.
We are looking for talented individuals to join our team in 2025. As a graduate, you will get unparalleled opportunities to kickstart your career, pursue bold ideas, and explore limitless growth.
Co-create a future driven by your inspiration with TikTok. Successful candidates must be able to commit to an onboarding date by the end of 2025.
Applications will be reviewed on a rolling basis, so we encourage you to apply as early as possible. Candidates can apply to a maximum of two positions and will be considered in the order they apply. The application limit applies to jobs at TikTok and its affiliates globally.
Online Assessment
Candidates who pass the resume evaluation will be invited to take TikTok's technical online assessment on HackerRank.
Responsibilities:
1. Design and implement an offline/real-time data architecture for large-scale recommendation systems.
2. Design and implement a flexible, scalable, stable, and high-performance storage system and computation model.
3. Troubleshoot production systems, and design and implement the mechanisms and tools needed to ensure overall production stability.
4. Build industry-leading distributed systems such as offline and online storage and batch and stream processing frameworks, providing reliable infrastructure for massive data and large-scale business systems.
Minimum Qualifications:
- Bachelor's degree or above in Computer Science or a related field, with 3+ years of experience building scalable systems;
- Proficiency in common big data processing systems such as Spark/Flink at the source code level is required, with a preference for experience customizing or extending these systems;
- A deep understanding of the source code of at least one data lake technology, such as Hudi, Iceberg, or Delta Lake, is highly valuable and should be prominently showcased in your resume, especially if you have practical implementation or customization experience;
- Knowledge of HDFS principles is expected, and familiarity with columnar storage formats such as Parquet/ORC is an additional advantage;
- Prior experience in data warehouse modeling;
- Proficiency in programming languages such as Java, C++, and Scala is essential, along with strong coding and troubleshooting skills;
- Experience with other big data systems/frameworks such as Hive, HBase, or Kudu is a plus;
- A willingness to tackle challenging problems without clear solutions, strong enthusiasm for learning new technologies, and prior experience managing large-scale data (in the petabyte range) are all advantageous.