Established in 2023, the ByteDance Doubao (Seed) Team is dedicated to building industry-leading AI foundation models. We aim to do world-leading research and foster both technological and social progress.

With a long-term vision and a strong commitment to the AI field, the team conducts research in a range of areas including natural language processing (NLP), computer vision (CV), and speech recognition and generation.
It has labs and researcher roles in China, Singapore, and the US. Leveraging substantial data and computing resources and through continued investment in these domains, our team has built a proprietary general-purpose model with multimodal capabilities.
In the Chinese market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and have been made available to external enterprise clients through Volcano Engine.
The Doubao app is the most used AIGC app in China.

Why Join Us
Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive.
This is doubly true of the teams that make our innovations possible. Together, we inspire creativity and enrich life - a mission we aim towards achieving every day.
To us, every challenge, no matter how ambiguous, is an opportunity; to learn, to innovate, and to grow as one team. Status quo?
Never. Courage? Always. At ByteDance, we create together and grow together. That's how we drive impact - for ourselves, our company, and the users we serve.
Join us.

Team Introduction
The AML Machine Learning Systems team provides an end-to-end (E2E) machine learning experience and machine learning resources for the company.
The team builds heterogeneous ML training and inference systems based on GPUs and AI chips and advances the state of the art of ML systems technology to accelerate models such as Stable Diffusion and LLMs.
The team is also responsible for research and development of hardware acceleration technologies for AI and cloud computing, via technologies such as distributed systems, compilers, HPC, and RDMA networking.
The team is reinventing the ML infrastructure for large-scale language models. We have published papers at top-tier conferences such as SIGCOMM, NSDI, EuroSys, OSDI, SOSP, MLSys, and NeurIPS.
We are looking for talented individuals to join us for an internship in 2025. Internships at ByteDance aim to offer students industry exposure and hands-on experience in developing fundamental skills and exploring potential career paths. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance.
A vibrant blend of social events and enriching development workshops will be available for you to explore. Here, you will utilize your knowledge in real-world scenarios while laying a strong foundation for personal and professional growth.
This Internship Program runs for 12 weeks beginning in May / June 2025. Candidates can apply to a maximum of two positions and will be considered for jobs in the order in which they apply.
The application limit applies to ByteDance and its affiliates' jobs globally. Applications will be reviewed on a rolling basis - we encourage you to apply early.
Responsibilities
- Research and develop our machine learning systems, including heterogeneous computing architecture, management, scheduling, and monitoring.
- Manage cross-layer optimization across AI algorithms, systems, and hardware for machine learning (GPU, ASIC).
- Implement both general-purpose training framework features and model-specific optimizations (LLM, diffusion models).
- Improve efficiency and stability of extremely large-scale distributed training jobs.

Qualifications
- Currently enrolled in a PhD program focused on distributed and parallel computing, with knowledge of recent advances in computing, storage, networking, and hardware technologies.
- Familiar with machine learning algorithms, platforms, and frameworks such as PyTorch and JAX.
- Basic understanding of how GPUs and/or ASICs work.
- Expert in at least one or two programming languages in a Linux environment: C/C++, CUDA, Python.
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
Preferred Qualifications
The following experience will be a big plus:
- GPU-based high-performance computing, RDMA high-performance networking (MPI, NCCL, ibverbs).
- Distributed training framework optimizations such as DeepSpeed, FSDP, Megatron, GSPMD.
- AI compiler stacks such as XLA and MLIR.
- Large-scale data processing and parallel computing.
- Experience designing and operating large-scale systems in cloud computing or machine learning.
- In-depth CUDA programming and performance tuning (CUTLASS, Triton).

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives.
Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life.
To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach.
We are passionate about this and hope you are too. ByteDance Inc. is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs, or other reasons protected by applicable laws.
If you need assistance or a reasonable accommodation,