Job Description
The OCI (Oracle Cloud Infrastructure) AI Infrastructure Innovation team is pioneering next-generation AI/HPC networking for GPU superclusters at massive scale. Our mission is to design and deliver state-of-the-art RDMA-based networking, spanning frontend and backend fabrics, that enables customers to achieve high performance for AI training and inference. You will define architecture, lead complex system design, and implement innovative networking software that advances RDMA for GPUs and accelerates storage access. If you thrive at the intersection of large-scale distributed systems, high-speed networking, and AI workloads, this role offers the opportunity to push the boundaries of what's possible.
Responsibilities
Lead architecture, system design, and implementation for high-performance RDMA solutions across OCI's AI/HPC platforms, including frontend and backend fabrics.
Innovate on network and TCP performance, identifying the changes required across the kernel, NIC, switch, transport, protocol, storage, and GPU communication layers.
Develop production-grade, high-performance software features with rigorous reliability, observability, and security.
Define performance goals and success metrics; design benchmarks and conduct large-scale experiments to validate throughput, latency, and tail behavior.
Collaborate with GPU platform, storage, database, and control-plane teams to deliver end-to-end solutions and influence OCI-wide network architecture and standards.
Mentor engineers, provide technical leadership and reviews, and contribute to the long-term roadmap and technical strategy.
Qualifications
Required
Deep experience with RDMA networking (RoCE and/or InfiniBand), including congestion control, reliability, and performance tuning at scale.
6-8 years of software engineering experience delivering high-performance features in large distributed systems.
Expertise with networking protocols and systems: TCP/IP, IPv4/IPv6, DNS, DHCP.
Knowledge of L2/L3 and data center networking: MPLS, BGP/OSPF/IS-IS; experience with VXLAN and EVPN is a plus.
High-speed packet processing and/or HPC networking experience.
Strong understanding of data structures and algorithms with demonstrated ability to optimize for high scale, low latency, and high throughput.
Experience with storage technologies relevant to high-performance environments, such as NVMe/NVMe-oF, block storage, journaling, IO path optimization, and performance troubleshooting across compute, network, and storage.
Demonstrated ability to lead technically, mentor others, and deliver results in ambiguous, complex problem spaces.
BS/MS in Computer Science, Electrical/Computer Engineering, or equivalent practical experience.
Preferred
Familiarity with AI/HPC stacks and workloads: NCCL/RCCL/MPI, Slurm or other schedulers, GPU communication patterns, collective operations, and large-scale training jobs.
Experience integrating GPUDirect RDMA and remote NVMe access in production.
Hands-on with observability and performance tooling (e.g., eBPF, perf, flame graphs, switch/NIC telemetry) and SLO-driven operations at scale.
Bachelor's degree in Computer Science preferred and at least 8 years of related experience.
Experience working in a large ISP or cloud provider environment.
Exposure to commodity Ethernet hardware (Broadcom/Mellanox); protocol experience with BGP/OSPF/IS-IS, TCP, IPv4, IPv6, DNS, DHCP, and MPLS.
Experience with networking protocols such as TCP/IP, VPN, DNS, DHCP, and SSL.
Experience with Internet peering and inter-domain networking.
Experience with scripting or automation and datacenter design; Python preferred, but must demonstrate knowledge of a scripting or compiled language.
Experience with high-level software design and development.
Experience with automation systems, framework design/use, and deployment.
Experience with network modeling and programming: YANG, OpenConfig, NETCONF.
Knowledge of network security design, system performance characterization, and testing.
Knowledge of data flow and telemetry design, deployment, and operation.
Excellent judgment in influencing product roadmap direction, features, and priorities.
Leading individual contributor and team member who provides direction and mentoring to others.
Ability to use professional concepts and company objectives to resolve complex issues in creative and effective ways.
Capable of working under limited supervision.
Excellent organizational, verbal, and written communication skills.
About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Principal Software Engineer • Trenton, NJ, US