Technical Product Manager - Soperator
About the Role
<div class="content-intro"><p><strong data-stringify-type="bold">Why work at Nebius<br></strong>Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.</p> <p><strong>Where we work<br></strong>Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.</p></div><p><strong><span data-ccp-props="{}">The role</span></strong></p> <p>At Nebius, we’re building a next-generation AI compute platform for large-scale ML training and inference — from a few nodes to thousands of GPUs.<br>We’re looking for a <strong>Technical Product Manager</strong> to own product direction for <strong>Soperator</strong> — our Slurm-on-Kubernetes control plane for GPU clusters.<br>In this role, you will shape how ML engineers and research teams run, scale, and optimize distributed workloads in production.<br>If you care about systems that combine <strong>performance, reliability, and developer experience</strong> at the frontier of AI infrastructure, this role is for you.</p> <p><strong><span data-contrast="auto"><span data-ccp-charstyle="Strong">Your responsibilities will include:</span></span></strong><span data-ccp-props="{"134233117":true,"134233118":true}"> </span></p> <p><span data-contrast="auto"><span data-ccp-parastyle="Normal (Web)">• Own the full user journey across Soperator clusters: Slurm workflows, dashboards, alerts/notifications, node lifecycle, and 
training/inference capacity management.<br>• Define product direction end-to-end: <strong>problem discovery → solution design → delivery → adoption</strong>.<br>• Lead deep customer discovery through interviews, usage analytics, and workload analysis to uncover high-impact opportunities.<br>• Drive execution across platform teams: <strong>compute, networking, storage, observability, IAM, and more.</strong><br>• Translate frontier ML and infrastructure ideas into practical product capabilities for real-world GPU clusters.<br>• Define success metrics, prioritize roadmap decisions with data, and ensure measurable customer/business impact.<br>• Lead the <strong>open-source strategy and execution</strong> for Soperator: shape public roadmap themes, prioritize OSS-facing capabilities, and ensure strong adoption in the community. </span></span><span data-ccp-props="{"134233117":true,"134233118":true}"> </span></p> <p><strong><span data-contrast="auto"><span data-ccp-charstyle="Strong">We expect you to have:</span></span></strong><span data-ccp-props="{"134233117":true,"134233118":true}"> </span></p> <p>• 3–5+ years in Product Management, ML infrastructure/MLOps, distributed systems, or cloud platform engineering.<br>• Strong technical depth in distributed systems, cloud infrastructure, or ML platforms.<br>• Hands-on familiarity with large-scale ML training and orchestration tools (e.g., <strong>Slurm, Kubernetes, Ray</strong>).<br>• Track record of shipping technically complex products with multiple engineering teams.<br>• Strong communication and stakeholder management across engineering, research, and customers.<br>• Experience with product analytics, data-informed prioritization, and experimentation.<br>• High ownership, high learning velocity, and comfort operating in fast-moving AI infrastructure environments.</p> <p><strong><span data-contrast="auto"><span data-ccp-charstyle="Strong">It will be an added bonus if you have:</span></span></strong><span 
data-ccp-props="{"134233117":true,"134233118":true}"> </span></p> <p><span data-ccp-props="{}">• Experience with GPU platforms and HPC primitives: <strong>InfiniBand/RDMA, topology-aware scheduling</strong>, high-throughput storage.<br>• Practical understanding of modern ML training stacks: <strong>PyTorch, DeepSpeed, FSDP/ZeRO, NCCL</strong>.<br>• Familiarity with efficiency and reliability metrics: <strong>Goodput, MFU, failure modes, preemption handling, health checks</strong>.<br>• Exposure to large-scale LLM training/inference systems.<br>• Experience in observability, performance tuning, or SRE/reliability engineering.<br>• Customer-facing technical experience (solutioning, support, architecture advisory).</span></p> <p><strong>About Nebius</strong></p> <p><span data-contrast="auto">Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.</span><span data-ccp-props="{}"> </span></p> <p><span data-contrast="auto">Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (Top 500 list, November 2023). 
</span><span data-ccp-props="{}"> </span></p> <p><span data-contrast="auto">Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.</span><span data-ccp-props="{}"> </span></p> <p><span data-contrast="auto">Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day. </span></p><div class="content-conclusion"><p><strong>What we offer</strong> </p> <ul> <li>Competitive salary and comprehensive benefits package.</li> <li>Opportunities for professional growth within Nebius.</li> <li>Flexible working arrangements.</li> <li>A dynamic and collaborative work environment that values initiative and innovation.</li> </ul> <p><span data-contrast="auto">If you’re up to the challenge and are excited about AI and ML as much as we are, join us!</span></p></div>