Machine Learning Applications and Compiler Engineer, LPX - New College Grad 2026
NVIDIA
Location
Santa Clara, CA
Type
Full-time
Posted
5/10/2026
Compensation
$124,000 - $241,500 per year
Level
Master's Entry-Level / PhD Entry-Level
Job description
NVIDIA is seeking engineers to develop algorithms and optimizations for its LPX inference and compiler stack. The role sits at the intersection of large-scale systems, compilers, and deep learning, with the goal of optimizing neural network workloads for future NVIDIA platforms. The team builds high-performance runtime and compiler components that improve the efficiency of inference. This is an exciting opportunity to contribute to innovative technologies in visual and AI computing.
Requirements
- Pursuing or recently completed a MS or PhD in Computer Science, Electrical/Computer Engineering, or related field, or equivalent experience.
- Software engineering background with familiarity in systems-level programming (e.g., C/C++ and/or Rust) and solid CS fundamentals in data structures, algorithms, and concurrency.
- Hands-on experience with compiler or runtime development, including IR design, optimization passes, or code generation.
- Experience with LLVM and/or MLIR, including building custom passes, dialects, or integrations.
- Familiarity with deep learning frameworks such as TensorFlow and PyTorch, and experience working with portable graph formats such as ONNX.
- Understanding of parallel and heterogeneous compute architectures, such as GPUs, spatial accelerators, or other domain-specific processors.
- Strong analytical and debugging skills, with experience using profiling, tracing, and benchmarking tools to drive performance improvements.
- Excellent communication and collaboration skills, with the ability to work across hardware, systems, and software teams.
Responsibilities
- Build, develop, and maintain high-performance runtime and compiler components, focusing on end-to-end inference optimization.
- Define and implement mappings of large-scale inference workloads onto NVIDIA’s systems.
- Extend and integrate with NVIDIA’s software ecosystem, contributing to libraries, tooling, and interfaces that enable seamless deployment of models across platforms.
- Benchmark, profile, and monitor key performance and efficiency metrics to ensure the compiler generates efficient mappings of neural network graphs to our inference hardware.
- Collaborate closely with hardware architects and design teams to feed back software observations, influence future architectures, and co-design features that unlock new performance and efficiency points.
- Prototype and evaluate new compilation and runtime techniques, including graph transformations, scheduling strategies, and memory/layout optimizations tailored to spatial processors.
- Publish and present technical work on novel compilation approaches for inference and related spatial accelerators at top-tier ML, compiler, and computer architecture venues.
Benefits
- Employees at NVIDIA are often offered comprehensive, day-one benefits—including medical, dental, and vision coverage with HSA support, life and disability insurance, an Employee Assistance Program, and a 401(k) with auto-enrollment. Many roles also have generous time off and holidays, donation matching (up to $10,000), and a wide menu of extras like FSAs, commuter benefits, legal and identity-theft protection, pet insurance, and wellness discounts. Optional programs can include student-loan and home-purchase support, plus family care resources and expert medical services.