NVIDIA Corporation (@NVIDIA)

Pinned

  1. cuopt

     GPU accelerated decision optimization

     Cuda · 813 stars · 159 forks

  2. cuopt-examples

     NVIDIA cuOpt examples for decision optimization

     Jupyter Notebook · 434 stars · 75 forks

  3. open-gpu-kernel-modules

     NVIDIA Linux open GPU kernel module source

     C · 16.9k stars · 1.7k forks

  4. aistore

     AIStore: scalable storage for AI applications

     Go · 1.8k stars · 245 forks

  5. nvidia-container-toolkit

     Build and run containers leveraging NVIDIA GPUs

     Go · 4.3k stars · 510 forks

  6. GenerativeAIExamples

     Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture

     Jupyter Notebook · 3.9k stars · 1k forks

Repositories

Showing 10 of 710 repositories
  • Megatron-LM

    Ongoing research training transformer models at scale

    Python · 16,013 stars · 3,817 forks · 345 open issues (1 needs help) · 370 open PRs · Updated Apr 13, 2026
  • nova (forked from torvalds/linux)

    Linux kernel source tree

    C · 12 stars · 64,592 forks · 0 open issues · 5 open PRs · Updated Apr 13, 2026
  • cuda-python

    CUDA Python: Performance meets Productivity

    Cython · 3,216 stars · 270 forks · 191 open issues · 19 open PRs · Updated Apr 13, 2026
  • NVSentinel

    NVSentinel is a cross-platform fault-remediation service that rapidly resolves runtime node-level issues in GPU-accelerated computing environments.

    Go · 252 stars · Apache-2.0 license · 67 forks · 30 open issues · 36 open PRs · Updated Apr 13, 2026
  • ncx-infra-controller-core

    NCX Infra Controller: hardware lifecycle management and multi-tenant networking.

    Rust · 116 stars · Apache-2.0 license · 77 forks · 143 open issues (5 need help) · 63 open PRs · Updated Apr 13, 2026
  • TensorRT-LLM

    TensorRT LLM provides an easy-to-use Python API for defining Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for building Python and C++ runtimes that orchestrate inference execution performantly.

    Python · 13,350 stars · 2,276 forks · 574 open issues · 700 open PRs · Updated Apr 13, 2026
  • mig-parted

    MIG Partition Editor for NVIDIA GPUs

    Go · 245 stars · Apache-2.0 license · 59 forks · 10 open issues · 8 open PRs · Updated Apr 13, 2026
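    The mig-parted tool above is driven by a declarative YAML config of named MIG profiles. The sketch below follows the schema shown in the project's README, but the field values and the `all-1g.5gb` profile name should be treated as illustrative rather than authoritative:

    ```yaml
    # Illustrative mig-parted config: two named profiles (names are examples).
    version: v1
    mig-configs:
      # Disable MIG on every GPU on the node.
      all-disabled:
        - devices: all
          mig-enabled: false
      # Enable MIG on every GPU and carve each into seven 1g.5gb instances.
      all-1g.5gb:
        - devices: all
          mig-enabled: true
          mig-devices:
            "1g.5gb": 7
    ```

    A config like this would then be applied by selecting a profile by name, e.g. `nvidia-mig-parted apply -f config.yaml -c all-1g.5gb` (command shape assumed from the project's documentation).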
  • Model-Optimizer

    A unified library of state-of-the-art model-optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM, TensorRT, and vLLM to optimize inference speed.

    Python · 2,449 stars · Apache-2.0 license · 350 forks · 61 open issues · 130 open PRs · Updated Apr 13, 2026
  • terraform-provider-ngc

    The NGC Provider enables Terraform to manage NGC (NVIDIA GPU Cloud) resources.

    Go · 9 stars · Apache-2.0 license · 5 forks · 0 open issues · 6 open PRs · Updated Apr 13, 2026
  • NemoClaw

    Run OpenClaw more securely inside NVIDIA OpenShell with managed inference

    TypeScript · 19,078 stars · Apache-2.0 license · 2,334 forks · 272 open issues · 233 open PRs · Updated Apr 13, 2026
