cudnn

Version:

8.5.0.96, 8.2.4.15, 8.1.1.33, 7.6.5.32

Category:

numlib

Cluster:

Loki

Author / Distributor

https://developer.nvidia.com/cudnn

Description

NVIDIA cuDNN (CUDA Deep Neural Network library) is a GPU-accelerated library for deep learning primitives. It provides highly optimized implementations for standard operations used in deep neural networks such as:

  • Convolution and deconvolution layers

  • Pooling and normalization

  • Activation functions (ReLU, sigmoid, tanh, etc.)

  • Recurrent Neural Networks (RNNs), including LSTM and GRU

  • Tensor transformations and math operations

cuDNN is designed to integrate with high-level machine learning frameworks like TensorFlow, PyTorch, and MXNet.

This version (8.4.1.50) is compatible with CUDA 11.7 and supports Ampere, Turing, and Volta architectures.
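
As an illustration of the primitive-level API described above, the sketch below applies a ReLU activation to a small tensor. It is a hypothetical example (file name, shape, and values are arbitrary), not code shipped with the module, and all error checking is omitted for brevity:

    /* relu_example.cu -- hypothetical sketch, not shipped with the module.
     * Applies a ReLU activation to a small tensor using the cuDNN
     * primitive API; error checking omitted for brevity. */
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <cudnn.h>

    int main(void) {
        const int n = 1, c = 1, h = 2, w = 4;          /* tensor shape (NCHW) */
        const int count = n * c * h * w;
        float h_x[8] = {-2.f, -1.f, -0.5f, 0.f, 0.5f, 1.f, 2.f, 3.f};
        float h_y[8];

        cudnnHandle_t handle;
        cudnnCreate(&handle);

        /* One descriptor for both input and output: float, NCHW layout. */
        cudnnTensorDescriptor_t desc;
        cudnnCreateTensorDescriptor(&desc);
        cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   n, c, h, w);

        /* Activation descriptor: ReLU, no NaN propagation. */
        cudnnActivationDescriptor_t act;
        cudnnCreateActivationDescriptor(&act);
        cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                     CUDNN_NOT_PROPAGATE_NAN, 0.0);

        /* Copy input to the GPU, run the activation, copy the result back. */
        float *d_x, *d_y;
        cudaMalloc((void **)&d_x, count * sizeof(float));
        cudaMalloc((void **)&d_y, count * sizeof(float));
        cudaMemcpy(d_x, h_x, count * sizeof(float), cudaMemcpyHostToDevice);

        const float alpha = 1.0f, beta = 0.0f;
        cudnnActivationForward(handle, act, &alpha, desc, d_x, &beta, desc, d_y);

        cudaMemcpy(h_y, d_y, count * sizeof(float), cudaMemcpyDeviceToHost);
        for (int i = 0; i < count; ++i)
            printf("relu(% .1f) = % .1f\n", h_x[i], h_y[i]);

        /* Release cuDNN objects and device memory. */
        cudaFree(d_x);
        cudaFree(d_y);
        cudnnDestroyActivationDescriptor(act);
        cudnnDestroyTensorDescriptor(desc);
        cudnnDestroy(handle);
        return 0;
    }

Such a file can be compiled with nvcc in the same way as the compilation example in the Examples/Usage section below.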

Documentation

cuDNN is a library and does not include command-line binaries.

It is loaded at runtime by frameworks such as TensorFlow or PyTorch,
or linked directly into C/C++ projects.

Example include & link flags:

    -I$CUDNN_INC -I$CUDA_INC
    -L$CUDNN_LIB -lcudnn
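
As a quick sanity check that these flags resolve correctly, a minimal program (hypothetical file name, not part of the module) can compare the header version against the library version found at run time:

    /* cudnn_version.c -- hypothetical link check, not shipped with the module.
     * Build example, using the flags above:
     *   gcc cudnn_version.c -I$CUDNN_INC -I$CUDA_INC -L$CUDNN_LIB -lcudnn -o cudnn_version
     */
    #include <stdio.h>
    #include <cudnn.h>

    int main(void) {
        /* CUDNN_VERSION comes from the header at compile time;
         * cudnnGetVersion() reports the library loaded at run time. */
        printf("cudnn.h  version: %d\n", CUDNN_VERSION);
        printf("libcudnn version: %zu\n", cudnnGetVersion());
        return 0;
    }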

For API reference and developer guide:
  https://docs.nvidia.com/deeplearning/cudnn/api/index.html

Examples/Usage

  • Load CUDA 11.7.0 and cuDNN:

$ module load cuda/11.7.0
$ module load numlib/cuDNN/8.4.1.50-CUDA-11.7.0

  • Verify headers and libraries:

$ ls $CUDNN_HOME/include/cudnn*.h
$ ls $CUDNN_HOME/lib64/libcudnn*

  • Use in a compilation step (example for C++; a link check for the resulting binary follows this list):

$ nvcc -I$CUDNN_HOME/include -L$CUDNN_HOME/lib64 \
  -lcudnn -o cudnn_app cudnn_app.cu

  • Unload the module:

$ module unload numlib/cuDNN/8.4.1.50-CUDA-11.7.0
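
To check that a binary built in the compile step above picks up the module's cuDNN, inspect it with ldd before unloading the module (illustrative command only; the resolved path depends on the module environment):

$ ldd cudnn_app | grep libcudnn

libcudnn.so should resolve to a path inside the cuDNN installation provided by the module.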

Installation

cuDNN 8.4.1.50 was installed from the NVIDIA Developer archive: https://developer.nvidia.com/rdp/cudnn-archive