How to install cuDNN
NVIDIA cuDNN
Fine-Tune GPU Performance for Neural Nets
What Is NVIDIA cuDNN?
NVIDIA CUDA Deep Neural Network (cuDNN) is a GPU-accelerated library of primitives for deep neural networks, providing highly tuned implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
The cuDNN library allows deep learning framework developers and researchers everywhere to leverage GPU acceleration for high performance. It reduces the need to fine-tune GPU performance at a low level, saving time so you can concentrate on developing your software and training your neural networks. cuDNN acceleration supports popular deep learning frameworks such as Keras, Caffe2, Chainer, MXNet, MATLAB, TensorFlow, and PyTorch.
NVIDIA cuDNN Features
Key features of NVIDIA cuDNN include:
cuDNN is supported on Linux and Windows across a range of mobile and data center GPU architectures, including Ampere, Volta, Turing, Pascal, Kepler, and Maxwell. The latest version of cuDNN is 8.3, which provides improved performance with A100 GPUs (up to five times higher than out-of-the-box V100 GPUs). It also offers new APIs and optimizations for computer vision and conversational AI applications.
The version 8.3 redesign is user-friendly and offers improved flexibility and easy application integration. It includes optimizations to accelerate transformer-based deep learning models, runtime fusion for compiling kernels with new operators, and a smaller download package (reduced by 30%).
cuDNN Programming Model
NVIDIA cuDNN offers highly-tuned, optimized implementations of common routines for DNN applications. These convolution routines include:
The cuDNN routines offer competitive performance with fast, matrix multiply (GEMM)-based implementations that use less memory. Features of cuDNN include:
The flexibility of cuDNN means you can integrate it into all neural network implementations while avoiding the steps for input/output transposition often required for GEMM-based convolutions. The cuDNN library assumes that the required data for GPU-based operations is directly accessible to the device while also exposing a host API.
Applications using the cuDNN library must call cudnnCreate() to initialize a library context handle. They explicitly pass the handle to each library function operating on GPU data. When an application has finished using cuDNN, it can use the cudnnDestroy() command to release any resources associated with it. Users can control the functioning of the library for multiple GPUs, host threads, and CUDA streams.
For instance, applications can associate specific devices with specific host threads using the cudaSetDevice command. They can use unique cuDNN handles for each host thread, which direct library calls to the associated device. If you make your cuDNN library calls using different handles, they automatically run on the different devices you specified.
The system assumes that the device associated with a particular cuDNN context remains unchanged between its creation and destruction (the corresponding cudnnCreate() and cudnnDestroy() calls). If you want the cuDNN library to use a different device within the same host thread, the application must call cudnnDestroy(), set up the other device, and then call cudnnCreate() to create a new cuDNN context associated with that device.
Related content: Read our guide to CUDA programming
Installing cuDNN on Windows
Prerequisites
Before you download cuDNN, make sure you have the following installed on your Windows computer:
Download cuDNN for Windows
To download cuDNN, you must register for the NVIDIA Developer Program:
Install cuDNN
Before you issue any commands, specify your chosen versions of CUDA and cuDNN (and the package date) in the x.x and 8.x.x.x fields. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x is the CUDA directory path, and the cuDNN directory path is the folder into which you extracted the downloaded cuDNN archive.
Use the following steps:
Installing cuDNN On Linux
Prerequisites
Before you download cuDNN, make sure you have the following installed on your Linux machine:
Downloading cuDNN For Linux
Before downloading cuDNN, register for the NVIDIA Developer Program. Then do the following:
Installing On Linux
Note that the installation packages for cuDNN are available online. The executable you downloaded is a package manager that automatically downloads and installs them.
To install cuDNN on Ubuntu 18.04 and 20.04:
sudo mv cuda-$
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/$
sudo apt-get install libcudnn8=$
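The commands above are truncated. As a hedged sketch of the full sequence from NVIDIA's cuDNN install guide — with OS, cudnn_version, and cuda_version as illustrative placeholders you must set yourself, and with the exact repository key and version strings to be checked against the current guide — the steps look roughly like this:
OS=ubuntu2004                       # placeholder: your Ubuntu release
cudnn_version=8.x.x.x               # placeholder: the cuDNN version you chose
cuda_version=cuda11.x               # placeholder: the matching CUDA version
wget https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/cuda-${OS}.pin
sudo mv cuda-${OS}.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/ /"
sudo apt-get update
sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version}
sudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}
You may also need to add NVIDIA's repository signing key before running apt-get update; the install guide lists the exact command for your release.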
To install cuDNN on RHEL7 and RHEL8:
sudo yum clean all
sudo yum install
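These RHEL commands are likewise truncated. A hedged completion, assuming the NVIDIA CUDA repository is already enabled on the machine and using placeholder version strings rather than real values:
cudnn_version=8.x.x.x               # placeholder: the cuDNN version you chose
cuda_version=cuda11.x               # placeholder: the matching CUDA version
sudo yum clean all
sudo yum install libcudnn8-${cudnn_version}-1.${cuda_version}
sudo yum install libcudnn8-devel-${cudnn_version}-1.${cuda_version}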
GPU Virtualization with Run:AI
Run:AI automates resource management and workload orchestration for machine learning infrastructure. With Run:AI, you can automatically run as many compute intensive experiments as needed on NVIDIA infrastructure.
Here are some of the capabilities you gain when using Run:AI:
Run:AI simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and the quality of their models.
Learn more about the Run:AI GPU virtualization platform.
Install CUDA and CUDNN on Windows & Linux
CUDA & CUDNN FOR WINDOWS
STEP 1) Download and install CUDA Toolkit
● Go to https://developer.nvidia.com/cuda-downloads to download the latest CUDA Toolkit.
You can also download previous versions from Archive of Previous CUDA Releases OR under the Resources section at the cuda-downloads link given above.
(For this tutorial, I will download and install CUDA 11.0. You can instead download the latest CUDA toolkit and its corresponding cuDNN file; this is just for demonstration.)
Go to Archive of Previous CUDA Releases and click on CUDA Toolkit 11.0 Update1.
● On the next page, first, choose your operating system as Windows.
● Next, choose any installer type to download. I have downloaded the exe (local) type installer.
● You will then see the installation instructions using the base installer which is 2.7 GB in size. Once downloaded, click on the exe file and follow on-screen prompts.
● When the setup starts, select a location to extract the installer. Once that is done the CUDA installer will start. Over there, choose Express installation and click on Next. This will install the CUDA Toolkit on your system in the location C:\Program Files\NVIDIA GPU Computing Toolkit.
● Next, make sure to check if your Environment variables have the path to CUDA as shown in the image. (It should automatically add the second path) If it doesn’t then manually add them to the System variables.
STEP 2) Download and setup CUDNN
● Go to https://developer.nvidia.com/cudnn to download the latest version of CUDNN for the latest CUDA toolkit version OR go to https://developer.nvidia.com/rdp/cudnn-archive to download a previous version of CUDNN that is compatible with your CUDA toolkit.
NOTE : You have to be signed in using your Nvidia account to download CUDNN. If you do not have an account, create one.
Since I have CUDA 11.0.1, I will download cuDNN 8.0.5 for CUDA 11.0
● Click on cuDNN 8.0.5 and select cuDNN Library for Windows (x86).
● Next, it will ask you to log in. Create an Nvidia account or sign-in using Google or Facebook. Once logged in you can download the cuDNN archive. Download and extract it.
● Copy the contents of the cuda folder inside the cuDNN folder to the path where we installed CUDA in step 1 above. (We need the contents of the bin, include & lib folders from cuDNN to be inside the bin, include and lib folders of the CUDA directory)
● Finally, just like we did for CUDA, we have to set Environment variables for cuDNN as well. See pic below.
● Set a System variable named CUDNN that points to the bin, include, and lib folders we copied into the CUDA directory. Also, add these same paths to the Path System variable.
● The paths to add are mentioned below:
NOTE: Make sure to add these paths to both the CUDNN and Path System variables.
● Finally, reboot the system. You can verify your CUDA installation through command prompt by running the following 2 commands:
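The two commands are not reproduced above; on most setups the usual checks are the ones below, assuming the CUDA bin directory is on your PATH:
nvcc --version
nvidia-smi
nvcc --version prints the installed CUDA toolkit version, while nvidia-smi reports the driver version and the GPUs it can see.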
That’s it. We have successfully set up CUDA and cuDNN on our Windows System.
CUDA & CUDNN FOR LINUX
STEP 1) Download and install CUDA Toolkit
Go to https://developer.nvidia.com/cuda-downloads to download the latest CUDA Toolkit.
You can also download previous versions from Archive of Previous CUDA Releases OR under the Resources section at the cuda-downloads link given above.
(For this tutorial, I will download and install CUDA 11.0. You can instead download the latest CUDA toolkit and its corresponding cuDNN file; this is just for demonstration.)
Go to Archive of Previous CUDA Releases and click on CUDA Toolkit 11.0 Update1.
LET’S BEGIN
NOTE: You can either do the following step to manually install any specific NVidia driver version you want or you can skip this step and simply install the NVidia driver that is bundled with the CUDA software.
(You can read about the minimum required driver versions at the links given below.)
You’ll see a summary at the end of CUDA installation as shown below.
The bashrc file looks like below:
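The file contents are not reproduced here; for CUDA 11.0, the lines typically appended to the end of ~/.bashrc look like this (adjust the path if you installed a different CUDA version):
export PATH=/usr/local/cuda-11.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}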
Press Ctrl + x, y and Enter to save changes.
STEP 2) Download and setup CUDNN
Go to https://developer.nvidia.com/cudnn to download the latest version of CUDNN for the latest CUDA toolkit version OR go to https://developer.nvidia.com/rdp/cudnn-archive to download a previous version of CUDNN that is compatible with your CUDA toolkit.
NOTE: You have to be signed in using your Nvidia account to download CUDNN. If you do not have an account, create one.
Since I have CUDA 11.0.1, I will download cuDNN 8.0.5 for CUDA 11.0
COPY CONTENTS FROM THE CUDA FOLDER UNZIPPED FROM CUDNN TO THE MAIN CUDA DIRECTORY
That’s it. We have successfully set up CUDA and cuDNN on our Linux Ubuntu 18.04 system.
kmhofmann / installing_nvidia_driver_cuda_cudnn_linux.md
Installing the NVIDIA driver, CUDA and cuDNN on Linux (Ubuntu 20.04)
This is a companion piece to my instructions on building TensorFlow from source. The aim is to install the following pieces of software on an Ubuntu Linux system, in particular Ubuntu 20.04.
At the time of writing (2020-08-06), these were the latest available versions. As a disclaimer, please note that I am not interested in running an outdated Ubuntu version or installing a CUDA/cuDNN version that is not the latest. Therefore, the below instructions may or may not be useful to you. Please also note that the instructions are likely outdated, since I only update them occasionally. Don’t just copy these instructions, but check what the respective latest versions are and use these instead!
Installing the NVIDIA driver
Download and install the latest NVIDIA graphics driver from here: https://www.nvidia.com/en-us/drivers/unix/. Note that every CUDA version requires a minimum version of the driver; check this beforehand.
Ubuntu 20.04 currently offers installation of the NVIDIA driver version 440.100 through its built-in ‘Additional Drivers’ mechanism, which should be sufficient for CUDA 10.2. CUDA 11.0 appears to require a newer version of the NVIDIA driver, so we’re going to install this manually.
The CUDA runfile also includes a version of the NVIDIA graphics driver, but I like to install the two separately, as the version supplied with CUDA is not necessarily the latest driver.
Download the latest CUDA version here. For example, I downloaded:
Thankfully, CUDA 11 currently supports the up-to-date Ubuntu version, 20.04, so we don’t need to jump through hoops to deal with an unsupported GNU version error as in previous versions of this document. Simply install as per the official instructions:
You may need to confirm that the display driver is already installed, and de-select installation of the display driver.
Once finished, you should see a summary like this:
Just go here and follow the instructions. You’ll have to log in, so downloading of the right cuDNN binary packages cannot be easily automated. Meh.
Once downloaded, un-tar the file and copy the contents to their respective locations:
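The exact commands are not reproduced here; a minimal sketch, assuming a cuDNN tarball for CUDA 11.0 (archive name illustrative) and CUDA installed under /usr/local/cuda:
tar xzvf cudnn-11.0-linux-x64-v8.x.x.x.tgz        # illustrative archive name
sudo cp -P cuda/include/cudnn*.h /usr/local/cuda/include/
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
sudo ldconfig
The -P flag keeps the libcudnn symbolic links intact, which avoids the "is not a symbolic link" ldconfig warnings discussed in the comments below.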
bgyarbro commented Aug 10, 2020
Thank you for this tutorial! This is awesome info. I was able to get it setup easily.
PSS67 commented Sep 12, 2020
Thanks. Do you know if this will work on WSL2 (with Ubuntu 20.04)?
prikmm commented Nov 18, 2020 •
Update:
Hey, I downloaded using the package manager. Everything went great and I was able to use TensorFlow on the GPU. But while running ldconfig, I see the following error:
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8 is not a symbolic link
Among all the files in my output, I saw:
-rwxr-xr-x 1 0 0 98957080 Nov 18 13:54 libcudnn_adv_infer.so
-rwxr-xr-x 1 0 0 98957080 Nov 18 13:54 libcudnn_adv_infer.so.8
-rwxr-xr-x 1 0 0 98957080 Nov 18 13:54 libcudnn_adv_infer.so.8.0.5
-rwxr-xr-x 1 0 0 65344120 Nov 18 13:54 libcudnn_adv_train.so
-rwxr-xr-x 1 0 0 65344120 Nov 18 13:54 libcudnn_adv_train.so.8
-rwxr-xr-x 1 0 0 65344120 Nov 18 13:54 libcudnn_adv_train.so.8.0.5
-rwxr-xr-x 1 0 0 1288305728 Nov 18 13:55 libcudnn_cnn_infer.so
-rwxr-xr-x 1 0 0 1288305728 Nov 18 13:55 libcudnn_cnn_infer.so.8
-rwxr-xr-x 1 0 0 1288305728 Nov 18 13:55 libcudnn_cnn_infer.so.8.0.5
-rwxr-xr-x 1 0 0 58705816 Nov 18 13:55 libcudnn_cnn_train.so
-rwxr-xr-x 1 0 0 58705816 Nov 18 13:55 libcudnn_cnn_train.so.8
-rwxr-xr-x 1 0 0 58705816 Nov 18 13:55 libcudnn_cnn_train.so.8.0.5
-rwxr-xr-x 1 0 0 251390696 Nov 18 13:55 libcudnn_ops_infer.so
-rwxr-xr-x 1 0 0 251390696 Nov 18 13:55 libcudnn_ops_infer.so.8
-rwxr-xr-x 1 0 0 251390696 Nov 18 13:55 libcudnn_ops_infer.so.8.0.5
-rwxr-xr-x 1 0 0 26002104 Nov 18 13:55 libcudnn_ops_train.so
-rwxr-xr-x 1 0 0 26002104 Nov 18 13:55 libcudnn_ops_train.so.8
-rwxr-xr-x 1 0 0 26002104 Nov 18 13:55 libcudnn_ops_train.so.8.0.5
-rwxr-xr-x 1 0 0 158264 Nov 18 13:55 libcudnn.so
-rwxr-xr-x 1 0 0 158264 Nov 18 13:55 libcudnn.so.8
-rwxr-xr-x 1 0 0 158264 Nov 18 13:55 libcudnn.so.8.0.5
Now I don't know whether to generate symlinks or to remove the libcudnn* files from /usr/local/cuda-11.0/targets/x86_64-linux/lib.
Kindly help me.
Thank you in advance 🙂
PS: If I have to create symlinks, it would be helpful to get an example using one of the ones that has to be created. I just started using Linux and am not too familiar with it. 🙂
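For reference, one possible way to recreate the missing links by hand (illustrative only, shown for libcudnn_adv_infer from the listing above; repeat the pattern for each library, or reinstall the libcudnn8 packages, which lay the links out correctly):
cd /usr/local/cuda-11.0/targets/x86_64-linux/lib
sudo ln -sf libcudnn_adv_infer.so.8.0.5 libcudnn_adv_infer.so.8
sudo ln -sf libcudnn_adv_infer.so.8 libcudnn_adv_infer.so
sudo ldconfig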
aloizo03 commented Nov 23, 2020
emenshoff commented Dec 4, 2020
It works fine, but I personally was not able to build a working version of TensorFlow on Ubuntu 18.04 with CUDA 11.
johndpope commented Dec 18, 2020 •
CHECK LATEST CUDNN versions on https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
sudo apt-get install libcudnn8=8.0.5.39-1+cuda11.1
sudo apt-get install libcudnn8-dev=8.0.5.39-1+cuda11.1
saidmithilesh commented Jan 22, 2021
aloizo03 commented Jan 22, 2021
Thanks for the help. I solved this problem 2 months ago; the issue was with the NVIDIA driver and the CUDA version.
kyleawayan commented Feb 19, 2021
hrithikppawar commented Mar 8, 2021
Hello!
I am going to start a project on object detection, so I want to use the TensorFlow framework, but does TensorFlow support CUDA 11.0, or do I need to install another version of CUDA?
Can anyone brief me on how I should set up my development environment? I am using Ubuntu 20.10 with an NVIDIA GPU.
SakibFarhad commented Mar 9, 2021
Hello!
I am going to start a project on object detection, so I want to use the TensorFlow framework, but does TensorFlow support CUDA 11.0, or do I need to install another version of CUDA?
Can anyone brief me on how I should set up my development environment? I am using Ubuntu 20.10 with an NVIDIA GPU.
You can use CUDA 11.0; it is supported now, as per https://www.tensorflow.org/install/source#gpu
hrithikppawar commented Mar 9, 2021
Hello!
I am going to start a project on object detection, so I want to use the TensorFlow framework, but does TensorFlow support CUDA 11.0, or do I need to install another version of CUDA?
Can anyone brief me on how I should set up my development environment? I am using Ubuntu 20.10 with an NVIDIA GPU.
Thank you for your response!
I successfully installed CUDA 11.0 and it is working great with TensorFlow.
I think the best configuration is:
This worked for me
How to Install NVIDIA cuDNN on Ubuntu 16.04/18.04 Linux
This post will guide you through installing cuDNN on your Ubuntu Linux server: how to install the latest version of cuDNN and check that NVIDIA cuDNN is operating correctly on Ubuntu 16.04 or 18.04 Linux.
What is cuDNN?
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the NVIDIA Deep Learning SDK. It can be used for high-performance GPU acceleration. cuDNN accelerates widely used deep learning frameworks, including Caffe, Caffe2, TensorFlow, Theano, Torch, PyTorch, MXNet, and Microsoft Cognitive Toolkit. cuDNN is freely available to members of the NVIDIA Developer Program.
Prerequisites
Before installing the cuDNN tool, you need to make sure that your system meets the following requirements:
NVIDIA graphics driver R410 or newer for CUDA 10.0
NVIDIA graphics driver R396 or newer for CUDA 9.2
NVIDIA graphics driver R384 or newer for CUDA 9
NVIDIA graphics driver R375 or newer for CUDA 8
So you also need to make sure that the latest NVIDIA graphics driver and CUDA Toolkit are installed on your system.
Installing cuDNN
Step1: Installing the Latest NVIDIA Graphics Drivers
You need to install the latest NVIDIA graphics driver for your NVIDIA hardware before installing cuDNN. Follow these steps:
#1 Go to the official NVIDIA download web page to download the latest NVIDIA graphics driver.
#2 Select the Product Type and Operating System from the drop-down menu lists, then click the SEARCH button.
#3 Click the Download button to download the driver to your local disk, or use the wget command to fetch the driver file. Type:
#4 Install the downloaded NVIDIA graphics driver with the following command:
#5 After installing the driver, restart your system to ensure the NVIDIA graphics driver takes effect.
Step2: Installing CUDA Toolkit
Before installing cuDNN, you also need to install the CUDA Toolkit on your system; we explained how to install the CUDA Toolkit on an Ubuntu system in a previous post.
Step3: Downloading cuDNN
To download cuDNN to your local disk, follow these steps:
#1 First, register for the NVIDIA Developer Program.
#2 Go to the NVIDIA cuDNN home page, click the Download button, then complete the short survey and click Submit.
#3 You should see a list of available cuDNN download versions; choose one based on the CUDA version installed on your system. For example, if you have installed CUDA 10.1 on your Ubuntu system, you should choose the first one.
#4 You can download a tar file or deb files with the following commands:
Step 4: Installing cuDNN from a Tar File
If you are using a Linux system such as CentOS or Ubuntu Linux, you can install the cuDNN tool from a tar file. Follow these steps (a combined sketch of the commands appears after this list):
#1 Extract all files from the cuDNN tar package.
#2 Copy the extracted files into the CUDA Toolkit directory.
#3 Change the file permissions for those files.
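A combined sketch of steps #1 to #3, assuming a cuDNN 7.x tarball for CUDA 10.x (archive name illustrative) and CUDA installed in /usr/local/cuda:
tar -xzvf cudnn-10.1-linux-x64-v7.x.x.x.tgz       # illustrative archive name
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*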
Step 5: Installing cuDNN from a Deb File
If you are using Debian or Ubuntu Linux, you can use the dpkg command to install the deb files above. Type:
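The package file names below are a sketch only: they assume cuDNN 7.x deb files (for cuDNN 8, the packages are named libcudnn8, libcudnn8-dev, and libcudnn8-samples), with x.x.x.x and cudaX.X standing in for the cuDNN and CUDA versions you actually downloaded:
sudo dpkg -i libcudnn7_x.x.x.x-1+cudaX.X_amd64.deb
sudo dpkg -i libcudnn7-dev_x.x.x.x-1+cudaX.X_amd64.deb
sudo dpkg -i libcudnn7-doc_x.x.x.x-1+cudaX.X_amd64.deb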
Note: the first command installs the runtime library, the second installs the developer library, and the third installs the code samples and the cuDNN Library User Guide.
Conclusion
From this guide you should now know how to install the cuDNN tool on Ubuntu 16.04 or 18.04, using either of the two methods (tar file or deb file) on an Ubuntu Linux server. If you want more information about cuDNN, go directly to the official cuDNN website.
How to install Cuda 10.2, cuDNN 7.6.5 and samples on ubuntu 18.04
Section 1 — Clean remaining files
Deleting any NVIDIA/CUDA packages you may already have installed
Deleting any remaining Cuda files on /usr/local/
Purge any remaining NVIDIA configuration files
Purge and remove
Updating and deleting unnecessary dependencies
Section 2 — Install Cuda
Section 3 — Add Cuda to path
Open ‘.profile’ file
and add these lines
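The lines are not reproduced above; for CUDA 10.2 (this tutorial's version) they typically look like this — adjust the path for your installation:
export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}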
Section 4 — Check Nvidia driver and Cuda
Now reboot your computer and check Nvidia driver and Cuda.
For checking Nvidia driver
For checking Cuda version
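The commands themselves are not shown above; the usual checks are nvidia-smi for the driver and nvcc --version for CUDA:
nvidia-smi        # reports the driver version and the GPUs it can see
nvcc --version    # reports the CUDA compiler/toolkit version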
If everything works, we can create a CUDA source file and run it.
Create a new file and add a few lines of CUDA code to it.
Next, use nvcc, the NVIDIA CUDA compiler, to compile the code and run the newly compiled binary:
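The original file contents and commands are not reproduced here; a minimal self-contained sketch (the file name and kernel are illustrative, not the tutorial's original code):
cat > hello.cu <<'EOF'
#include <cstdio>
__global__ void hello_kernel() {
    printf("Hello from the GPU\n");   // runs on the device
}
int main() {
    hello_kernel<<<1, 1>>>();         // launch a single GPU thread
    cudaDeviceSynchronize();          // wait for the kernel and flush device printf
    return 0;
}
EOF
nvcc hello.cu -o hello
./hello
If the toolkit is set up correctly, the compiled binary prints the greeting from the GPU.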
Section 6 — Download cuDNN files
Note: You need to log in. If these links do not work, you can download from here.
Section 7 — Install cuDNN
Note: If you want to download another version of cuDNN, you can go to https://developer.nvidia.com/rdp/cudnn-download
Section 8 — Verifying the cuDNN
Go to the MNIST example code
If cuDNN is properly installed and running on your Linux system, you will see a message similar to the following:
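A sketch of the usual verification, assuming the cuDNN code samples were installed (for example via the libcudnn7-doc deb package; paths may differ for other install methods):
cp -r /usr/src/cudnn_samples_v7/ $HOME
cd $HOME/cudnn_samples_v7/mnistCUDNN
make clean && make
./mnistCUDNN
A successful run typically finishes with a line reading Test passed!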
Sources:
- http://techzizou.com/install-cuda-and-cudnn-on-windows-and-linux/
- http://gist.github.com/kmhofmann/cee7c0053da8cc09d62d74a6a4c1c5e4
- http://www.osetc.com/en/how-to-install-nvidia-cudnn-on-ubuntu-16-04-18-04-linux.html
- http://medium.com/@anarmammadli/how-to-install-cuda-10-2-cudnn-7-6-5-and-samples-on-ubuntu-18-04-2493124478ca