To run newer GPU builds of TensorFlow, the host's NVIDIA graphics driver has to be updated (this post uses the NVIDIA 390 driver as the example).
1. Update the driver
Disable the nouveau driver: create the file /etc/modprobe.d/blacklist.conf with the following contents:
blacklist nouveau
options nouveau modeset=0
Then rebuild the initramfs: sudo update-initramfs -u
Run lsmod | grep nouveau; if it produces no output, nouveau has been disabled successfully.
Do not reboot directly at this point, otherwise you will not be able to get into the system.
If a reboot does leave the system unbootable, see: https://blog.****.net/wei_supreme/article/details/82227765
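For reference, this nouveau-blacklisting step can be scripted roughly as follows (a minimal sketch; the file path and options are exactly the ones listed above):
# Append the blacklist entries to the modprobe config
echo "blacklist nouveau" | sudo tee -a /etc/modprobe.d/blacklist.conf
echo "options nouveau modeset=0" | sudo tee -a /etc/modprobe.d/blacklist.conf
# Rebuild the initramfs so the blacklist takes effect on the next boot
sudo update-initramfs -u
# Verify: no output means the nouveau module is not loaded
lsmod | grep nouveau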
Add the Graphics Drivers PPA:
sudo -E add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
List the drivers recommended for your hardware: sudo ubuntu-drivers devices
Remove any existing NVIDIA driver: sudo apt-get remove --purge nvidia*
Stop the graphical display manager (LightDM): sudo service lightdm stop
Install the driver: sudo apt-get install nvidia-384
Run sudo apt-get upgrade, then reboot: sudo reboot
After the reboot, run nvidia-smi to check the driver status; if it reports the card, the installation succeeded.
If you see the error "nvidia-smi has failed because it couldn't communicate with the nvidia driver", disable Secure Boot on the system.
Restart the graphical environment: sudo service lightdm start
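A couple of quick checks after the reboot (a sketch; mokutil may need to be installed first, and the exact output varies by driver version):
# Confirm the driver version that nvidia-smi is talking to
nvidia-smi --query-gpu=driver_version --format=csv
# If nvidia-smi cannot communicate with the driver, check whether Secure Boot is still enabled
mokutil --sb-state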
2. Error:
Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:385: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --utility --pid=11077 /var/lib/docker/overlay2/510a6de5ed82decf7421a392e5274b4fe47e8d0cd3610175c3550f1d26c91376/merged]\\\\nnvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown
The message points to a driver problem. My first thought was that updating the old nvidia-384 driver to nvidia-410 had broken something. A reboot did not help, so I tried reinstalling nvidia-410 through apt:
$ add-apt-repository ppa:graphics-drivers/ppa
$ apt update
$ apt install nvidia-410
After another reboot the same problem remained. Searching further, I found https://zhuanlan.zhihu.com/p/37519492, which describes a very similar issue. Running the command
nvidia-container-cli -k -d /dev/tty info
gave the concrete error:
E0117 08:51:20.843706 12905 driver.c:197] could not start driver service: load library failed: libnvidia-fatbinaryloader.so.384.145: cannot open shared object file: no such file or directory
I had already removed the 384 driver, so why was it still looking for that library? Was the new 410 installation incomplete? Reading on, the article explains:
Installing the driver normally pulls in the libcuda1-384 package automatically; this is probably some historical leftover, or the purge-and-reinstall cycle broke the package dependencies, so the package needs to be reinstalled.
That immediately made me wonder whether my 410 installation was likewise missing libcuda1-410. A quick apt search libcuda showed that the dependency does exist, so I installed it with apt install libcuda1-410, ran nvidia-container-cli -k -d /dev/tty info again, and everything was back to normal.
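In short, the diagnosis and fix boil down to the following sequence (assuming the nvidia-410 driver from the graphics-drivers PPA, as above):
# Ask nvidia-container-cli for a detailed error report
nvidia-container-cli -k -d /dev/tty info
# Check which libcuda1 packages the archive provides
apt search libcuda
# Install the libcuda1 package matching the installed driver
apt install libcuda1-410
# Re-run the check; it should now succeed
nvidia-container-cli -k -d /dev/tty info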
3. Error: ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Fix:
Go into /usr/lib/nvidia-390 and create a symbolic link:
sudo ln -f -s /usr/lib/x86_64-linux-gnu/libcuda.so.1 libcuda.so.1
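To verify the fix, check that the link resolves and that TensorFlow can now load libcuda (a sketch; the python check assumes a TensorFlow 1.x installation):
# The link should point at the driver's libcuda
ls -l /usr/lib/nvidia-390/libcuda.so.1
# TensorFlow should import and report the GPU (TF 1.x API)
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"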
4. Install nvidia-docker2
Official installation guide: https://github.com/NVIDIA/nvidia-docker
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
Test that it works:
docker run -it --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
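With nvidia-docker2 working, a GPU-enabled TensorFlow container can be run the same way (a sketch; the tensorflow/tensorflow:1.12.0-gpu tag is just an example of a CUDA 9 based image, pick whichever GPU tag matches your setup):
# Run a GPU TensorFlow image and check that it sees the GPU (TF 1.x API)
docker run --runtime=nvidia --rm tensorflow/tensorflow:1.12.0-gpu \
    python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"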