How to Set Up an Environment Compatible with Both PyTorch and TensorFlow v1
Many excellent codebases on GitHub are written in TensorFlow v1, while more recent papers tend to use PyTorch. As a result, reproducing experiments or running comparisons forces us to spend a lot of time setting up different environments. This guide is the configuration I arrived at after repeated trial and error, personally tested and working.
First, the basic Python environment:
conda create -n py37 python=3.7
Do not pick a Python version that is too new or too old; 3.6 to 3.7 works best and covers the vast majority of codebases. (Python 3.7 is also the highest version TensorFlow v1 supports.)
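As a quick sanity check (my own habit, not a required step), you can confirm from inside the activated environment that the interpreter really is 3.6/3.7; py37 is the environment name from the command above:

# run inside the env: conda activate py37
import sys
print(sys.version)
assert sys.version_info[:2] in ((3, 6), (3, 7)), "TensorFlow v1 needs Python 3.6 or 3.7"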
Next, the PyTorch environment (first, because it is the easiest part, haha):
# ROCM 5.1.1 (Linux only)
pip install torch==1.12.1+rocm5.1.1 torchvision==0.13.1+rocm5.1.1 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/rocm5.1.1
# CUDA 11.6
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
# CUDA 11.3
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
# CUDA 10.2
pip install torch==1.12.1+cu102 torchvision==0.13.1+cu102 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu102
# CPU only
pip install torch==1.12.1+cpu torchvision==0.13.1+cpu torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cpu
Installing with pip is recommended; installing via conda sometimes leaves torch unable to detect the GPU.
Official installation guide: Previous PyTorch Versions | PyTorch (https://pytorch.org/get-started/previous-versions/)
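To confirm that the GPU build of the wheel actually got installed (a rough check of my own, handy when the conda-install issue mentioned above bites), you can print the wheel's CUDA tag and the visible device:

import torch
print(torch.__version__)    # should end in +cu102 / +cu113 / +cu116, not +cpu
print(torch.version.cuda)   # CUDA version the wheel was built against
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU visible")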
Next come the GPU-related packages: cudatoolkit and cudnn. The former is what the PyTorch environment needs; the latter is what TensorFlow needs in order to use the GPU. Installing PyTorch above already brought a cudatoolkit along with it. My machine runs CUDA 10.2, so the environment already had cudatoolkit=10.2, but TensorFlow v1 only supports up to CUDA 10, so the cudatoolkit has to be downgraded to 10.0. (The CUDA stack is backward compatible: my system CUDA is 10.2, yet cudatoolkit=10.0 works just fine. Since 10.0 is the highest version TensorFlow v1 supports, there is no choice but to compromise.)
conda install cudatoolkit=10.0
Then install cuDNN:
conda install cudnn=7.6.5=cuda10.0_0
You can also search for the cuDNN build you need with:
conda search cudnn
If conda downloads are too slow, switch to a domestic mirror (see this guide):
http://www.dbjr.com.cn/article/199913.htm
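Once cudatoolkit and cudnn are installed, a rough way (on Linux, and purely my own sketch) to verify that the libraries TensorFlow 1.15 will look for can actually be loaded is to dlopen them by the same names that appear in the startup log further below:

import ctypes
for lib in ("libcudart.so.10.0", "libcudnn.so.7"):
    try:
        ctypes.CDLL(lib)              # same names TF prints in its startup log
        print(lib, "loaded OK")
    except OSError as err:
        print(lib, "not found:", err)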
Finally, install TensorFlow v1:
pip install tensorflow-gpu==1.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
The recommended TensorFlow v1 versions are 1.15.0 and 1.14.0; other versions have not been tested.
Finally, test whether PyTorch and TensorFlow can each use the GPU, as follows:
import torch
print(torch.cuda.is_available())
Everyone presumably knows the PyTorch GPU check, so I will not belabor it. The TensorFlow v1 check is as follows:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
If the output is:
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
then downgrade protobuf:
pip install protobuf==3.19.6 -i https://pypi.tuna.tsinghua.edu.cn/simple
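After the downgrade, you can confirm which protobuf version TensorFlow will actually import (my own quick check; per the error message, any 3.20.x or lower should also do):

import google.protobuf
print(google.protobuf.__version__)   # expect 3.19.6 here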
The correct output looks like this:
2022-10-30 21:46:59.982971: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2022-10-30 21:47:00.006072: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3699850000 Hz
2022-10-30 21:47:00.006792: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d1633f2750 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-10-30 21:47:00.006808: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2022-10-30 21:47:00.008473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2022-10-30 21:47:00.105474: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.105762: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d1635c3f60 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-10-30 21:47:00.105784: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce GTX 1080 Ti, Compute Capability 6.1
2022-10-30 21:47:00.105990: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.106166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
2022-10-30 21:47:00.106369: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2022-10-30 21:47:00.107666: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2022-10-30 21:47:00.108687: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2022-10-30 21:47:00.108929: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2022-10-30 21:47:00.111721: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2022-10-30 21:47:00.112861: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2022-10-30 21:47:00.116688: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2022-10-30 21:47:00.116826: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117018: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2022-10-30 21:47:00.117170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2022-10-30 21:47:00.117421: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-10-30 21:47:00.117435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2022-10-30 21:47:00.117446: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2022-10-30 21:47:00.117529: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117678: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117813: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 10361 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 10409023728072267246
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 7385902535139826165
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 7109357658802926795
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 10864479437
locality {
bus_id: 1
links {
}
}
incarnation: 6537278509263123219
physical_device_desc: "device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1"
]
The key thing is that you can see your GPU model in the output; mine is a GTX 1080 Ti, so detection succeeded!
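As an extra end-to-end check (a minimal sketch using the standard TF 1.x API, not part of the original steps), you can run a tiny op inside a session with device placement logging turned on and watch it land on the GPU:

import tensorflow as tf
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0, 3.0], name="a")
    b = tf.constant([4.0, 5.0, 6.0], name="b")
    print(sess.run(a + b))            # placement log should show /device:GPU:0
print(tf.test.is_gpu_available())     # True when TF can use the GPU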
The environment above works for the vast majority of deep-learning models. I hope it helps; if it did, please give it a like!
All done!