Installing CUDA when using PyTorch

Background

PyTorch comes in CPU-only and GPU-enabled builds; running on the GPU requires CUDA, so a CUDA-enabled build has to be installed separately.

Without a CUDA build, loading a model that was saved on a GPU with the following call fails at run time:

model = torch.load(model_path)

The error:

  File "test.py", line 43, in __init__
    model = torch.load(model_path)
  File "C:\Users\86137\Anaconda3\envs\Pytorch\lib\site-packages\torch\serialization.py", line 607, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\86137\Anaconda3\envs\Pytorch\lib\site-packages\torch\serialization.py", line 882, in _load
    result = unpickler.load()
  File "C:\Users\86137\Anaconda3\envs\Pytorch\lib\site-packages\torch\serialization.py", line 857, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "C:\Users\86137\Anaconda3\envs\Pytorch\lib\site-packages\torch\serialization.py", line 846, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "C:\Users\86137\Anaconda3\envs\Pytorch\lib\site-packages\torch\serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\86137\Anaconda3\envs\Pytorch\lib\site-packages\torch\serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\86137\Anaconda3\envs\Pytorch\lib\site-packages\torch\serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Changing the call as follows works; the checkpoint is simply mapped onto the CPU:

model = torch.load(model_path, map_location='cpu')
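
If the same script has to run on both CPU-only and GPU machines, a common pattern is to pick map_location from torch.cuda.is_available(). A minimal sketch, assuming model_path points to an existing checkpoint (the path here is hypothetical):

import torch

model_path = 'model.pt'  # hypothetical path; replace with your own checkpoint

# load onto the GPU when one is usable, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = torch.load(model_path, map_location=device)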

The other fix is to install a CUDA-enabled build of PyTorch:

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

The trailing --extra-index-url points to the package index for the CUDA 11.3 (cu113) builds. As a sanity check before installing, running nvidia-smi on the command line shows the highest CUDA version the installed NVIDIA driver supports; it should be at least 11.3 for these wheels.

To check whether the installation supports CUDA, run:

import torch

print(torch.__version__)          # note the double underscores
print(torch.version.cuda)         # CUDA version the installed wheel was built against; None for CPU-only builds
print(torch.cuda.is_available())  # True only when a CUDA build of torch and a usable GPU/driver are present
if torch.cuda.is_available():
    print(torch.cuda.get_device_name())  # would raise an error without a GPU, so guard it
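
Once torch.cuda.is_available() returns True, the checkpoint from the beginning can be loaded straight onto the GPU and run there. A minimal sketch, assuming model_path is a full model saved with torch.save and the input shape is only a placeholder:

import torch

model_path = 'model.pt'  # hypothetical path; replace with your own checkpoint

device = torch.device('cuda')                    # CUDA is known to be available at this point
model = torch.load(model_path, map_location=device)
model.eval()

x = torch.randn(1, 3, 224, 224, device=device)   # example input; the shape depends on your model
with torch.no_grad():
    y = model(x)
print(y.shape)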

References:

https://blog.csdn.net/Z_zfer/article/details/128978110
https://blog.csdn.net/L802380230/article/details/122489397
https://blog.csdn.net/weixin_42042072/article/details/125801302


