I'm having trouble installing and deploying Stable Diffusion. When I run python launch.py --listen --lowvram --no-half --skip-torch-cuda-test
I get an error that seems to point to a broken PyTorch setup in the conda environment, but I don't know how to fix it.
The full error output is attached:
Python 3.10.6 (main, Oct 24 2022, 16:07:47) [GCC 11.2.0]
Commit hash: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Cloning Stable Diffusion into repositories/stable-diffusion-stability-ai...
Cloning Taming Transformers into repositories/taming-transformers...
Cloning K-diffusion into repositories/k-diffusion...
Cloning CodeFormer into repositories/CodeFormer...
Cloning BLIP into repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments: --listen --lowvram --no-half --ckpt /root/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors --lora-dir /root/models/Lora
/root/.conda/envs/aigc/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/root/.conda/envs/aigc/lib/python3.10/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSs'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
/root/.conda/envs/aigc/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Traceback (most recent call last):
File "/root/stable-diffusion-webui/launch.py", line 361, in <module>
start()
File "/root/stable-diffusion-webui/launch.py", line 352, in start
import webui
File "/root/stable-diffusion-webui/webui.py", line 15, in <module>
from modules import import_hook, errors, extra_networks, ui_extra_networks_checkpoints
File "/root/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py", line 6, in <module>
from modules import shared, ui_extra_networks, sd_models
File "/root/stable-diffusion-webui/modules/sd_models.py", line 17, in <module>
from modules.sd_hijack_inpainting import do_inpainting_hijack
File "/root/stable-diffusion-webui/modules/sd_hijack_inpainting.py", line 7, in <module>
import ldm.models.diffusion.ddpm
File "/root/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 12, in <module>
import pytorch_lightning as pl
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
from pytorch_lightning.callbacks import Callback # noqa: E402
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module>
from pytorch_lightning.callbacks.callback import Callback
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/callbacks/callback.py", line 25, in <module>
from pytorch_lightning.utilities.types import STEP_OUTPUT
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/utilities/types.py", line 28, in <module>
from torchmetrics import Metric
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/__init__.py", line 14, in <module>
from torchmetrics import functional # noqa: E402
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/functional/__init__.py", line 14, in <module>
from torchmetrics.functional.audio._deprecated import _permutation_invariant_training as permutation_invariant_training
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/functional/audio/__init__.py", line 14, in <module>
from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/functional/audio/pit.py", line 23, in <module>
from torchmetrics.utilities import rank_zero_warn
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/__init__.py", line 14, in <module>
from torchmetrics.utilities.checks import check_forward_full_state_property
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/checks.py", line 25, in <module>
from torchmetrics.metric import Metric
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/metric.py", line 30, in <module>
from torchmetrics.utilities.data import (
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/data.py", line 22, in <module>
from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/imports.py", line 48, in <module>
_TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version("torchaudio", operator.ge, "0.10.0")
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/lightning_utilities/core/imports.py", line 73, in compare_version
pkg = importlib.import_module(package)
File "/root/.conda/envs/aigc/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/__init__.py", line 1, in <module>
from torchaudio import ( # noqa: F401
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/_extension/__init__.py", line 43, in <module>
_load_lib("libtorchaudio")
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/_extension/utils.py", line 61, in _load_lib
torch.ops.load_library(path)
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torch/_ops.py", line 643, in load_library
ctypes.CDLL(path)
File "/root/.conda/envs/aigc/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZNK5torch8autograd4Node4nameEv
(aigc) [root@iZuf61kljas3cjo5gb9jauZ stable-diffusion-webui]# python launch.py --listen --lowvram --no-half
Python 3.10.6 (main, Oct 24 2022, 16:07:47) [GCC 11.2.0]
Commit hash: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Traceback (most recent call last):
File "/root/stable-diffusion-webui/launch.py", line 360, in <module>
prepare_environment()
File "/root/stable-diffusion-webui/launch.py", line 272, in prepare_environment
run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
File "/root/stable-diffusion-webui/launch.py", line 129, in run_python
return run(f'"{python}" -c "{code}"', desc, errdesc)
File "/root/stable-diffusion-webui/launch.py", line 105, in run
raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/root/.conda/envs/aigc/bin/python" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
(aigc) [root@iZuf61kljas3cjo5gb9jauZ stable-diffusion-webui]# python launch.py --listen --lowvram --no-half --skip-torch-cuda-test
Python 3.10.6 (main, Oct 24 2022, 16:07:47) [GCC 11.2.0]
Commit hash: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Installing requirements for Web UI
Launching Web UI with arguments: --listen --lowvram --no-half --ckpt /root/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors --lora-dir /root/models/Lora
/root/.conda/envs/aigc/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/root/.conda/envs/aigc/lib/python3.10/site-packages/torchvision/image.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSs'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
/root/.conda/envs/aigc/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Traceback (most recent call last):
File "/root/stable-diffusion-webui/launch.py", line 361, in <module>
start()
File "/root/stable-diffusion-webui/launch.py", line 352, in start
import webui
File "/root/stable-diffusion-webui/webui.py", line 15, in <module>
from modules import import_hook, errors, extra_networks, ui_extra_networks_checkpoints
File "/root/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py", line 6, in <module>
from modules import shared, ui_extra_networks, sd_models
File "/root/stable-diffusion-webui/modules/sd_models.py", line 17, in <module>
from modules.sd_hijack_inpainting import do_inpainting_hijack
File "/root/stable-diffusion-webui/modules/sd_hijack_inpainting.py", line 7, in <module>
import ldm.models.diffusion.ddpm
File "/root/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 12, in <module>
import pytorch_lightning as pl
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
from pytorch_lightning.callbacks import Callback # noqa: E402
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module>
from pytorch_lightning.callbacks.callback import Callback
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/callbacks/callback.py", line 25, in <module>
from pytorch_lightning.utilities.types import STEP_OUTPUT
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/pytorch_lightning/utilities/types.py", line 28, in <module>
from torchmetrics import Metric
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/__init__.py", line 14, in <module>
from torchmetrics import functional # noqa: E402
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/functional/__init__.py", line 14, in <module>
from torchmetrics.functional.audio._deprecated import _permutation_invariant_training as permutation_invariant_training
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/functional/audio/__init__.py", line 14, in <module>
from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/functional/audio/pit.py", line 23, in <module>
from torchmetrics.utilities import rank_zero_warn
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/__init__.py", line 14, in <module>
from torchmetrics.utilities.checks import check_forward_full_state_property
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/checks.py", line 25, in <module>
from torchmetrics.metric import Metric
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/metric.py", line 30, in <module>
from torchmetrics.utilities.data import (
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/data.py", line 22, in <module>
from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/imports.py", line 48, in <module>
_TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version("torchaudio", operator.ge, "0.10.0")
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/lightning_utilities/core/imports.py", line 73, in compare_version
pkg = importlib.import_module(package)
File "/root/.conda/envs/aigc/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/__init__.py", line 1, in <module>
from torchaudio import ( # noqa: F401
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/_extension/__init__.py", line 43, in <module>
_load_lib("libtorchaudio")
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/_extension/utils.py", line 61, in _load_lib
torch.ops.load_library(path)
File "/root/.conda/envs/aigc/lib/python3.10/site-packages/torch/_ops.py", line 643, in load_library
ctypes.CDLL(path)
File "/root/.conda/envs/aigc/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /root/.conda/envs/aigc/lib/python3.10/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZNK5torch8autograd4Node4nameEv
Based on the information you provided, the problem is likely that the PyTorch version in your conda environment is incompatible with the version Stable Diffusion requires. You can try the following:
conda create -n stable_diffusion python=3.8
conda activate stable_diffusion
pip install -r requirements.txt
If you still run into problems, check whether all of the dependencies listed in requirements.txt are compatible with the other libraries in your conda environment; if they are not, you may need to manually upgrade or downgrade them. You can also remove the existing PyTorch stack first:
conda uninstall pytorch torchvision torchaudio cudatoolkit
Then get the appropriate install command for your system and CUDA version from the official PyTorch website. For example, for CUDA 10.2 you can install PyTorch with:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch -c conda-forge
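After reinstalling, a quick sanity check (a minimal sketch; run it inside the activated environment) confirms that torch, torchvision and torchaudio all import and that CUDA is visible:
# prints the three versions and whether the CUDA runtime is usable
python -c "import torch, torchvision, torchaudio; print(torch.__version__, torchvision.__version__, torchaudio.__version__, torch.cuda.is_available())"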
Hopefully these suggestions help. If you have any other questions, feel free to ask.
Reinstall torchaudio.
Uninstall it first:
conda uninstall torchaudio
Then reinstall it from the pytorch channel, so the build matches the installed torch:
conda install torchaudio -c pytorch
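After the reinstall, a quick import check (a minimal sketch) verifies that libtorchaudio now loads against the installed torch:
# should print both versions without the "undefined symbol" OSError from libtorchaudio.so
python -c "import torch, torchaudio; print(torch.__version__, torchaudio.__version__)"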
This is likely caused by a problem with the PyTorch version and its CUDA support in your conda environment. To resolve it, you can try the following steps:
Make sure the latest versions of PyTorch and CUDA are installed in your conda environment. You can check the installed PyTorch and CUDA versions with:
conda list | grep torch
conda list | grep cuda
If your PyTorch and CUDA packages are not up to date, you can update them with:
conda install pytorch torchvision cudatoolkit=11.0 -c pytorch
If you already have the latest PyTorch and CUDA installed but the error persists, you can try rebuilding torchvision and torchaudio from source against the installed torch, following the build instructions in their official repositories.
If that still does not solve the problem, try reinstalling PyTorch and CUDA, and make sure their versions and the relevant environment variables are configured correctly.
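Since the traceback ends in an undefined-symbol error from libtorchaudio.so (and torchvision reports a similar one), the most likely cause is that torchaudio and torchvision were built against a different torch release than the one actually installed. A minimal check, as a sketch:
# the versions the interpreter actually imports; they should come from the same release line
# (for example, torch 2.0.1 pairs with torchvision 0.15.2 and torchaudio 2.0.2)
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"
# also look for a second copy installed via pip, which can shadow the conda packages
pip list 2>/dev/null | grep -i torch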
Problems installing and deploying Stable Diffusion are often caused by PyTorch configuration or environment issues. Here are some possible solutions:
Make sure conda and PyTorch are installed correctly. You can run conda list to see the installed packages and confirm that PyTorch is present.
If PyTorch is not installed yet, or you want to reinstall it, you can do so with conda. The following commands create a new conda environment and install PyTorch into it (Python 3.10 matches the 3.10.6 interpreter shown in your log):
conda create -n myenv python=3.10
conda activate myenv
conda install pytorch torchvision torchaudio cpuonly -c pytorch
Note that the command above installs the CPU-only build of PyTorch. If you need the GPU build, choose the install command that matches your GPU and CUDA version.
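For reference, if the server has an NVIDIA GPU with a recent driver, the GPU build can be installed from the official pytorch channel; the CUDA pin below (11.8) is only an example, pick the version your driver supports:
# GPU build of the torch stack; adjust pytorch-cuda to your driver's supported CUDA version
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia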
If PyTorch is installed correctly but the problem persists, try updating it to the latest version:
conda update pytorch torchvision torchaudio -c pytorch
If the problem still remains, it may be caused by other dependencies or by the environment configuration. Look at the details in the error message and investigate further from there.
Note that these are common fixes, and the exact steps can differ depending on your operating system, environment and other factors. If the problem persists, I suggest consulting the relevant documentation and community forums, or contacting the developers, for more specific help.
There can be many reasons for problems when installing and deploying Stable Diffusion. For the conda and PyTorch configuration issue you mention, here are some common fixes:
Verify the environment: make sure your setup meets Stable Diffusion's requirements. First confirm that suitable versions of Python and conda are installed and that the environment variables are configured correctly.
Update conda: try updating conda to the latest version with:
conda update conda
Clean up the environment: if other versions of PyTorch or related dependencies were installed before, they can conflict. Try uninstalling the existing PyTorch stack and then reinstalling the dependencies Stable Diffusion needs.
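A minimal cleanup sketch (the environment name aigc is taken from your log; adjust as needed, and note that one of the two remove commands may simply report that nothing is installed, depending on how the packages were originally installed):
conda activate aigc
conda remove pytorch torchvision torchaudio     # remove the conda-installed torch stack
pip uninstall -y torch torchvision torchaudio   # remove any pip-installed copies that may shadow it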
Create a virtual environment: to avoid conflicts with the existing environment, you can create a fresh one and install Stable Diffusion's dependencies into it:
conda create -n stable-diffusion python=3.8
Install PyTorch: make sure you install a compatible PyTorch build. Depending on your system and hardware, install it from the official PyTorch website or with conda. For example, for the CPU-only build:
conda install pytorch cpuonly -c pytorch
Resolve dependencies: before installing Stable Diffusion, make sure all of its required dependencies are installed. Check the Stable Diffusion documentation or README for the list of dependencies and install them one by one.
For the conda PyTorch configuration problem you hit while installing and deploying Stable Diffusion, here are some possible fixes:
Update conda: first, make sure you are running the latest version of conda. You can update it with:
conda update -n base -c defaults conda
Create a new conda environment: try creating a fresh environment and installing Stable Diffusion and its dependencies there (Python 3.10 matches the interpreter in your log):
conda create -n sd-env python=3.10
conda activate sd-env
Install PyTorch: in the new conda environment, install the correct PyTorch build with conda or pip. For example, to install the CPU-only build:
conda install pytorch cpuonly -c pytorch
Check the CUDA setup: if your machine has an NVIDIA GPU and you want GPU acceleration, make sure CUDA is configured correctly: the GPU driver must be installed and compatible with the PyTorch build you installed.
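As a quick sketch of that check, confirm the driver is visible and that the installed torch build actually ships with CUDA support (torch.version.cuda is None for CPU-only builds):
nvidia-smi      # driver version and visible GPUs
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"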
Run the script: after the steps above, run python launch.py --listen --lowvram --no-half --skip-torch-cuda-test again and check whether the error is gone.
If you run into problems while installing and deploying Stable Diffusion, the PyTorch configuration may be at fault. Here are some steps to try:
Check that pytorch is installed correctly: before running python launch.py, make sure pytorch is installed and that its version matches what Stable Diffusion requires. You can start a Python interpreter with python and then run import torch to see whether it raises an error.
Confirm the conda environment: before running python launch.py, make sure the correct conda environment is activated. Run conda activate your_env (replacing your_env with the actual environment name) and check that activation succeeds.
Update pytorch and torchvision: if pytorch and torchvision are already installed, try updating both packages to their latest versions to keep them compatible with Stable Diffusion, for example with conda update pytorch torchvision.
Check the CUDA version: if CUDA is installed on your system, make sure its version matches one that Stable Diffusion supports. You can run nvcc --version to see the CUDA version.
Check the GPU driver: if a GPU driver is installed, make sure its version is one Stable Diffusion supports. You can run nvidia-smi to see the driver version.
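Since your log also shows the warning 'Torch not compiled with CUDA enabled', it is worth confirming whether a CPU-only build of torch ended up in the environment; a minimal sketch:
conda list pytorch      # a CPU-only conda build usually shows "cpu" in its build string
python -c "import torch; print(torch.__version__)"   # CPU-only pip wheels report versions like 2.0.1+cpu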