venv "C:\A1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-31-ga31ef086
Commit hash: a31ef08
ROCm: agents=['gfx1036', 'gfx1032']
ROCm: version=6.2, using agent gfx1032
ZLUDA support: experimental
Using ZLUDA in C:\A1111\stable-diffusion-webui-directml.zluda
WARNING: you should not skip torch test unless you want CPU to work.
No ROCm runtime is found, using ROCM_HOME='C:\Program Files\AMD\ROCm\6.2'
Skipping onnxruntime installation.
You are up to date with the most recent release.
C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --update-check --skip-ort
ZLUDA device failed to pass basic operation test: index=None, device_name=AMD Radeon RX 6600 XT [ZLUDA]
CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
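For context, the "basic operation test" the launcher reports is just a small tensor operation on the ZLUDA-backed device. Below is a minimal, hedged sketch of an equivalent check (this is not the actual webui code; it only assumes that ZLUDA exposes the AMD GPU to torch as a `cuda` device, and it degrades gracefully when torch or the device is missing):

```python
import importlib.util

def basic_device_test() -> str:
    """Run a trivial tensor op on the CUDA/ZLUDA device and report the outcome."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        return "no CUDA/ZLUDA device visible"
    try:
        # Simple arithmetic on the device, in the spirit of the launcher's smoke test.
        x = torch.ones(8, device="cuda")
        (x * 2).sum().item()
        return "ok"
    except RuntimeError as e:
        return f"failed: {e}"

print(basic_device_test())
```

If this sketch prints `failed: CUDA error: operation not supported`, the problem is at the ZLUDA/driver layer, before any Stable Diffusion code runs.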
Loading weights [6ce0161689] from C:\A1111\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\A1111\stable-diffusion-webui-directml\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 10.8s (prepare environment: 13.8s, initialize shared: 0.6s, other imports: 0.4s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 0.2s).
creating model quickly: OSError
Traceback (most recent call last):
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 409, in hf_raise_for_status
response.raise_for_status()
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json
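The giveaway in this 401 is the literal `None` in the URL: the CLIP config name passed down from the model loader resolved to Python's `None`, which gets interpolated into the request path as the string "None". A tiny self-contained illustration of that failure mode (hypothetical variable names, not webui code):

```python
# A missing repo id (None) formatted into the Hub URL becomes the literal
# string "None" -- exactly what the 401 above shows.
repo_id = None
url = f"https://huggingface.co/{repo_id}/resolve/main/config.json"
print(url)  # -> https://huggingface.co/None/resolve/main/config.json
```

So the request was never going to succeed regardless of authentication; the 401 is a side effect of the nonexistent repo path.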
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 342, in cached_file
resolved_file = hf_hub_download(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 961, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1068, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1596, in _raise_on_head_call_error
raise head_call_error
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1401, in get_hf_file_metadata
r = _request_wrapper(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 285, in _request_wrapper
response = _request_wrapper(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 309, in _request_wrapper
hf_raise_for_status(response)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 459, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-67f3a7fe-338a59d16dbd9e3a6cc1ca90;c0f7be6a-e7b1-437f-b963-e71ce76ad5aa)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
Invalid username or password.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Bo_Boo\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\Bo_Boo\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Bo_Boo\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\A1111\stable-diffusion-webui-directml\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "C:\A1111\stable-diffusion-webui-directml\modules\shared_items.py", line 190, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_models.py", line 831, in load_model
sd_model = instantiate_from_config(sd_config.model, state_dict)
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_models.py", line 775, in instantiate_from_config
return constructor(**params)
File "C:\A1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "C:\A1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "C:\A1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\A1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
self.transformer = CLIPTextModel.from_pretrained(version)
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_utils.py", line 262, in _wrapper
return func(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_utils.py", line 3540, in from_pretrained
resolved_config_file = cached_file(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 365, in cached_file
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>
Failed to create model quickly; will retry using slow method.
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "C:\Users\Bo_Boo\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\Bo_Boo\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Bo_Boo\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\A1111\stable-diffusion-webui-directml\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "C:\A1111\stable-diffusion-webui-directml\modules\shared_items.py", line 190, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_models.py", line 871, in load_model
sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True) # Reload embeddings after model load as they may or may not fit the model
File "C:\A1111\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 228, in load_textual_inversion_embeddings
self.expected_shape = self.get_expected_shape()
File "C:\A1111\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 365, in encode_embedding_init_text
embedded = embedding_layer.token_embedding.wrapped(ids.to(embedding_layer.token_embedding.wrapped.weight.device)).squeeze(0)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
return F.embedding(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2264, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Stable diffusion model failed to load
Exception in thread MemMon:
Traceback (most recent call last):
File "C:\Users\Bo_Boo\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\A1111\stable-diffusion-webui-directml\modules\memmon.py", line 43, in run
torch.cuda.reset_peak_memory_stats()
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\memory.py", line 309, in reset_peak_memory_stats
return torch._C._cuda_resetPeakMemoryStats(device)
RuntimeError: invalid argument to reset_peak_memory_stats
Using already loaded model v1-5-pruned-emaonly.safetensors [6ce0161689]: done in 0.0s
*** Error completing request
*** Arguments: ('task(09paqnhumqgc5qw)', <gradio.routes.Request object at 0x00000225259D54B0>, 'CAT', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\A1111\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "C:\A1111\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "C:\A1111\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
res = process_images_inner(p)
File "C:\A1111\stable-diffusion-webui-directml\modules\processing.py", line 1007, in process_images_inner
model_hijack.embedding_db.load_textual_inversion_embeddings()
File "C:\A1111\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 228, in load_textual_inversion_embeddings
self.expected_shape = self.get_expected_shape()
File "C:\A1111\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
File "C:\A1111\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 365, in encode_embedding_init_text
embedded = embedding_layer.token_embedding.wrapped(ids.to(embedding_layer.token_embedding.wrapped.weight.device)).squeeze(0)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
return F.embedding(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2264, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Traceback (most recent call last):
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\A1111\stable-diffusion-webui-directml\modules\call_queue.py", line 104, in f
mem_stats = {k: -(v//-(1024*1024)) for k, v in shared.mem_mon.stop().items()}
File "C:\A1111\stable-diffusion-webui-directml\modules\memmon.py", line 99, in stop
return self.read()
File "C:\A1111\stable-diffusion-webui-directml\modules\memmon.py", line 81, in read
torch_stats = torch.cuda.memory_stats(self.device)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\memory.py", line 258, in memory_stats
stats = memory_stats_as_nested_dict(device=device)
File "C:\A1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\memory.py", line 270, in memory_stats_as_nested_dict
return torch._C._cuda_memoryStats(device)
RuntimeError: invalid argument to memory_allocated
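As an aside on the `mem_stats` line in the traceback above: `-(v//-(1024*1024))` is the standard negative-floor-division idiom for an integer ceiling divide, here rounding byte counts up to whole MiB. A standalone sketch of the idiom (the helper name is ours, not webui's):

```python
def ceil_div(v: int, d: int) -> int:
    # -(v // -d) floors the negated quotient, which equals ceil(v / d)
    # for positive d, without any floating point.
    return -(v // -d)

MIB = 1024 * 1024
print(ceil_div(1, MIB))        # 1: even a single byte reports as 1 MiB
print(ceil_div(5 * MIB, MIB))  # 5: exact multiples are unchanged
```

The `invalid argument` errors themselves come from `torch.cuda.memory_stats` / `reset_peak_memory_stats` being called after the ZLUDA device failed its basic operation test, so the memory monitor has no valid device to query.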