Is there an existing issue for this?
What happened?
As the title says: generating an image with the LDSR upscaler breaks the prompt token counters. This doesn't make the rest of the application unusable, but the token counters cannot be used until the webui is restarted.
Steps to reproduce the problem
- Enter a prompt and note that the token counters work correctly
- Generate an image with hires fix, using the LDSR upscaler
- After the image is generated, modify the prompt and observe the error thrown in the console and the UI (the snippet after this list can confirm the broken state)
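
For anyone debugging this, the broken state can be confirmed with a quick check like the following (a sketch, assuming the modules layout visible in the traceback below; model_hijack.clip is the same attribute chain the token counter uses):

# Hypothetical quick check from the webui's Python environment.
from modules import sd_hijack

# Healthy state: the hijacked CLIP embedder wrapper.
# Broken state after an LDSR hires-fix generation: torch.nn.Identity,
# which matches the AttributeError in the console logs below.
print(type(sd_hijack.model_hijack.clip))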
What should have happened?
The LDSR upscaler should not break the behavior of the prompt token counters.
Sysinfo
sysinfo.txt
What browsers do you use to access the UI?
Mozilla Firefox
Console logs
File "Q:\AI\venv\a1111\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "Q:\AI\venv\a1111\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "Q:\AI\venv\a1111\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "Q:\AI\venv\a1111\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "Q:\AI\venv\a1111\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "Q:\AI\venv\a1111\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "Q:\AI\venv\a1111\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "Q:\AI\git\stable-diffusion-webui\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "Q:\AI\git\stable-diffusion-webui\modules\ui.py", line 168, in update_token_counter
token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
File "Q:\AI\git\stable-diffusion-webui\modules\ui.py", line 168, in <listcomp>
token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
File "Q:\AI\git\stable-diffusion-webui\modules\sd_hijack.py", line 301, in get_prompt_lengths
_, token_count = self.clip.process_texts([text])
File "Q:\AI\venv\a1111\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'process_texts'
Additional information
#9466 was similar to this issue, but it was glossed over because the OP was actually running with --ui-debug-mode. That is not the case here, however.
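
Judging from the traceback, self.clip in modules/sd_hijack.py ends up pointing at a torch.nn.Identity after LDSR runs, so any call to process_texts fails. A defensive guard along these lines would at least keep the token counter callback from erroring (a sketch only, untested; everything except the process_texts call shown in the traceback is assumed):

# modules/sd_hijack.py (sketch, untested) -- guard get_prompt_lengths
# against self.clip being replaced by a torch.nn.Identity after LDSR runs.
def get_prompt_lengths(self, text):
    if self.clip is None or not hasattr(self.clip, "process_texts"):
        # self.clip is no longer the hijacked text encoder; report
        # placeholder counts instead of raising AttributeError.
        return "-", "-"

    _, token_count = self.clip.process_texts([text])  # line from the traceback
    # ... rest of the original method unchanged ...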