
[Bug]: LDSR upscaler breaks prompt token counters #12942

@catboxanon

Description

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

As the title says: after running the LDSR upscaler, the prompt token counters stop working. This doesn't make the rest of the application unusable, but the token counters stay broken until the webui is restarted.

Steps to reproduce the problem

  1. Enter a prompt and note that the token counters update correctly
  2. Generate an image with hires fix, using the LDSR upscaler
  3. After the image is generated, modify the prompt and observe the error thrown in the console and the UI

What should have happened?

The LDSR upscaler should not break the behavior of the prompt token counters.

Sysinfo

sysinfo.txt

What browsers do you use to access the UI?

Mozilla Firefox

Console logs

File "Q:\AI\venv\a1111\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "Q:\AI\venv\a1111\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "Q:\AI\venv\a1111\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "Q:\AI\venv\a1111\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "Q:\AI\venv\a1111\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "Q:\AI\venv\a1111\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "Q:\AI\venv\a1111\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "Q:\AI\git\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "Q:\AI\git\stable-diffusion-webui\modules\ui.py", line 168, in update_token_counter
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "Q:\AI\git\stable-diffusion-webui\modules\ui.py", line 168, in <listcomp>
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "Q:\AI\git\stable-diffusion-webui\modules\sd_hijack.py", line 301, in get_prompt_lengths
    _, token_count = self.clip.process_texts([text])
  File "Q:\AI\venv\a1111\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'process_texts'
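
For context, the traceback bottoms out in torch.nn.Module.__getattr__, which means model_hijack.clip is no longer the hijacked CLIP wrapper at that point but a bare torch.nn.Identity. A minimal sketch of the failure mode, with HijackedClip as a hypothetical stand-in for the real wrapper:

    import torch

    class HijackedClip(torch.nn.Module):
        """Hypothetical stand-in for the wrapper that owns process_texts()."""
        def process_texts(self, texts):
            # Fake token count, just enough for the happy path to return.
            return None, sum(len(t.split()) for t in texts)

    clip = HijackedClip()
    print(clip.process_texts(["a photo of a cat"]))  # works: (None, 5)

    clip = torch.nn.Identity()  # what seems to be left behind after LDSR runs
    clip.process_texts(["a photo of a cat"])
    # AttributeError: 'Identity' object has no attribute 'process_texts'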

Additional information

#9466 was similar to this issue, but it was glossed over because the OP was actually running with --ui-debug-mode. That is not the case here.
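
For what it's worth, a guard in modules/sd_hijack.py along the lines below would keep the counter from erroring. This is only a sketch of one possible mitigation, paraphrasing get_prompt_lengths from memory rather than quoting the repository's actual code:

    # Hypothetical defensive version of StableDiffusionModelHijack.get_prompt_lengths.
    # If the hijacked CLIP wrapper was swapped out (e.g. for torch.nn.Identity),
    # return placeholders instead of raising AttributeError.
    def get_prompt_lengths(self, text):
        if self.clip is None or not hasattr(self.clip, "process_texts"):
            return "-", "-"  # token counter shows a blank instead of an error

        _, token_count = self.clip.process_texts([text])
        return token_count, self.clip.get_target_prompt_token_count(token_count)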
