Commit df913dd (1 parent: f8cf702)

Update docs

2 files changed: 20 additions & 8 deletions

README.adoc

Lines changed: 10 additions & 4 deletions
@@ -117,19 +117,24 @@ Test results on Arc B580 (12G) are as follows:
 |Lumina 2
 |🆗 Good

+|FLUX.1 Krea
+|🆗 Good
+
 |FLUX.2 [Klein] 9B
 |🆗 Good

 |Z-Image Turbo
-|🆗 Good
+|🆗 Slow on the first run. Oversized images may cause a freeze.

 |Qwen Image 2512
-|🆗 Good
+|❌ Fails to load

 |===

 Current known issues:

+. Compatibility and performance regression. With the current Intel GPU driver (101.8626), PyTorch 2.11.0 RC delivers worse text-to-image performance than the previous version. Consider staying on an older version, or use the latest development build (PyTorch 2.12 nightly).
+
 . Once VRAM overflows, the program crashes or freezes and needs to be restarted. Adding the `--disable-smart-memory` parameter by default alleviates this issue.

 ** Using "Disable Smart Memory Management" will increase model loading time. If you only use a single model after launching, you can disable this option to save time.

@@ -138,8 +143,9 @@ Current known issues:

 ** Closing GPU-accelerated programs (e.g., browsers) can free up some VRAM. After closing the browser, the program continues to run, and you can check the generation progress in the log window.

-. In the current version (PyTorch 2.10.0), XPU performance on Windows is inferior to that on Linux.
-It uses more VRAM and is compatible with fewer models, although inference speed is similar. WSL2 has not been tested.
+. For ComfyUI text-to-image generation specifically, XPU performance on Windows is comprehensively inferior to Linux, and this has not improved significantly over several versions.
+Conversely, on Linux each version shows visible improvements, so the performance gap keeps widening.
+If you plan to use ComfyUI heavily on Intel GPUs, consider a Linux system (Fedora is recommended first, followed by the latest Ubuntu).

 ** https://github.com/YanWenKun/ComfyUI-Docker/tree/main/xpu[Docker image for XPU]
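The regression note above is tied to a specific PyTorch release line. As a minimal, hypothetical sketch (not part of ComfyUI or its launcher), a startup script could warn when the installed build falls in the affected 2.11.x range:

```python
import re


def is_regressed_torch(version: str) -> bool:
    """Return True if `version` is in the 2.11.x line, which the note
    above reports as a text-to-image performance regression on XPU.
    Hypothetical helper for illustration only."""
    m = re.match(r"(\d+)\.(\d+)", version)
    return bool(m) and (int(m.group(1)), int(m.group(2))) == (2, 11)


# Works on full local version strings such as release candidates or nightlies:
print(is_regressed_torch("2.11.0rc1+xpu"))         # True
print(is_regressed_torch("2.12.0.dev20251101+xpu"))  # False
```

On a machine with PyTorch installed, `torch.__version__` would be the string to pass in.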

README.zh.adoc

Lines changed: 10 additions & 4 deletions
@@ -116,19 +116,24 @@ image::docs/screenshot-launcher.zh.webp["启动器截图"]
 |Lumina 2
 |🆗 OK

+|FLUX.1 Krea
+|🆗 OK
+
 |FLUX.2 [Klein] 9B
 |🆗 OK

 |Z-Image Turbo
-|🆗 OK
+|🆗 Slow on the first run; oversized images can easily cause a freeze

 |Qwen Image 2512
-|🆗 OK
+|❌ Fails to load

 |===

 Current known issues:

+. Compatibility and performance regression. With PyTorch 2.11.0 RC and Intel GPU driver 101.8626, text-to-image performance is worse than in the previous version. Consider using an older version, or go straight to the latest development build (PyTorch 2.12 nightly).
+
 . Once VRAM overflows, the program crashes or freezes and must be restarted. Adding the `--disable-smart-memory` parameter by default alleviates this.

 ** Checking "Disable Smart Memory Management" releases VRAM as soon as a model finishes running instead of keeping it cached, but it increases model loading time. If only a single model will be used in this session, the option can be left unchecked.

@@ -137,8 +142,9 @@ image::docs/screenshot-launcher.zh.webp["启动器截图"]

 ** Closing GPU-accelerated programs (such as browsers) frees some VRAM. The program keeps running after the browser is closed; generation progress can be checked in the log window.

-. In the current version (PyTorch 2.10.0), XPU performance on Windows is inferior to that on Linux.
-On Windows it uses more VRAM and is compatible with fewer models, although inference speed is similar. WSL2 has not been tested.
+. For ComfyUI text-to-image generation specifically, XPU performance on Windows is comprehensively inferior to Linux, and it has not improved noticeably over several versions.
+Conversely, on Linux every version brings visible improvements, so the gap keeps widening.
+If you plan to use ComfyUI heavily on Intel GPUs, consider a Linux system (Fedora is recommended first, followed by the latest Ubuntu).

 ** https://github.com/YanWenKun/ComfyUI-Docker/tree/main/xpu-cn[ComfyUI Docker image for XPU]
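Both READMEs point to `--disable-smart-memory` as the mitigation for VRAM-overflow crashes. A sketch of a launch wrapper, assuming a local ComfyUI checkout whose entry point is `main.py` (the wrapper itself is hypothetical, not part of the project):

```python
import subprocess
import sys

# Build the launch command with smart memory management disabled, so models
# are released from VRAM after each run instead of staying cached there.
cmd = [sys.executable, "main.py", "--disable-smart-memory"]
print(" ".join(cmd[1:]))  # main.py --disable-smart-memory
# subprocess.run(cmd, check=True)  # uncomment to actually launch ComfyUI
```

As the READMEs note, disabling smart memory trades longer model loading time for fewer out-of-VRAM crashes, so single-model sessions may prefer to drop the flag.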
