
Example Workflows

This directory contains example workflows demonstrating the use of ComfyUI-MultiModal-Prompt-Nodes.

Basic Examples

Vision LLM Node

  1. Text-only prompt enhancement: Load a local GGUF model and enhance a simple text prompt
  2. Single image + prompt: Provide an image and text for vision-language processing
  3. Multi-image prompt: Use up to 3 images with a single prompt
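Under the hood, a multi-image prompt is typically packaged as a single chat message containing one text part and several image parts. The sketch below shows that common pattern in an OpenAI-style payload; the function name and field layout are illustrative, not this node's actual API.

```python
# Hypothetical sketch of combining one prompt with up to 3 images into a
# single vision-language chat message (OpenAI-style content parts).
# The Vision LLM Node handles this internally; names here are illustrative.
import base64

def build_vision_message(prompt: str, image_paths: list[str]) -> dict:
    """Return a single user message with one text part and N image parts."""
    if len(image_paths) > 3:
        raise ValueError("Vision LLM Node accepts at most 3 images")
    content = [{"type": "text", "text": prompt}]
    for path in image_paths:
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    return {"role": "user", "content": content}
```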

Qwen Image Edit Prompt Generator

  1. Image editing: Generate prompts for image editing tasks
  2. Multi-image context: Use multiple images to provide context
  3. Local vs API: Compare local GGUF models with a cloud API backend

Wan Video Prompt Generator

  1. Text-to-Video: Generate video prompts from text descriptions
  2. Image-to-Video: Create video prompts from a starting frame

Creating Workflows

To create your own workflows:

  1. Add the desired node from the multimodal/prompt category
  2. Configure model paths and parameters
  3. Connect to your generation nodes
  4. Save and share your workflow JSON
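The steps above can be sketched as a minimal workflow saved in ComfyUI's API-style JSON format (a mapping of node IDs to a `class_type` and its `inputs`). The node graph below is illustrative only: "VisionLLMNode" and its input names are placeholders, not the exact class names this pack registers, and the model path is a stand-in.

```python
# Sketch of steps 2-4: configure a node, wire its output into a downstream
# node, and save the graph as shareable workflow JSON. Node class names and
# input fields are hypothetical placeholders.
import json

workflow = {
    "1": {
        "class_type": "VisionLLMNode",  # hypothetical node name
        "inputs": {
            "model_path": "models/llm/example.gguf",  # placeholder path
            "prompt": "a cozy cabin in the woods",
        },
    },
    "2": {
        "class_type": "CLIPTextEncode",
        # ["1", 0] references output slot 0 of node "1" (the enhanced prompt)
        "inputs": {"text": ["1", 0], "clip": ["3", 1]},
    },
}

with open("example_workflow.json", "w") as f:
    json.dump(workflow, f, indent=2)
```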

Contributing

Feel free to submit your own example workflows via pull requests!