This directory contains example workflows demonstrating the use of ComfyUI-MultiModal-Prompt-Nodes.
- Text-only prompt enhancement: Load a local GGUF model and enhance a simple text prompt
- Single image + prompt: Provide an image and text for vision-language processing
- Multi-image prompt: Use up to 3 images with a single prompt
- Image editing: Generate prompts for image editing tasks
- Multi-image context: Use multiple images to provide context
- Local vs API: Compare a local GGUF model with a cloud API backend
- Text-to-Video: Generate video prompts from text descriptions
- Image-to-Video: Create video prompts from a starting frame
To create your own workflows:
- Add the desired node from the `multimodal/prompt` category
- Configure model paths and parameters
- Connect to your generation nodes
- Save and share your workflow JSON
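As a rough sketch, a saved workflow in ComfyUI's API-format JSON maps node IDs to a `class_type` and its `inputs`, where a two-element array like `["1", 0]` links to output 0 of node 1. The node name `MultiModalPromptEnhancer` and its input names below are illustrative placeholders, not this package's actual identifiers — check the node's UI for the real names:

```json
{
  "1": {
    "class_type": "MultiModalPromptEnhancer",
    "inputs": {
      "model_path": "models/llm/your-model.gguf",
      "prompt": "a castle at sunset",
      "max_tokens": 256
    }
  },
  "2": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": ["1", 0],
      "clip": ["3", 1]
    }
  }
}
```

In a real workflow, node 2 would feed a sampler, and node 3 would be a checkpoint loader providing the CLIP model; ComfyUI writes this structure for you when you use "Save (API Format)".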
Feel free to submit your own example workflows via pull requests!