Replies: 1 comment
The Docling UI doesn't currently display which model is used for a given operation. The model selection is determined by the pipeline type you choose (Standard, VLM, ASR) and sometimes by advanced options or toggles. For example, the VLM pipeline typically uses a vision-language model like SmolDocling or GraniteDocling, while Standard may use a different model. The backend for PDF parsing (dlparse_v1/v2 or pypdfium2) is a separate setting and doesn't affect which model is used for conversion. If you want to ensure the newest Granite model is used, you generally need to select the appropriate pipeline (often VLM) and make sure the Granite model is available in your model cache. If you need to guarantee that a specific model is used for conversions, consider the CLI or Python API, where you can specify the model explicitly.
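To pin the pipeline (and thereby the model) from the command line instead of the UI, the `docling` CLI exposes a pipeline switch. A minimal sketch, assuming a current Docling release; the exact `--vlm-model` value is an assumption, so verify the available choices with `docling --help`:

```shell
# Force the VLM pipeline and request the GraniteDocling model explicitly.
# The --vlm-model key ("granite_docling") is an assumption; check
# `docling --help` for the exact enum values in your installed version.
docling --pipeline vlm --vlm-model granite_docling document.pdf --to md
```

With the pipeline and model fixed on the command line, there is no ambiguity about which model performed the conversion.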
Great tool! I love the UI and the docs page. The API is incredibly useful.
But if I am using the UI for testing, how does one know what model is being used?
I initially downloaded the models via:
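(The command block itself wasn't captured above; Docling's bundled model prefetch helper is typically invoked as below — a sketch, assuming the `docling-tools` entry point shipped with current Docling releases:)

```shell
# Prefetch the default model set into the local model cache
# (by default under ~/.cache/docling/models).
docling-tools models download
```

Specific model names can also be passed as arguments, and `docling-tools models download --help` should list the available ones.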
And this seemed to download several models successfully...
However, I noticed that the new IBM Granite model was not in the folder once that was finished. So I got a list of the models with the download tool:
So I did the same thing for that model:
And it successfully downloaded the model to the correct place. It is now added, but I am unsure how to tell which model is being used when a document is loaded, processed, converted, etc. For example, when I upload a PDF and convert it to Markdown, is the newest Granite model doing this, or the VLM model that was there before?
Thanks!