Running fizzy on docker uses a lot of memory #2350
Replies: 6 comments 13 replies
- Same problem here. I've set a max on the container.
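For reference, here's one way to set a memory cap on the container in docker compose. This is a minimal sketch, not Fizzy's official config: the service name `fizzy` and the `1g` limit are assumptions; adjust them to your setup.

```yaml
# Sketch: capping container memory via docker compose.
# Service name and limit value are assumptions -- adjust for your deployment.
services:
  fizzy:
    # ... image, ports, etc. as in your existing compose file ...
    deploy:
      resources:
        limits:
          memory: 1g   # container is OOM-killed/capped beyond this
```

With a cap in place, the container can't starve other workloads on the host, though as noted below Fizzy seems to tolerate the cap fine.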
- Hey @locilocisu, can you share how much memory your container is using?
- That's because of Solid Queue. If you're running it for personal use and are OK with disabling it, set `SOLID_QUEUE_IN_PUMA: "false"` in your docker compose file, or tweak Solid Queue to your needs... #2343
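In compose terms, the setting above would look something like this. A minimal sketch, assuming your service is named `fizzy` (the env var itself comes from the comment above; everything else is placeholder):

```yaml
# Sketch: disabling the in-Puma Solid Queue supervisor via an env var,
# as suggested in the comment above. Service name is an assumption.
services:
  fizzy:
    environment:
      SOLID_QUEUE_IN_PUMA: "false"   # run Puma without the embedded job supervisor
```

Note that with this disabled, background jobs won't run unless you run Solid Queue some other way.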
- After some investigation, it seems that Solid Queue 1.3 in async mode has reduced memory usage. See rails/solid_queue#330 (comment). @jzimdars, could you please update Fizzy's Solid Queue version? Over 1 GB of RAM for a self-hosted, single-user kanban is over the top. Thank you for sharing your tool, though.
- I saw as high as 4 GiB in use after running for a long time, with two users and <100 cards. CPU usage was nontrivial as well. Fizzy does seem to handle a memory cap just fine, which is good. I was able to sidestep the issue more generally by dynamically spinning Fizzy down when not in use and starting the container back up when a request comes in (using traefik + sablier). Startup is pretty quick for Rails: I get a responsive page 6 or 7 seconds after a cold start.
- For folks seeing high memory consumption, can I ask how many web and job workers you have running in your container? The default configuration scales these with the physical core count of the host. So if you are running on a machine with quite a few cores, you might find it's running more workers than you actually need, and as a result the memory load is high. Each process ends up settling at some baseline memory consumption, and across a lot of processes it can quickly add up. I don't think it's leakage people are seeing; more likely the steady state simply requires a lot of memory.

  For comparison, I have a Fizzy container that's been running on a small Digital Ocean droplet for 2 weeks, and it's steady at about 900MB. But that's a single-core VPS. If I launch Fizzy on my 32-core desktop machine it quickly reaches around 3GB -- and is also running far more workers than I really need. If you find this to be the issue, you can dial down the workers by setting the appropriate environment variables.

  It's hard to have a default configuration that suits every use case. What we have now is more suited to the case where you have plenty of available memory and want to max out Fizzy's capacity on the host. But in the (common) case that you're running Fizzy alongside other workloads, it can be too memory-hungry.

  It's possible something else is afoot here too! But I'd start by trying to reduce the number of web and job processes and see if that helps.
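Dialing down the worker counts might look like the sketch below. Treat the variable names as assumptions to verify against Fizzy's own config: `WEB_CONCURRENCY` is the conventional Puma worker-count variable in Rails deployments, but the job-worker knob (shown here as a hypothetical `JOB_CONCURRENCY`) may be named differently or live in Fizzy's queue configuration.

```yaml
# Hypothetical sketch: reducing web/job worker counts via env vars so the
# container doesn't scale workers to the host's full core count.
# Both variable names are assumptions -- check Fizzy's docs/config files.
services:
  fizzy:
    environment:
      WEB_CONCURRENCY: "2"   # conventional Puma worker-count variable (assumption)
      JOB_CONCURRENCY: "1"   # hypothetical job-worker knob (assumption)
```

On a many-core host this keeps the steady-state footprint closer to the single-core numbers quoted above, at the cost of peak throughput.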

- Is anyone else seeing the same behavior, where the Fizzy docker container just holds on to the memory it was allocated? Only when I restart the container does the memory usage reset.