Fragment cache reservation should be manageable #1327
Comments
This request doesn't make much sense imo. Our tests should cover whatever the customer can/should expect, and in this case the customer should not be able to use a fragment cache smaller than 1 GiB. The fact that QA only uses 10 GB for the write cache implies they don't care much about speed, so they might as well use a SATA drive for the WRITE role, making sure they always have at least 10 GiB of space. |
@kvanhijf for testing purposes it would be nice to be able to make it smaller than 1 GB, because we are limited in space. For customers this would be good because they can then choose whether the fragment cache needs to be bigger than 1 GB, e.g. with a 10 GB global write buffer. |
The value should be a percentage, and it should also be possible to do this for a fragment cache across an SSD backend. |
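As an illustration of the percentage idea above, this is what such a setting could look like; the keys below are hypothetical and not the actual Open vStorage vPool configuration schema:

```python
# Hypothetical sketch of a percentage-based fragment cache reservation
# setting; none of these keys are the real Open vStorage config schema.
fragment_cache_settings = {
    'write': True,                        # cache fragments on writes
    'reservation_mode': 'percentage',     # 'percentage' or 'fixed'
    'reservation_percentage': 10,         # % of the global write buffer
    'reservation_size': 100 * 1024 ** 2,  # bytes, used when mode is 'fixed'
}
```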
When adding the same calculation for an SSD backend, we would have to dynamically scale the fragment cache size setting, meaning we would have to restart the ALBA proxies. So every time an additional ASD is claimed/removed we would have to do this. Is this what we want to do? |
@domsj how hard would it be to make the fragment cache settings dynamically reloadable in the proxy? |
@wimpers you're talking about changing the size of a local fragment cache, right? It's certainly possible and shouldn't be too hard. We already have a method to evict items from the local fragment cache, so it's just a matter of calling this method with the right arguments when this part of the config changes. Regarding @kvanhijf's comment: there seems to have been some confusion... claiming/removing ASDs could change the capacity of a backend, but shouldn't result in a different size allocation for local fragment caches. And for fragment caches of type alba - aka when setting up an accelerated ALBA - there's no way to specify how much of the backend may be used for caching. |
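A minimal sketch of the reload-and-evict idea described above, assuming an eviction routine already exists; all names are invented for illustration and the real proxy is not written in Python:

```python
# Hypothetical sketch: when the configured size of a local fragment cache is
# lowered, reuse the existing eviction path to shrink it in place instead of
# restarting the proxy. Names and structure are illustrative only.
class LocalFragmentCache:
    def __init__(self, max_size_bytes):
        self.max_size_bytes = max_size_bytes
        self.current_size_bytes = 0
        self._entries = {}  # fragment_id -> size in bytes

    def add(self, fragment_id, size_bytes):
        # Simplified insert; a real cache would also evict here when full.
        self._entries[fragment_id] = size_bytes
        self.current_size_bytes += size_bytes

    def evict_until_below(self, limit_bytes):
        # Drop entries until the cache fits under the limit
        # (eviction order does not matter for this sketch).
        for fragment_id in list(self._entries):
            if self.current_size_bytes <= limit_bytes:
                break
            self.current_size_bytes -= self._entries.pop(fragment_id)

    def apply_new_size(self, new_max_size_bytes):
        # Called whenever the fragment cache size changes in the config.
        self.max_size_bytes = new_max_size_bytes
        if self.current_size_bytes > new_max_size_bytes:
            self.evict_until_below(new_max_size_bytes)
```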
@domsj So is there a difference code-wise between a local fragment cache and a distributed one? (That is what I read in your last comment: you can't set a size on the fragment cache in case it is distributed.) Since all our setups now use a distributed cache, we were only thinking in that direction. The idea was to set a limit per vPool on the amount of space that a vPool can use in the distributed fragment cache. Only QA uses the local fragment cache, in case they don't have enough disks and since it is easy to set up. |
Yes, there's a big difference code-wise between a local fragment cache and one of type 'alba' (aka a distributed one). |
Created openvstorage/alba#564 for that as an FR. |
Maybe per vDisk? |
In the G milestone we added cache quota for ALBA backends. This issue is still relevant in case a local fragment cache is selected. |
Currently when you create a vPool, 10% of the global write buffer is reserved for the fragment cache. Instead of giving the fragment cache 10%, this should be manageable from the GUI/API.
E.g. Reserved amount of global write buffer for fragment cache: 100MB
This request originates from the fact that our CI environments don't meet the minimum size needed for the fragment cache. Currently the minimum is 1 GB, so the global write buffer needs to be at least 10 GB.
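For reference, a small sketch of the reservation rule described above (10% of the global write buffer with a 1 GiB floor); the helper is illustrative, not the actual framework code:

```python
GiB = 1024 ** 3

def default_fragment_cache_reservation(global_write_buffer_bytes,
                                       fraction=0.10,
                                       minimum=1 * GiB):
    """Illustrative helper: reserve a fraction of the global write buffer,
    but never less than the minimum."""
    return max(int(global_write_buffer_bytes * fraction), minimum)

# A 10 GiB global write buffer exactly reaches the 1 GiB minimum ...
assert default_fragment_cache_reservation(10 * GiB) == 1 * GiB
# ... while a smaller CI-sized buffer is still pinned to the 1 GiB floor,
# which is more than such environments can spare.
assert default_fragment_cache_reservation(5 * GiB) == 1 * GiB
```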