Type: New Feature Request
Resolution: Unresolved
Priority: Major
Value cache is another place that could take advantage of read/write locks. Currently the value retrieval process looks like:
- lock cache
- if item is not cached:
  - add item to cache
- if not enough values are cached:
  - unlock cache
  - read values from database
  - lock cache
  - add values to cache
- copy values from cache to external buffer
- unlock cache
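The current single-lock flow above could be sketched in C roughly as follows; the structures and names (`vc_item_t`, `vc_get_values`, `db_read_values`, a one-item cache) are illustrative stand-ins, not the actual value cache code:

```c
#include <pthread.h>
#include <string.h>

#define MAX_VALUES 16

/* Hypothetical, simplified cache entry. */
typedef struct
{
    int     itemid;
    int     cached;              /* is the item present in the cache? */
    double  values[MAX_VALUES];
    int     nvalues;             /* how many values are cached */
}
vc_item_t;

static pthread_mutex_t  cache_lock = PTHREAD_MUTEX_INITIALIZER;
static vc_item_t        item;    /* single-item cache for brevity */

/* Stand-in for the database read. */
static int      db_read_values(double *buf, int want)
{
    for (int i = 0; i < want; i++)
        buf[i] = (double)i;
    return want;
}

/* Current scheme: one exclusive lock guards every retrieval. */
int     vc_get_values(int itemid, double *out, int want)
{
    pthread_mutex_lock(&cache_lock);

    if (0 == item.cached)            /* add item to cache */
    {
        item.itemid = itemid;
        item.cached = 1;
        item.nvalues = 0;
    }

    if (item.nvalues < want)         /* not enough values cached */
    {
        double  buf[MAX_VALUES];
        int     n;

        pthread_mutex_unlock(&cache_lock);
        n = db_read_values(buf, want);   /* no lock held during DB I/O */
        pthread_mutex_lock(&cache_lock);

        memcpy(item.values, buf, n * sizeof(double));
        item.nvalues = n;
    }

    memcpy(out, item.values, want * sizeof(double));
    pthread_mutex_unlock(&cache_lock);

    return want;
}
```

Note that even a pure cache hit takes the exclusive lock, so concurrent readers serialize on `cache_lock`.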
With read/write locks it would be something like:
- lock cache for reading
- if item is not cached:
  - unlock cache
  - lock cache for writing
  - add item to cache
- if not enough values are cached:
  - unlock cache
  - read values from database
  - lock cache for writing
  - add values to cache
- copy values from cache to external buffer
- unlock cache
On server startup this would add some overhead, because every item would incur an extra value cache unlock/lock. But once all required values are cached, data retrieval from the value cache would require only a single read lock.
Additional read->write lock 'upgrades' would also happen when an item changes value type, but that's a rare occurrence.
However, the main problem is that each read function also updates some item properties to track the cached values and produce statistics:
- daily request range - used to track for each item the time period that must be cached
- item hits/misses - used to decide which items can be dropped when value cache switches to low memory mode
- cache hits/misses - used to provide cache statistics to frontend
In theory we could use atomics to store those properties. However, that's quite a serious design decision that should be carefully investigated.
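As a rough illustration of the atomics idea, hit/miss counters could be kept as C11 atomics and incremented with relaxed ordering while only a read lock is held, avoiding an upgrade to a write lock just to bump statistics. The structure and function names below are hypothetical:

```c
#include <stdatomic.h>

/* Hypothetical statistics block; field names are illustrative. */
typedef struct
{
    atomic_uint_fast64_t    hits;
    atomic_uint_fast64_t    misses;
}
vc_stats_t;

static vc_stats_t   stats;

/* Called from the read path while only a read lock is held:
 * relaxed atomic increments are safe under concurrent readers
 * and do not require exclusive access to the cache. */
void    vc_count_hit(void)
{
    atomic_fetch_add_explicit(&stats.hits, 1, memory_order_relaxed);
}

void    vc_count_miss(void)
{
    atomic_fetch_add_explicit(&stats.misses, 1, memory_order_relaxed);
}

unsigned long long  vc_hits(void)
{
    return (unsigned long long)atomic_load_explicit(&stats.hits,
            memory_order_relaxed);
}
```

Relaxed ordering is enough here because the counters are independent statistics; nothing synchronizes through them. The open question noted above (how this interacts with properties like the daily request range, which are more than simple counters) would still need careful investigation.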