MultiRefCache has a potential race condition, is incompatible with JSR-107, and should be removed so that Sakai can support distributed caching.
The multiref cache is meant to automatically handle the case where a cache is full of entries that are each linked to a single cache entry containing a set of cache keys. When one of the single entries expires, the linked set should expire as well, and vice versa. As implemented, the set of keys is stored in the cache object on each server, so it is not cluster safe and should no longer be used. I have deprecated it to make this clear.
It still works the way it used to, but it should simply be rewritten to handle this special case using two caches.
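The two-cache rewrite suggested above could look roughly like this (a sketch only; class and method names are hypothetical, and plain concurrent maps stand in for what would really be JSR-107 style Cache instances). The point is that the ref-to-keys set is itself stored in a cache rather than in the cache object, so a distributed cache implementation can replicate it across the cluster:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: two plain caches instead of MultiRefCache.
class TwoCacheSketch {
    final Map<String, Object> entryCache = new ConcurrentHashMap<>();    // key -> payload
    final Map<String, Set<String>> refCache = new ConcurrentHashMap<>(); // ref -> linked keys

    void put(String key, Object payload, String ref) {
        entryCache.put(key, payload);
        refCache.computeIfAbsent(ref, r -> ConcurrentHashMap.newKeySet()).add(key);
    }

    // Expiring a ref expires every linked entry; since both lookups go
    // through caches, a distributed implementation keeps all nodes consistent.
    void expireRef(String ref) {
        Set<String> keys = refCache.remove(ref);
        if (keys != null) keys.forEach(entryCache::remove);
    }
}
```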
- MultiRefCache (MRC) stores a map in the cache object (but not in the cache itself); the map is really just a set, since keys and values are always identical
- MRC stores a MultiRefCacheEntry (MRCE) as the value in the cache
- MRCE contains a list of refs (a CopyOnWriteArrayList) along with the boolean payload; this list is really a set, but the check for existing values is handled manually in the code
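The structure described in the bullets above can be sketched as follows (a simplified illustration, not the actual Sakai code; only MultiRefCacheEntry is a real class name, and its fields here are assumptions):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the entry stored as the cache value.
class MultiRefCacheEntry {
    final Object payload; // the cached value (e.g. a boolean unlock result)
    // A list used as a set: the duplicate check is done manually before add()
    final List<String> refs = new CopyOnWriteArrayList<>();

    MultiRefCacheEntry(Object payload) { this.payload = payload; }

    void addRef(String ref) {
        if (!refs.contains(ref)) refs.add(ref); // manual "set" semantics
    }
}

// Sketch of the cache object's layout.
class MultiRefCacheSketch {
    // The actual cache: key -> entry
    final Map<String, MultiRefCacheEntry> cache = new ConcurrentHashMap<>();
    // Held in the cache *object*, not in the cache itself, so it lives on
    // each server independently and is not cluster safe:
    // ref -> map used as a set of cache keys (keys and values identical)
    final Map<String, Map<String, String>> refMap = new ConcurrentHashMap<>();

    void put(String key, Object payload, String ref) {
        MultiRefCacheEntry e = new MultiRefCacheEntry(payload);
        e.addRef(ref);
        cache.put(key, e);
        refMap.computeIfAbsent(ref, r -> new ConcurrentHashMap<>()).put(key, key);
    }

    // Invalidating a ref removes every cache key linked to it,
    // but only on the server where this object lives.
    void invalidateRef(String ref) {
        Map<String, String> keys = refMap.remove(ref);
        if (keys != null) keys.keySet().forEach(cache::remove);
    }
}
```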
Used in BaseAliasService.java and SakaiSecurity.java
Aaron Z's stats from a fairly high-load server:
org.sakaiproject.authz.api.SecurityService.cache: count:50000 hits:23557473 misses:1251242 hit%:94
Sam O's stats from a 40k+ American higher-ed institution:
org.sakaiproject.authz.api.SecurityService.cache: count:10000 hits:11052126 misses:745242 hit%:93
No stats are available for AliasService.targetCache since it was added after the 2.9 release.
Security Cache profiling:
1 user logging in and accessing 10 tools in 2 sites produces entries (Ehcache[key => payload], 296 + 1485 = 1781) and refs (Map[key => value], 70 + 1485 = 1555).
The invalidation entries are about 10x larger than the data actually being cached, and they are all stored twice.
1) A change to site.helper or user.template essentially clears the entire cache, but it does so via massive manual checks, leaving roughly 3 entries (when testing with 1 admin and 1 student user). This is likely working as intended, but it is very inefficient because each purge requires iterating over the map.
2) Accessing the roster as a student adds 138 new cached unlock checks for a site with 4 users (1 of whom is a super admin); this typically doubles the number of cached perm checks compared to accessing all other core tools.
3) The invalidation maps contain both /site/6d7fe313-ae3b-49be-a75c-048f5de75f61 and /realm//site/6d7fe313-ae3b-49be-a75c-048f5de75f61; they should contain only the second, since the first is never used in the unlock cache keys.
4) All realm refs contain a // after "realm"; this is consistent but seems wrong.
New security caching solution:
if user.template, site.helper, etc. change, then clear the entire security cache
else if the perms in a site change, loop through all possible site users and the changed permissions and remove all those entries from the cache (including the entry for the anon user - e.g. unlock@@...)
else if the perms for a user change, do the same as for site perms, but over all the user's sites and the changed permissions
else if a user is added to or removed from super user status, update the cache entries (it is easiest to simply update the cache when this happens rather than invalidating)
Cache keys are: unlock@
This strategy eliminates the need to store the invalidation keys and is much simpler to code.
There is a very good chance many of those entries would not be in the cache, but that should not cause a problem. (If it proves problematic, we could do key checks to cut the removals down, but I don't think that would actually be more efficient.)
Getting all possible perms is cheap; that data is already in memory.
Getting all site ids for a user or all user ids for a site might be a little more costly, but the idea is that this is a rare case.
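The proposed strategy above can be sketched as follows. This is a hypothetical illustration, not the actual SakaiSecurity code: the key format unlock@userId@perm@ref is assumed from the unlock@@ anon example above, the method names are made up, and the caller is assumed to supply the user/site id lists from whatever lookups Sakai provides:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed security cache invalidation strategy.
class SecurityCacheSketch {
    final Map<String, Boolean> cache = new ConcurrentHashMap<>(); // unlock key -> result

    // Assumed key format: unlock@<userId>@<permission>@<ref>; empty userId = anon
    static String key(String userId, String perm, String ref) {
        return "unlock@" + userId + "@" + perm + "@" + ref;
    }

    // user.template, site.helper, etc. changed: clear the whole cache
    void templateChanged() {
        cache.clear();
    }

    // perms in a site changed: remove entries for every possible site user
    // and every changed permission, plus the anon-user entry
    void sitePermsChanged(String siteRef, List<String> userIds, List<String> changedPerms) {
        for (String perm : changedPerms) {
            cache.remove(key("", perm, siteRef)); // anon user entry
            for (String userId : userIds) {
                cache.remove(key(userId, perm, siteRef)); // harmless no-op if not cached
            }
        }
    }

    // perms for a user changed: the same loop, over all the user's sites
    void userPermsChanged(String userId, List<String> siteRefs, List<String> changedPerms) {
        for (String perm : changedPerms) {
            for (String siteRef : siteRefs) {
                cache.remove(key(userId, perm, siteRef));
            }
        }
    }

    // super user status toggled: update entries in place rather than invalidating
    // (on demotion this sketch simply drops the entries so they get recomputed)
    void superUserChanged(String userId, boolean isSuperUser) {
        String prefix = "unlock@" + userId + "@";
        for (String k : cache.keySet()) { // ConcurrentHashMap tolerates concurrent mutation
            if (k.startsWith(prefix)) {
                if (isSuperUser) cache.put(k, Boolean.TRUE); // a super user passes every check
                else cache.remove(k);
            }
        }
    }
}
```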