[Koha-devel] A three-level cache?

Jesse pianohacker at gmail.com
Fri Mar 11 19:57:07 CET 2016


I've done some testing of this on a big, complex search results page with
data from one of our largest partners, and have seen some immediate
benefits.

One optimization that seems to make a big difference for Plack, though, is
less-frequent clearing of the L1 cache. I'll be attaching a POC later
today, but the basic idea is to store a modification time in memcached
whenever a cache value is set and check that time before returning a value
from the L1 cache. If memcache has been modified more recently than the L1
cache, the L1 is cleared. This means that each Plack worker process can
have its own persistent L1 cache but clear it only when it becomes stale.

Some timing numbers to support this (the test page is a search results page
with 50 results and XSLT):

Before any caching at all, the page took 30 seconds (!) to load.
With memcached sysprefs: ~9 seconds
With an L1 cache cleared before every request: ~8.5 seconds
With an L1 cache cleared only when memcache is modified: 7 seconds

I expected the roundtrip to check the memcache modification time on every
cache lookup to cancel out the gain of the L1, but as seen above, this is
clearly not true. Any ideas about a better implementation are of course
welcomed.
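To make the invalidation scheme concrete, here is a minimal sketch in Python (not Koha's actual Perl code; the class and key names are illustrative, and a plain dict stands in for the shared memcached server). Each worker keeps its own L1 dict plus a timestamp of when that L1 was last filled; every lookup does one cheap comparison against the shared "last modified" time and drops the whole L1 if it is stale:

```python
import time

memcached = {}  # stands in for the shared memcache (L2), visible to all workers

class TwoLevelCache:
    """Per-worker L1 over a shared L2, cleared only when the L2 has changed."""

    def __init__(self):
        self.l1 = {}             # per-process in-memory cache (L1)
        self.l1_loaded_at = 0.0  # when this worker's L1 was last (re)filled

    def set(self, key, value):
        now = time.time()
        memcached[key] = value
        memcached["last_modified"] = now  # bump the shared modification time
        self.l1[key] = value
        self.l1_loaded_at = now           # our L1 is in sync as of now

    def get(self, key):
        # One cheap roundtrip: compare the shared mtime with our L1's age.
        if memcached.get("last_modified", 0.0) > self.l1_loaded_at:
            self.l1.clear()               # another worker wrote: L1 is stale
            self.l1_loaded_at = time.time()
        if key in self.l1:
            return self.l1[key]           # fresh L1 hit, no L2 roundtrip
        value = memcached.get(key)        # fall through to L2
        if value is not None:
            self.l1[key] = value          # repopulate L1 for next time
        return value
```

The point of the design is that the per-lookup cost is a single timestamp fetch rather than a full L2 read, which is why the L1 still pays off even with the extra check.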



2016-03-11 11:33 GMT-07:00 Tomas Cohen Arazi <tomascohen at gmail.com>:

> Hi
>
> 2016-03-11 10:38 GMT-03:00 Jonathan Druart <
> jonathan.druart at bugs.koha-community.org>:
> >
> > Hi devs,
> >
> > I have worked on a three-level cache to cache our stuff:
> > L1 - hashref
> > L2 - memcache
> > L3 - DB
> >
> > Please have a look at bug 11998 and bug 16044.
> > I really really would like to get your opinions on these patches.
>
> I really like the approach because it provides a specialized way of dealing
> with caching for each use case. Thus, no penalty because of the removal of
> in-memory cache (L1), and we can have proper invalidation for heavier stuff
> on memcache (L2) without the need to have a single way to cache stuff.
>
> I signed off on 11998 after reviewing and properly testing it. The only
> issue I found was that it was sysprefs-specific, and then I found that 16044
> puts the code where it belongs. We may still find some bugs in it, but this
> is a huge improvement in terms of both speed and maintainability.
>
> Congrats Robin and Jonathan
>
> --
> Tomás Cohen Arazi
> Theke Solutions (http://theke.io)
> ✆ +54 9351 3513384
> GPG: B2F3C15F
>
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/
>



-- 
Jesse Weaver
