A Local LRU Cache

Fri 29 March 2019 by Moshe Zadka
"It is a truth universally acknowledged, that a shared state in possession of mutability, must be in want of a bug." -- with apologies to Jane Austen
As Ms. Austen, and Henrik Eichenhardt, taught us, shared mutable state is the root of all evil.
Yet, the official documentation of functools tells us to write code like:
```python
@lru_cache(maxsize=32)
def get_pep(num):
    'Retrieve text of a Python Enhancement Proposal'
    resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
    try:
        with urllib.request.urlopen(resource) as s:
            return s.read()
    except urllib.error.HTTPError:
        return 'Not Found'
```
(This code is copied from the official documentation, verbatim.)
This is module-global mutable state. It doesn't get any more shared than that: every import of the module will share the object! We try and pretend like there is no state, that the cache is merely an optimization, but very quickly things start falling apart: why would the documentation even tell us how to get back the original function if the cache is so benign?
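A minimal sketch (using a toy `square` function rather than the PEP fetcher, to avoid network access) shows how much mutable state the decorator quietly attaches to the function:

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def square(x):
    return x * x

square(3)
square(3)

# The wrapper carries visible mutable state: hit/miss counters.
info = square.cache_info()
print(info.hits, info.misses)  # 1 hit, 1 miss

# The documented escape hatches hint that the cache is not so benign:
square.cache_clear()           # wipe the shared state
original = square.__wrapped__  # recover the undecorated function
```

Every importer of the module sees the same counters, the same entries, and the same `cache_clear` button.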
Indeed, decorating the function with lru_cache is anything but benign!
For one, because the cache is mutable state shared across threads, the implementation introduces thread locking, with all the resulting complexity and occasional surprising performance issues.
Another example of non-benign-ness: a transient error, such as a temporary HTTPError, will linger on, making all subsequent requests "fail", until a cache eviction (because an unrelated code path went through several PEPs) causes a retry.
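The lingering-failure bug is easy to reproduce. Here is a sketch with a hypothetical `flaky_fetch` standing in for the network call; it fails only once, yet the cached failure keeps being served:

```python
from functools import lru_cache

calls = []

def flaky_fetch(num):
    # Hypothetical stand-in for a network fetch: fails on the first call only.
    calls.append(num)
    if len(calls) == 1:
        return 'Not Found'  # transient error, swallowed like the HTTPError above
    return 'PEP text'

cached_fetch = lru_cache(maxsize=32)(flaky_fetch)

first = cached_fetch(8)   # the transient failure...
second = cached_fetch(8)  # ...is now cached: still 'Not Found'
print(first, second)      # Not Found Not Found
```

The underlying function has long since recovered, but no caller will see that until the entry happens to be evicted.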
These are exactly the kind of bugs that lead to warnings against shared mutable state!
If we want to cache, let us own the cache explicitly in the calling code, instead of letting a global decorator dictate it. Fortunately, there is a way to use the LRU cache properly.
First, remove the decorator from the implementation:
```python
def get_pep(num):
    'Retrieve text of a Python Enhancement Proposal'
    # Same code as in the official example
```
Then, in the using code, build a cache:
```python
def analyze_peps():
    cached_get_pep = lru_cache(maxsize=32)(get_pep)
    all_peps, pep_by_type = analyze_index(cached_get_pep(0))
    words1 = get_words_in_peps(cached_get_pep, all_peps)
    words2 = get_words_in_informational(cached_get_pep,
                                        pep_by_type["Informational"])
    do_something(words1, words2)
```
Notice that in this example, the lifetime of the cache is relatively clear: we create it at the beginning of the function, pass it to the called functions, and then it goes out of scope and is deleted. (Barring one of those functions sneakily keeping a reference, which would be a bad implementation, and visible when reviewing it.)
This means we do not have to worry about cached failures outliving the function: if we retry analyze_peps, we know that it will retry retrieving any PEPs, even those that failed before.
If we wanted the cache to persist between invocations of the function, the right solution would be to move it one level up:
```python
def analyze_peps(cached_get_pep):
    # ...
```
Then it is the caller's responsibility to maintain the cache: once again, we avoid shared mutable state by making the state management explicit.
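A self-contained sketch of that caller-owned pattern (with a hypothetical offline `get_pep` stand-in, so it runs without the network):

```python
from functools import lru_cache

def get_pep(num):
    # Hypothetical stand-in for the real network fetch.
    return 'text of PEP %d' % num

def analyze_peps(cached_get_pep):
    # The function only sees a callable; caching policy is the caller's choice.
    return cached_get_pep(8), cached_get_pep(8)

# The caller owns the cache, and decides how long it lives:
cache = lru_cache(maxsize=32)(get_pep)
result = analyze_peps(cache)
print(cache.cache_info().hits)  # the second lookup was a hit
```

Tests can pass in an uncached (or fake) function, and different callers can choose different cache sizes, or no cache at all.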
In this example, based on the official lru_cache documentation, we used a network-based function to show some of the issues with a global cache. Often, lru_cache is used for performance reasons. However, even there, it is easy to create issues: for example, one function using non-common inputs to the LRU-cached functions can cause massive cache evictions, with surprising performance impacts!
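The eviction problem is easy to see with a deliberately tiny cache; this toy sketch interleaves a "non-common" input between repeated ones:

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # deliberately tiny, to make eviction visible
def compute(x):
    return x * x

compute(1); compute(2)  # cache now holds entries for 1 and 2
compute(3)              # evicts the least-recently-used entry (1)
compute(1)              # a miss again: recomputed, evicting 2
print(compute.cache_info())  # 0 hits, 4 misses, despite repeating an input
```

With a shared global cache, the "unrelated code path" doing the equivalent of `compute(3)` may live in a different module entirely, making the resulting slowdown very hard to diagnose.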
The lru_cache implementation is great, but using it as a decorator means making the cache global, with all the bad effects that entails.
Using it locally is a good use of a great implementation.
(Thanks to Adi Stav, Steve Holden, and James Abel for their feedback on early drafts. Any issues that remain are my responsibility.)