Caching
We show the most common use case of caching during the tutorial. In rare cases, this may not be enough. If you find yourself needing to change the default behavior, read on.
Configuration
The default settings.py file from which yours inherits has the following CACHE section in it:
CACHE = {
    'PREFIX': ABA,
    'HOST': os.environ.get('CACHE_HOST', '10.115.4.18'),
    'PORT': int(os.environ.get('CACHE_PORT', 11211)),
    'CONNECT_TIMEOUT': 1,
    'TIMEOUT': 1,
    'COMPRESS_DATA': True,
    'ENCRYPTION_KEY': None,
}
The CACHE_HOST and CACHE_PORT environment variables are set for you, but you are free to override them by setting your own environment variable:
$ export CACHE_HOST=localhost
To override one of the other configurations, simply override the CACHE key with a new value in your settings.py file:
CACHE['COMPRESS_DATA'] = False
CACHE['ENCRYPTION_KEY'] = 'supersecretpassword'
Note
Setting CACHE['ENCRYPTION_KEY'] to anything other than None or '' will require cryptography to be installed (q2 add_dependency cryptography).
Cache Levels
Based on your handler type, an appropriate cache level is chosen when you simply call self.cache, but it is also possible to choose one explicitly.
You can also create a Q2CacheClient object using self.get_cache(), which has a prefix argument that lets you set the prefix as you wish.
Stack
self.stack_cache will prepend the unique stack ID (or customer_key) to the key. This is the default for most extension types.
Session
self.session_cache prepends the session_id to keys. self.session_cache.get(key) returns the value that was set using self.session_cache.set(key). This is a convenient shorthand when you want a value used throughout your extension that might differ per user. This is never the default.
Service
self.service_cache will scope your cache reads to the service, which is useful if you want to cache something across multiple institutions. This can be dangerous if misused, so it is only the default in cases where there is no stack passed in (i.e., BaseHandler).
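The scoping described above comes down to key prefixing. The following in-memory stand-in is purely illustrative (the real Q2CacheClient talks to memcached), but it sketches how the same logical key can live under different prefixes without colliding:

```python
# Illustrative model of cache-level scoping. PrefixedCache and the
# example prefixes are hypothetical; only the prefixing idea comes
# from the docs above.
class PrefixedCache:
    def __init__(self, store, prefix):
        self._store = store
        self._prefix = prefix

    def _key(self, key):
        # Every read and write goes through the prefix.
        return f"{self._prefix}:{key}"

    def set(self, key, value):
        self._store[self._key(key)] = value

    def get(self, key):
        return self._store.get(self._key(key))


store = {}
stack_cache = PrefixedCache(store, "stack-1234")   # stack ID prefix
session_cache = PrefixedCache(store, "sess-abcd")  # session_id prefix

stack_cache.set("greeting", "hello")
session_cache.set("greeting", "hola")

# The same logical key is scoped differently at each level:
print(stack_cache.get("greeting"))    # hello
print(session_cache.get("greeting"))  # hola
```

Because the prefix is applied on both set and get, callers never see it; they just work with the unprefixed key.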
Decorator
There is also a @cache decorator to handle caching. This decorator will return the cached value, if present. If a cached value is not present, the function's return value will be cached for future use:
@cache(timeout=600)
async def func_to_be_cached(self):
...
This is more a matter of personal preference than of performance, but it is certainly easy to read!
Limitations
There is a maximum size limit on the value that can be stored in memcached (2 MB). If you find the need to store something larger than this, we suggest reaching for ArdentFS instead.
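If you are unsure whether a payload is close to the limit, a pre-flight size check before caching can save a failed write. The helper below is hypothetical (not part of the SDK) and assumes JSON-serializable values:

```python
# Hypothetical guard against oversized cache values; fits_in_cache is
# not an SDK function, just an illustration of checking serialized size.
import json

MAX_CACHE_BYTES = 2 * 1024 * 1024  # the 2 MB limit mentioned above

def fits_in_cache(value) -> bool:
    """Return True if the serialized value is under the memcached limit."""
    return len(json.dumps(value).encode("utf-8")) <= MAX_CACHE_BYTES

print(fits_in_cache({"small": "payload"}))  # True
```

Anything that fails this check is a candidate for ArdentFS instead.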