Caching
We cover the most common caching use case in the tutorial. In rare cases, that may not be enough. If you find yourself needing to change the default behavior, read on.
Configuration
The default settings.py file from which yours inherits has the following CACHE
section in it:
CACHE = {
    'PREFIX': ABA,
    'HOST': os.environ.get('CACHE_HOST', '10.115.4.18'),
    'PORT': int(os.environ.get('CACHE_PORT', 11211)),
    'CONNECT_TIMEOUT': 1,
    'TIMEOUT': 1,
    'COMPRESS_DATA': True,
    'ENCRYPTION_KEY': None,
}
The CACHE_HOST and CACHE_PORT environment variables are set for you, but you are
free to override them by setting your own environment variable:
$ export CACHE_HOST=localhost
To override one of the other settings, override the CACHE key with a new value
in your settings.py file:
CACHE['COMPRESS_DATA'] = False
CACHE['ENCRYPTION_KEY'] = 'supersecretpassword'
Note
Setting CACHE['ENCRYPTION_KEY'] to anything other than None or '' will require cryptography
to be installed (q2 add_dependency cryptography).
Cache Levels
Based on your handler type, an appropriate cache level is chosen when you simply call self.cache, but it is also possible to choose
one explicitly.
You can also create a Q2CacheClient object using self.get_cache(), whose prefix argument lets you set the prefix as you wish.
Stack
self.stack_cache will prepend the unique stack ID (or customer_key) to the key. This is the default for most extension types.
Session
self.session_cache prepends the session_id to keys: self.session_cache.get(key)
returns the value that was set using self.session_cache.set(key). This is a convenient shorthand when you want a value used throughout your extension
that might differ per user.
This is never the default.
Service
self.service_cache will scope your cache reads to the service, which is useful if you want to cache something across multiple institutions. This can be
dangerous if misused, so it is only the default in cases where there is no stack passed in (i.e., BaseHandler).
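The effect of the different levels is that the same logical key is stored under a different physical key per level. A dict-backed stand-in can sketch this (PrefixedCache and the example IDs below are illustrative, not part of the SDK; only the prefixing idea mirrors what the cache levels do):

```python
# Illustrative stand-in for the SDK's cache levels; PrefixedCache and the
# IDs "stack-123"/"session-abc" are hypothetical, not real SDK objects.
class PrefixedCache:
    def __init__(self, store: dict, prefix: str):
        self.store = store
        self.prefix = prefix

    def _key(self, key: str) -> str:
        # every read/write goes through the prefix, like the SDK's levels
        return f"{self.prefix}:{key}"

    def set(self, key: str, value):
        self.store[self._key(key)] = value

    def get(self, key: str):
        return self.store.get(self._key(key))


backing = {}
stack_cache = PrefixedCache(backing, "stack-123")      # like self.stack_cache
session_cache = PrefixedCache(backing, "session-abc")  # like self.session_cache

stack_cache.set("motd", "hello")
session_cache.set("motd", "hi, user")

# The same logical key lands under different physical keys, so the
# per-session value never collides with the stack-wide one.
print(stack_cache.get("motd"))    # -> hello
print(session_cache.get("motd"))  # -> hi, user
```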
Decorator
There is also a @cache decorator to handle caching. This decorator will return the cached value, if present.
If a cached value is not present, the function's return value will be set as the cache value for future use:
@cache(timeout=600)
async def func_to_be_cached(self):
    ...
This is more a matter of personal preference than performance, but it is certainly easy to read!
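The mechanics of the decorator (return the cached value on a hit, otherwise call the function and remember the result) can be sketched in plain Python. This dict-backed version is only illustrative; the SDK's @cache also handles timeouts and async handlers:

```python
import functools

def cache_result(store: dict):
    # Minimal illustrative caching decorator; the real SDK decorator also
    # takes a timeout and supports async functions.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args):
            key = (func.__name__, args)
            if key not in store:     # miss: compute and remember
                store[key] = func(*args)
            return store[key]        # hit: return the cached value
        return wrapper
    return decorator


store = {}
calls = 0

@cache_result(store)
def expensive(x):
    global calls
    calls += 1
    return x * 2

expensive(21)  # first call computes the value
expensive(21)  # second call is served from the cache; calls stays at 1
```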
Custom Serializer/Deserializer
The default serializer/deserializer provided by the SDK will automatically convert your data to and from common data types
like strings, integers, lists, and dictionaries. This works well as the default, but if you have more specific datatypes, like
a dataclass, that you would like to get/set directly into the cache client, thereby skipping an unnecessary conversion to/from
dictionaries, you can provide your own serializer and/or deserializer function(s). The Q2CacheClient provides a context manager
method, self.cache.change_serde(), that will allow you to temporarily set either the serializer or deserializer functions:
@dataclass
class MyData:
    key: str
    val: int


def my_serializer(cache_key: str, value: MyData) -> tuple[str, int]:
    # return the value as a string and an int that will be passed to the
    # deserializer to tell it what type the value should become;
    # the string can represent your data however you would like
    return f"{value.key}:{value.val}", 1


def my_deserializer(cache_key: str, value: bytes, flag: int) -> MyData:
    # pymemcache will provide the cached value as bytes;
    # the cache_key/flag values may or may not be useful to you, depending on
    # whether you plan to use the same serializer/deserializer functions for
    # multiple types
    values = value.decode("utf-8").split(":")
    return MyData(key=values[0], val=int(values[1]))
class MyHandler(...):
    ...

    def default(self):
        with self.cache.change_serde(
            serializer=my_serializer, deserializer=my_deserializer
        ) as cache:
            # use the new cache object from here pretty much as normal
            my_data = MyData(key="a_key", val=42)
            cache.set("my_cache_key", my_data)
            ...
            cached_data = cache.get("my_cache_key")
            assert cached_data == my_data

        # or create a new Q2CacheClient of your own for longer-term use
        my_cache_client = self.get_cache(
            serializer=my_serializer, deserializer=my_deserializer, ...
        )
This can be particularly useful if you have large amounts of data being stored in the cache, as you can switch out the default Python json library for a much faster one like msgspec or orjson. You could also cache Pydantic data models directly and retrieve them much faster!
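As a sketch of that swap, here is a serializer/deserializer pair built on the stdlib json module for the MyData dataclass above; replacing json.dumps/json.loads with orjson.dumps/orjson.loads (or a msgspec encoder) keeps the same shape. The flag value 2 is an arbitrary marker chosen for this example, not an SDK constant:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class MyData:
    key: str
    val: int

JSON_FLAG = 2  # arbitrary flag meaning "JSON-encoded MyData" in this sketch

def json_serializer(cache_key: str, value: MyData) -> tuple[str, int]:
    # dataclass -> JSON string (orjson.dumps would return bytes instead,
    # which pymemcache also accepts)
    return json.dumps(asdict(value)), JSON_FLAG

def json_deserializer(cache_key: str, value: bytes, flag: int) -> MyData:
    # bytes from the cache back into the dataclass
    return MyData(**json.loads(value))

# Round trip without a real cache: serialize, pretend memcached stored the
# bytes, then deserialize.
payload, flag = json_serializer("k", MyData(key="a", val=1))
restored = json_deserializer("k", payload.encode("utf-8"), flag)
```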
Limitations
There is a maximum size limit on the value that can be stored in memcached (2 MB). If you find the need to store something larger than this, we suggest reaching for ArdentFS instead.