Cache

enum q2_sdk.core.cache.CacheReturnCode(value)[source]

Bases: IntEnum

Member Type:

int

Valid values are as follows:

String = <CacheReturnCode.String: 1>
Json = <CacheReturnCode.Json: 2>
Integer = <CacheReturnCode.Integer: 3>
Encrypted = <CacheReturnCode.Encrypted: 4>
exception q2_sdk.core.cache.CacheConfigError[source]

Bases: Exception

Raised when bad configs are used to instantiate Q2CacheClient

q2_sdk.core.cache.deserialize_json(value, flags, compress_data, encryption_key)[source]
Parameters:
  • value – Raw value to deserialize

  • flags (CacheReturnCode) – 1=str, 2=json, 3=int, 4=encrypted

  • compress_data (bool) – Whether the value is zlib-compressed

  • encryption_key (Optional[bytes]) – If present, will decrypt with the security module
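The dispatch implied by these parameters can be sketched as follows. This is an illustration of the documented flag values only, not the SDK's actual implementation, and the flag-4 (Encrypted) branch is stubbed out:

```python
import json
import zlib

# Illustrative sketch of deserialize_json's dispatch on the flags
# parameter (1=str, 2=json, 3=int, 4=encrypted). Not the SDK's code.
def deserialize_sketch(value: bytes, flags: int, compress_data: bool):
    if compress_data:
        value = zlib.decompress(value)  # undo zlib compression first
    if flags == 1:
        return value.decode()
    if flags == 2:
        return json.loads(value)
    if flags == 3:
        return int(value)
    raise ValueError("flag 4 (Encrypted) requires the security module")

payload = zlib.compress(json.dumps({"ok": True}).encode())
print(deserialize_sketch(payload, flags=2, compress_data=True))  # {'ok': True}
```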

class q2_sdk.core.cache.RecentKeysStack(max_size, seed_from_file=False)[source]

Bases: ForkedUniqueStack

Configure RecentKeys to work with forked mode

append(item)[source]

Append object to the end of the list.

remove_safe(item)[source]

Same as .remove but will not raise an exception if the item doesn’t exist

class q2_sdk.core.cache.RecentKey(key, prefix='', is_encrypted=False, expire_time=None)[source]

Bases: object

RecentKey(key: 'str', prefix: 'str' = '', is_encrypted: 'bool' = False, expire_time: 'Optional[datetime]' = None)

class q2_sdk.core.cache.Q2CacheClient(*args, logger=None, prefix=None, local_path=None, encryption_key=None, serializer=None, deserializer=None, **kwargs)[source]

Bases: PooledClient, Generic

Same interface as the pymemcache base class, but remembers the keys being requested, which makes it easy to bust the cache.

Parent docs: https://pymemcache.readthedocs.io/en/latest/apidoc/pymemcache.client.base.html#module-pymemcache.client.base
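The documented prefixing and recent-key bookkeeping can be sketched with a dict-backed stand-in. This is an illustration of the interface only; the real client wraps pymemcache's PooledClient and talks to Memcached:

```python
# Illustrative stand-in for Q2CacheClient's prefixing and key tracking.
# Mirrors only the documented get/set behavior, not the real client.
class FakeCacheClient:
    def __init__(self, prefix=""):
        self.prefix = prefix      # mirrors CACHE['PREFIX'] from the settings file
        self._store = {}
        self.recent_keys = []     # the real client remembers requested keys

    def set(self, key, value, expire=0):
        full_key = self.prefix + key  # prefix is prepended automatically
        self._store[full_key] = value
        if full_key not in self.recent_keys:
            self.recent_keys.append(full_key)
        return True

    def get(self, key, default=None):
        full_key = self.prefix + key
        if full_key not in self.recent_keys:
            self.recent_keys.append(full_key)
        return self._store.get(full_key, default)

client = FakeCacheClient(prefix="myapp_")
client.set("user_42", {"name": "Ada"})
print(client.get("user_42"))      # {'name': 'Ada'}
print(client.get("missing", -1))  # -1
print(client.recent_keys)         # ['myapp_user_42', 'myapp_missing']
```

Tracking the requested keys is what makes it cheap to bust the cache later without a full flush.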

get(key, default=None, **kwargs)[source]
Parameters:
  • key (str) – Will prepend CACHE['PREFIX'] from settings file

  • default – Returned if the key was not found

Return type:

T

get_many(keys)[source]
Parameters:

keys (list[str]) – Will prepend CACHE['PREFIX'] from settings file to all

Return type:

dict[str, T]

set(key, value, expire=0, noreply=None, skip_integrity_check=False)[source]
Parameters:
  • key (str) – Will prepend CACHE['PREFIX'] from settings file

  • value (T) – Automatically compressed

  • expire – In seconds. 0 for no expiry (the default)

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

  • skip_integrity_check – If True, will skip checking integrity. Useful if cached data is large

Return type:

bool | None

set_and_return(key, value, expire=0, noreply=None)[source]

Simple wrapper around set that returns the value and raises a MemcacheError if the result is not True

Parameters:
  • key (str) – Will prepend CACHE['PREFIX'] from settings file

  • value (T) – Automatically compressed

  • expire – In seconds. 0 for no expiry (the default)

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

T

set_many(values, expire=0, noreply=None)[source]

A convenience function for setting multiple values.

Parameters:
  • values (dict[str, T]) – {key: value, key2: value2}

  • expire – In seconds. 0 for no expiry (the default)

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

list[str | bytes] | None

delete_many(keys, noreply=None)[source]

A convenience function to delete multiple keys.

Parameters:
  • keys (list[str]) – Will prepend CACHE['PREFIX'] from settings file

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

bool

flush_all(delay=0, noreply=None)[source]

A convenience function to clear all cached keys.

Parameters:
  • delay – optional int, the number of seconds to wait before flushing, or zero to flush immediately (the default).

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

bool

async get_async(key, default=None, **kwargs)[source]

Same as get but will run in a separate thread

Parameters:
  • key (str) – Will prepend CACHE['PREFIX'] from settings file

  • default – Returned if the key was not found

Return type:

T

async get_many_async(keys)[source]

Same as get_many but will run in a separate thread

Parameters:

keys (list[str]) – Will prepend CACHE['PREFIX'] from settings file to all

Return type:

dict[str, T]

async set_async(key, value, expire=0, noreply=None)[source]

Same as set but will run in a separate thread

Parameters:
  • key (str) – Will prepend CACHE['PREFIX'] from settings file

  • value (T) – Automatically compressed

  • expire – In seconds. 0 for no expiry (the default)

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

bool | None

async set_and_return_async(key, value, expire=0, noreply=None)[source]

Simple wrapper around set that returns the value and raises a MemcacheError if the result is not True

Parameters:
  • key (str) – Will prepend CACHE['PREFIX'] from settings file

  • value (T) – Automatically compressed

  • expire – In seconds. 0 for no expiry (the default)

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

T

async set_many_async(values, expire=0, noreply=None)[source]

Same as set_many but will run in a separate thread

Parameters:
  • values (dict[str, T]) – {key: value, key2: value2}

  • expire – In seconds. 0 for no expiry (the default)

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

list[str | bytes] | None

async delete_many_async(keys, noreply=None)[source]

Same as delete_many but will run in a separate thread

Parameters:
  • keys (list[str]) – Will prepend CACHE['PREFIX'] from settings file

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

bool

async flush_all_async(delay=0, noreply=None)[source]

Same as flush_all but will run in a separate thread

Parameters:
  • delay – optional int, the number of seconds to wait before flushing, or zero to flush immediately (the default).

  • noreply – If False, will wait for Memcached to respond. Typically, you will want to leave this alone.

Return type:

bool
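All of the *_async variants follow the same pattern: run the blocking call in a separate thread so the event loop stays responsive. A sketch of that shape using only the standard library (not the SDK's code; the store and getter here are stand-ins):

```python
import asyncio

# Stand-in store and blocking getter; the real methods call into Memcached.
store = {"greeting": "hello"}

def blocking_get(key, default=None):
    return store.get(key, default)

# Sketch of the *_async pattern: offload the blocking call to a thread.
async def get_async(key, default=None):
    return await asyncio.to_thread(blocking_get, key, default)

async def main():
    print(await get_async("greeting"))     # hello
    print(await get_async("missing", -1))  # -1

asyncio.run(main())
```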

enum q2_sdk.core.cache.LocalCacheAction(value)[source]

Bases: StrEnum

Member Type:

str

Valid values are as follows:

READ = <LocalCacheAction.READ: 'read'>
SET = <LocalCacheAction.SET: 'set'>
CLEAR = <LocalCacheAction.CLEAR: 'clear'>
UPDATE = <LocalCacheAction.UPDATE: 'update'>
q2_sdk.core.cache.get_cache(logger=None, prefix=None, cachemock_params=None, encryption_key=None, serializer=None, deserializer=None, **kwargs)[source]
Parameters:
  • prefix – If defined, will be prepended to all keys

  • logger – Reference to calling request’s logger (self.logger in your extension)

  • encryption_key – Encryption key

  • serializer (Optional[GenericAlias[TypeVar(T)]]) – Function to serialize value

  • deserializer (Optional[GenericAlias[TypeVar(T)]]) – Function to deserialize value

Return type:

Q2CacheClient[TypeVar(T)]
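A serializer/deserializer pair might look like the following. The exact callable signatures get_cache expects are an assumption here, and the get_cache call itself is shown only as a commented sketch since it needs a running extension:

```python
import json

# Hypothetical serializer/deserializer pair for get_cache; the exact
# signatures Q2CacheClient expects are an assumption.
def serialize(value) -> str:
    return json.dumps(value)

def deserialize(raw: str):
    return json.loads(raw)

# Inside an extension, the wiring would look roughly like this (sketch):
# from q2_sdk.core.cache import get_cache
# cache = get_cache(logger=self.logger, prefix='myext_',
#                   serializer=serialize, deserializer=deserialize)

print(deserialize(serialize({"a": 1})))  # {'a': 1}
```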

enum q2_sdk.core.cache.StorageLevel(value)[source]

Bases: StrEnum

At what scope should the cache be stored?

Parameters:
  • Service – Cache will be scoped to the service

  • Stack – Cache will be scoped to the customer stack (self.hq_credentials.customer_key)

  • Session – Cache will be scoped to the user session (self.online_session.session_id)

Member Type:

str

Valid values are as follows:

Service = <StorageLevel.Service: 'service'>
Stack = <StorageLevel.Stack: 'stack'>
Session = <StorageLevel.Session: 'session'>
q2_sdk.core.cache.cache_key(fn, *args, **kwargs)[source]

Simple function that returns a unique key based on the function name and arguments
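One plausible way such a key could be composed is by hashing the function's qualified name together with its arguments. example_cache_key below is a hypothetical illustration, not the SDK's actual algorithm:

```python
import hashlib

# Hypothetical illustration of composing a unique key from a function's
# qualified name and its arguments; the SDK's algorithm may differ.
def example_cache_key(fn, *args, **kwargs):
    raw = f"{fn.__qualname__}:{args!r}:{sorted(kwargs.items())!r}"
    return hashlib.sha256(raw.encode()).hexdigest()

def lookup(account_id, page=1):
    ...

key_a = example_cache_key(lookup, 42, page=1)
key_b = example_cache_key(lookup, 42, page=2)
print(key_a != key_b)  # True: different arguments yield different keys
```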

q2_sdk.core.cache.cache(_func=None, *, timeout=300, key=<function cache_key>, key_prefix=None, storage_level=StorageLevel.Stack, **kw)[source]

Decorator to handle caching elegantly.

Usage: @cache(timeout=300, key='cacheKey', key_prefix='keyPrefix', storage_level=StorageLevel.Stack) def func(foo, bar): ...

If the key is present, it returns the cached value instead of running the function. If the key is not yet cached, it executes the function and stores the value under the key.

By default, the cache key is the qualified name of the function for the active stack (customer environment). You can override the function name portion with the “key” parameter, and the stack portion with the “storage_level” parameter. The key is set using the “Q2CacheClient.set()” method, and you can add any additional prefix according to the client’s configs. You can also add a decorator-specific prefix using the “key_prefix” parameter.

The “timeout” parameter specifies the duration, in seconds, that the function is cached. It defaults to 300 seconds (5 minutes).

Parameters:
  • timeout (int) – Duration, in seconds, the function is cached. This argument defaults to 300 seconds (5 minutes)

  • key (Union[Callable, str, None]) – Optional cache key; defaults to the qualified name of the function.

  • key_prefix (Optional[str]) – A string automatically prepended to all cache keys

  • storage_level (StorageLevel) – StorageLevel enum. Service: scoped to this service; Stack: scoped to the customer environment; Session: scoped to the user session. If no level is passed, the function's self.default_storage_level is used; if that is unavailable, the Stack level is used.