Question: Is there a decorator to simply cache function return values?
Consider the following:
@property
def name(self):
    if not hasattr(self, '_name'):
        # expensive calculation
        self._name = 1 + 1
    return self._name
I’m new, but I think the caching could be factored out into a decorator. Only I didn’t find one like it ;)
PS the real calculation doesn’t depend on mutable values
Answer 0
Starting from Python 3.2 there is a built-in decorator:
@functools.lru_cache(maxsize=100, typed=False)
Decorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls. It can save time when an expensive or I/O bound function is periodically called with the same arguments.
Example of an LRU cache for computing Fibonacci numbers:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
>>> print([fib(n) for n in range(16)])
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]
>>> print(fib.cache_info())
CacheInfo(hits=28, misses=16, maxsize=None, currsize=16)
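Applied to the question's property pattern, a minimal sketch might look like the following (Widget is a hypothetical class standing in for the questioner's object; note that the cache is keyed on self, so it also keeps instances alive):
from functools import lru_cache

class Widget:
    @property
    def name(self):
        return self._expensive_name()

    @lru_cache(maxsize=None)
    def _expensive_name(self):
        # expensive calculation; cached per instance because self is part of the cache key
        return 1 + 1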
If you are stuck with Python 2.x, here’s a list of other compatible memoization libraries:
Answer 1
It sounds like you’re not asking for a general-purpose memoization decorator (i.e., you’re not interested in the general case where you want to cache return values for different argument values). That is, you’d like to have this:
x = obj.name # expensive
y = obj.name # cheap
while a general-purpose memoization decorator would give you this:
x = obj.name() # expensive
y = obj.name() # cheap
I submit that the method-call syntax is better style, because it suggests the possibility of expensive computation while the property syntax suggests a quick lookup.
[Update: The class-based memoization decorator I had linked to and quoted here previously doesn’t work for methods. I’ve replaced it with a decorator function.] If you’re willing to use a general-purpose memoization decorator, here’s a simple one:
def memoize(function):
    memo = {}  # cache keyed by the tuple of positional arguments
    def wrapper(*args):
        if args in memo:
            return memo[args]
        else:
            rv = function(*args)
            memo[args] = rv
            return rv
    return wrapper
Example usage:
@memoize
def fibonacci(n):
    if n < 2: return n
    return fibonacci(n - 1) + fibonacci(n - 2)
Another memoization decorator with a limit on the cache size can be found here.
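Applied to the original question, a hedged sketch of stacking this with @property (Widget is a hypothetical class; instances must be hashable, since self becomes part of the cache key):
class Widget:
    @property
    @memoize
    def name(self):
        # expensive calculation, runs once per instance
        return 1 + 1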
Answer 2
class memorize(dict):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args):
        # plain dict lookup; a miss falls through to __missing__
        return self[args]

    def __missing__(self, key):
        # compute, store, and return the value on a cache miss
        result = self[key] = self.func(*key)
        return result
Sample uses:
>>> @memorize
... def foo(a, b):
...     return a * b
>>> foo(2, 4)
8
>>> foo
{(2, 4): 8}
>>> foo('hi', 3)
'hihihi'
>>> foo
{(2, 4): 8, ('hi', 3): 'hihihi'}
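Because the decorated function is itself a dict subclass, ordinary dict operations can be used to inspect or clear the cache:
>>> len(foo)
2
>>> foo.clear()
>>> foo
{}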
Answer 3
Python 3.8 functools.cached_property decorator
https://docs.python.org/dev/library/functools.html#functools.cached_property
cached_property from Werkzeug was mentioned at https://stackoverflow.com/a/5295190/895245, but a supposedly derived version will be merged into 3.8, which is awesome.
This decorator can be seen as a caching @property, or as a cleaner @functools.lru_cache for when you don't have any arguments.
The docs say:
@functools.cached_property(func)
Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable.
Example:
import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)

    @cached_property
    def variance(self):
        return statistics.variance(self._data)
New in version 3.8.
Note: This decorator requires that the __dict__ attribute on each instance be a mutable mapping. This means it will not work with some types, such as metaclasses (since the __dict__ attributes on type instances are read-only proxies for the class namespace), and those that specify __slots__ without including __dict__ as one of the defined slots (as such classes don't provide a __dict__ attribute at all).
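A brief usage sketch of the example above:
ds = DataSet([1.5, 2.5, 3.5, 4.5])
first = ds.stdev    # statistics.stdev runs on the first access
second = ds.stdev   # cached: the value stored on the instance is returned
del ds.stdev        # deleting the attribute clears the cached value for this instance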
Answer 4
Werkzeug has a cached_property decorator (docs, source).
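A minimal usage sketch, assuming Werkzeug is installed (the import path is werkzeug.utils; Report is a hypothetical class):
from werkzeug.utils import cached_property

class Report:
    @cached_property
    def total(self):
        # expensive calculation; runs once, then stored on the instance
        return 1 + 1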
Answer 5
I coded this simple decorator class to cache function responses. I find it VERY useful for my projects:
from datetime import datetime, timedelta

class cached(object):
    def __init__(self, *args, **kwargs):
        self.cached_function_responses = {}
        self.default_max_age = kwargs.get("default_cache_max_age", timedelta(seconds=0))

    def __call__(self, func):
        def inner(*args, **kwargs):
            max_age = kwargs.get('max_age', self.default_max_age)
            if not max_age or func not in self.cached_function_responses or (datetime.now() - self.cached_function_responses[func]['fetch_time'] > max_age):
                if 'max_age' in kwargs: del kwargs['max_age']
                res = func(*args, **kwargs)
                self.cached_function_responses[func] = {'data': res, 'fetch_time': datetime.now()}
            return self.cached_function_responses[func]['data']
        return inner
The usage is straightforward:
import time
@cached()
def myfunc(a):
    print("in func")
    return (a, datetime.now())

@cached(default_max_age=timedelta(seconds=6))
def cacheable_test(a):
    print("in cacheable test: ")
    return (a, datetime.now())

print(cacheable_test(1, max_age=timedelta(seconds=5)))
print(cacheable_test(2, max_age=timedelta(seconds=5)))
time.sleep(7)
print(cacheable_test(3, max_age=timedelta(seconds=5)))
Answer 6
DISCLAIMER: I’m the author of kids.cache.
You should check out kids.cache; it provides a @cache decorator that works on Python 2 and Python 3. No dependencies, ~100 lines of code. It's very straightforward to use; for instance, with your code in mind, you could use it like this:
pip install kids.cache
Then
from kids.cache import cache
...
class MyClass(object):
    ...
    @cache  # <-- That's all you need to do
    @property
    def name(self):
        return 1 + 1  # supposedly expensive calculation
Or you could put the @cache decorator after @property (same result).
Using a cache on a property is called lazy evaluation; kids.cache can do much more (it works on functions with any arguments, properties, any kind of method, and even classes…). For advanced users, kids.cache supports cachetools, which provides fancy cache stores for Python 2 and Python 3 (LRU, LFU, TTL, RR caches).
IMPORTANT NOTE: the default cache store of kids.cache is a standard dict, which is not recommended for long-running programs with ever-different queries, as it would lead to an ever-growing cache store. For that usage you can plug in other cache stores, for instance by decorating your function/property/class/method with @cache(use=cachetools.LRUCache(maxsize=2)).
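A hedged sketch of that combination, based only on the use= keyword shown above (assuming both kids.cache and cachetools are installed):
from kids.cache import cache
from cachetools import LRUCache

@cache(use=LRUCache(maxsize=2))
def expensive(x):
    # recomputed only on a cache miss; at most the 2 most recent results are kept
    return x ** 2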
Answer 7
Ah, just needed to find the right name for this: "lazy property evaluation".
I do this a lot too; maybe I’ll use that recipe in my code sometime.
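The recipe in question is usually a small non-data descriptor along these lines (a sketch, not necessarily the exact recipe the author meant):
class lazy_property:
    """Compute the value once, then store it on the instance so this
    descriptor is bypassed on every later access."""
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.func(obj)
        obj.__dict__[self.name] = value  # shadows the non-data descriptor
        return value

class Circle:
    def __init__(self, radius):
        self.radius = radius

    @lazy_property
    def area(self):
        # expensive calculation, runs only on the first access
        return 3.14159 * self.radius ** 2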
Answer 8
There is fastcache, which is “C implementation of Python 3 functools.lru_cache. Provides speedup of 10-30x over standard library.”
Same as the chosen answer, just a different import:
from fastcache import lru_cache
@lru_cache(maxsize=128, typed=False)
def f(a, b):
    pass
Also, it comes preinstalled in Anaconda; functools.lru_cache, by contrast, is only in the standard library from Python 3.2 onward, so on Python 2 a backport has to be installed.
Answer 10
If you are using the Django framework, it provides @cache_page(time) to cache a view or an API response, and there are other options as well.
Example:
from django.views.decorators.cache import cache_page

@cache_page(60 * 15, cache="special_cache")
def my_view(request):
    ...
More details can be found here.
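The same decorator can also be applied in the URLconf rather than on the view itself; a brief sketch (the URL pattern and module names are illustrative):
from django.urls import path
from django.views.decorators.cache import cache_page

from . import views

urlpatterns = [
    # cache this route's responses for 15 minutes
    path('my-view/', cache_page(60 * 15)(views.my_view)),
]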
Answer 11
Along with the Memoize example, I found the following Python packages:
- cachepy: it allows setting a TTL and/or a limit on the number of calls for cached functions; it can also use an encrypted file-based cache…
- percache
Answer 12
I implemented something like this, using pickle for persistence and sha1 for short, almost-certainly-unique IDs. Basically, the cache hashed the code of the function and the hash of the arguments to get a sha1, then looked for a file with that sha1 in the name. If it existed, it opened it and returned the result; if not, it called the function and saved the result (optionally only saving if it took a certain amount of time to process).
That said, I'd swear I found an existing module that did this, and I find myself here trying to find that module… The closest I can find is this, which looks about right: http://chase-seibert.github.io/blog/2011/11/23/pythondjango-disk-based-caching-decorator.html
The only problem I see with that is it wouldn’t work well for large inputs since it hashes str(arg), which isn’t unique for giant arrays.
It would be nice if there were a unique_hash() protocol that had a class return a secure hash of its contents. I basically manually implemented that for the types I cared about.
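A rough sketch of the approach described above (the cache directory, key construction, and file naming here are illustrative assumptions, not the author's actual code):
import functools
import hashlib
import inspect
import os
import pickle

def disk_cache(func, cache_dir="/tmp/disk_cache"):
    os.makedirs(cache_dir, exist_ok=True)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # key: sha1 of the function's source code plus the repr of its arguments
        payload = inspect.getsource(func) + repr(args) + repr(sorted(kwargs.items()))
        digest = hashlib.sha1(payload.encode("utf-8")).hexdigest()
        path = os.path.join(cache_dir, digest + ".pickle")
        if os.path.exists(path):
            with open(path, "rb") as fh:
                return pickle.load(fh)   # cache hit: load the pickled result
        result = func(*args, **kwargs)   # cache miss: compute and persist
        with open(path, "wb") as fh:
            pickle.dump(result, fh)
        return result

    return wrapper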
Answer 14
If you are using Django and want to cache views, see Nikhil Kumar’s answer.
But if you want to cache ANY function results, you can use django-cache-utils.
It reuses Django caches and provides an easy-to-use cached decorator:
from cache_utils.decorators import cached

@cached(60)
def foo(x, y=0):
    print('foo is called')
    return x + y
Answer 15
@lru_cache is not a perfect fit when a function has default argument values. Here is my mem decorator:
import inspect

def get_default_args(f):
    # collect {name: default} for every parameter that has a default value
    signature = inspect.signature(f)
    return {
        k: v.default
        for k, v in signature.parameters.items()
        if v.default is not inspect.Parameter.empty
    }

def full_kwargs(f, kwargs):
    # merge explicit keyword arguments over the function's defaults
    res = dict(get_default_args(f))
    res.update(kwargs)
    return res

def mem(func):
    cache = dict()
    def wrapper(*args, **kwargs):
        # normalize kwargs so that omitting a default and passing it explicitly
        # produce the same cache key
        kwargs = full_kwargs(func, kwargs)
        key = list(args)
        key.extend(kwargs.values())
        key = hash(tuple(key))
        if key in cache:
            return cache[key]
        else:
            res = func(*args, **kwargs)
            cache[key] = res
            return res
    return wrapper
and code for testing:
from time import sleep

@mem
def count(a, *x, z=10):
    sleep(2)
    x = list(x)
    x.append(z)
    x.append(a)
    return sum(x)

def main():
    print(count(1,2,3,4,5))
    print(count(1,2,3,4,5))
    print(count(1,2,3,4,5, z=6))
    print(count(1,2,3,4,5, z=6))
    print(count(1))
    print(count(1, z=10))

if __name__ == '__main__':
    main()
The result: sleep runs only 3 times. With @lru_cache it would run 4 times, because these two calls:
print(count(1))
print(count(1, z=10))
would be computed separately (it does not recognize that z=10 is already the default).
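For comparison, a minimal sketch of the same two calls under functools.lru_cache (illustrative; the extra computation happens because the cache keys positional-only and keyword calls differently):
from functools import lru_cache
from time import sleep

@lru_cache(maxsize=None)
def count_lru(a, *x, z=10):
    sleep(2)
    return sum(x) + z + a

count_lru(1)        # computed: the key is just (1,)
count_lru(1, z=10)  # computed again: the key also records the explicit z=10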