I know it's simple to implement, but I want to reuse something that already exists.
The problem I want to solve is that I load configuration (from XML, so I want to cache it) for different pages, roles, ... so the combination of inputs can grow quite large (but in 99% of cases it won't). To handle that 1%, I want some maximum number of items in the cache...
So far I have found org.apache.commons.collections.map.LRUMap in Apache Commons, and it looks fine, but I also want to check other options. Any recommendations?
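For context, here is a rough sketch of the lookup pattern I have in mind (Config and loadFromXml are placeholders, not real code):

import java.util.HashMap;
import java.util.Map;

class Config { } // placeholder for the parsed XML configuration

class ConfigCache {
    // A bounded LRU map should replace this unbounded HashMap;
    // that bound is exactly what the question is asking for.
    private final Map<String, Config> cache = new HashMap<String, Config>();

    Config get(String page, String role) {
        String key = page + ":" + role; // one entry per page/role combination
        Config cfg = cache.get(key);
        if (cfg == null) {
            cfg = loadFromXml(page, role); // expensive XML parse, hence the cache
            cache.put(key, cfg);
        }
        return cfg;
    }

    private Config loadFromXml(String page, String role) {
        return new Config(); // stands in for the real XML loading
    }
}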
5 Answers
#1 (89 votes)
You can use a LinkedHashMap (Java 1.4+):
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Create the cache: the third constructor argument (true) selects
// access order, which is what makes the map behave as an LRU structure.
final int MAX_ENTRIES = 100;
Map<String, Object> cache = new LinkedHashMap<String, Object>(MAX_ENTRIES + 1, 0.75f, true) {
    // Called by put() just after a new entry has been added; returning
    // true evicts the eldest (least recently accessed) entry.
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        return size() > MAX_ENTRIES;
    }
};

// Add to cache
String key = "key";
cache.put(key, "value");

// Get object
Object value = cache.get(key);
if (value == null && !cache.containsKey(key)) {
    // Object not in cache. If null is never stored as a value,
    // the containsKey(key) check is not needed.
}

// If the cache is used by multiple threads, it must be wrapped so that
// every method, including get() (which reorders entries when access
// order is on), is synchronized:
cache = Collections.synchronizedMap(cache);
#2 (21 votes)
This is an old question, but for posterity I wanted to list ConcurrentLinkedHashMap, which is thread-safe, unlike LRUMap. Usage is quite easy:
ConcurrentMap<K, V> cache = new ConcurrentLinkedHashMap.Builder<K, V>()
    .maximumWeightedCapacity(1000)
    .build();
And the documentation has some good examples, such as how to make the LRU cache size-based instead of number-of-items-based.
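For instance, here is a sketch of the size-based variant (assuming the com.googlecode.concurrentlinkedhashmap artifact; the String/List types are just for illustration), where the capacity bounds the total number of list elements rather than the number of entries:

import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.Weigher;
import java.util.List;
import java.util.concurrent.ConcurrentMap;

// Each entry weighs as many units as its list has elements, so the cap
// of 1000 limits total elements across the cache, not the entry count.
ConcurrentMap<String, List<String>> cache =
    new ConcurrentLinkedHashMap.Builder<String, List<String>>()
        .maximumWeightedCapacity(1000)
        .weigher(new Weigher<List<String>>() {
            public int weightOf(List<String> value) {
                return value.size();
            }
        })
        .build();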
#3 (11 votes)
Here is my implementation, which lets me keep an optimal number of elements in memory.
The point is that I do not need to keep track of which objects are currently in use, since I use a combination of a LinkedHashMap for the MRU objects and a WeakHashMap for the LRU objects. So the cache capacity is no less than the MRU size plus whatever the GC lets me keep. Whenever objects fall off the MRU, they go to the LRU for as long as the GC will have them.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.WeakHashMap;

public class Cache<K, V> {

    final Map<K, V> MRUdata;  // strongly referenced, access-ordered entries
    final Map<K, V> LRUdata;  // weakly referenced overflow the GC may reclaim

    public Cache(final int capacity) {
        LRUdata = new WeakHashMap<K, V>();
        MRUdata = new LinkedHashMap<K, V>(capacity + 1, 1.0f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> entry) {
                if (size() > capacity) {
                    // Demote the eldest entry to the weak map
                    // instead of dropping it outright.
                    LRUdata.put(entry.getKey(), entry.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized V tryGet(K key) {
        V value = MRUdata.get(key);
        if (value != null) {
            return value;
        }
        // On a hit in the weak map, promote the entry back to the MRU map.
        value = LRUdata.get(key);
        if (value != null) {
            LRUdata.remove(key);
            MRUdata.put(key, value);
        }
        return value;
    }

    public synchronized void set(K key, V value) {
        LRUdata.remove(key);
        MRUdata.put(key, value);
    }
}
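A minimal usage sketch (a String value stands in for the real cached object):

// Keep up to 100 entries strongly referenced; demoted entries survive
// only until the garbage collector clears them from the weak map.
Cache<String, String> cache = new Cache<String, String>(100);
cache.set("homepage:admin", "<config/>");
String cached = cache.tryGet("homepage:admin");
if (cached == null) {
    // Miss: evicted and reclaimed, or never cached; reload and set() again.
}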
#4 (1 vote)
I also had the same problem, and I haven't found any good libraries... so I've created my own.
simplelrucache provides thread-safe, very simple, non-distributed LRU caching with TTL support. It provides two implementations:
- Concurrent, based on ConcurrentLinkedHashMap
- Synchronized, based on LinkedHashMap
You can find it here.
#5 (1 vote)
Here is a very simple and easy-to-use LRU cache in Java. Although it is short and simple, it is production quality. The code is explained (see the README.md) and has some unit tests.