Guava collection/cache: a first look

Date: 2023-03-09 03:53:11

After writing the previous post and reading some of the EventBus-related Guava code, I noticed it uses a lot of methods from other Guava packages, so I followed those threads as well. For example, the maps it uses are Guava's own implementations.

Multimap: can hold multiple values under the same key. You can put several different values with one key, and later puts do not overwrite the earlier ones.
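
For example, a minimal sketch of Multimap in use (the class and key names here are my own, purely for illustration):

    import com.google.common.collect.ArrayListMultimap;
    import com.google.common.collect.Multimap;

    public class MultimapDemo {
        public static void main(String[] args) {
            Multimap<String, String> multimap = ArrayListMultimap.create();
            multimap.put("fruit", "apple");
            multimap.put("fruit", "banana"); // same key again: the old value is kept, not overwritten
            System.out.println(multimap.get("fruit")); // [apple, banana]
        }
    }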

Table: roughly a two-dimensional matrix; worth a look if you need that kind of structure.
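
A minimal sketch of the row/column access a Table gives you (again, the names are just illustrative):

    import com.google.common.collect.HashBasedTable;
    import com.google.common.collect.Table;

    public class TableDemo {
        public static void main(String[] args) {
            Table<String, String, Integer> table = HashBasedTable.create();
            table.put("row1", "col1", 1); // (row, column) -> value, like a cell in a 2-D matrix
            table.put("row1", "col2", 2);
            System.out.println(table.get("row1", "col2")); // 2
            System.out.println(table.row("row1"));         // {col1=1, col2=2} (order may vary)
        }
    }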

Hash: there is also a pile of ready-made hash functions that you can use directly when needed.
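
For instance, a small sketch using two of the prebuilt hash functions in com.google.common.hash (the input string is arbitrary):

    import com.google.common.hash.Hashing;
    import java.nio.charset.StandardCharsets;

    public class HashingDemo {
        public static void main(String[] args) {
            String sha = Hashing.sha256()
                    .hashString("hello", StandardCharsets.UTF_8)
                    .toString();
            String murmur = Hashing.murmur3_128()
                    .hashString("hello", StandardCharsets.UTF_8)
                    .toString();
            System.out.println(sha);
            System.out.println(murmur);
        }
    }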

Some of the code itself is also worth studying for reference.

For Cache, let's start with an example. A cache generally has to consider how entries are stored, how long they live, how they expire, the initial capacity, the maximum size, and so on:

        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(1000, TimeUnit.MILLISECONDS)
                .expireAfterAccess(1000, TimeUnit.MILLISECONDS)
                .concurrencyLevel(8)
                .initialCapacity(100)
                .maximumSize(100)
                .weakKeys()
                .weakValues()
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) throws Exception {
                        return "test";
                    }
                });
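
Before going through the parameters, here is a minimal, self-contained sketch of how such a LoadingCache might be used; the class name, keys, and values below are placeholders of mine:

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import java.util.concurrent.TimeUnit;

    public class CacheDemo {
        public static void main(String[] args) throws Exception {
            LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                    .expireAfterWrite(1000, TimeUnit.MILLISECONDS)
                    .maximumSize(100)
                    .build(new CacheLoader<String, String>() {
                        @Override
                        public String load(String key) {
                            return "test"; // invoked on a cache miss
                        }
                    });

            System.out.println(cache.get("foo"));          // miss -> load() -> "test"
            cache.put("foo", "bar");                       // explicit put overwrites the loaded value
            System.out.println(cache.getIfPresent("foo")); // "bar" until the entry expires
        }
    }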

Most of the parameters are self-explanatory from their names. The one I find a bit special is this one; it becomes clear once you look at the source and its comments:


  @GwtIncompatible // java.lang.ref.WeakReference
  public CacheBuilder<K, V> weakKeys() {
    return setKeyStrength(Strength.WEAK);
  }

    WEAK {
      @Override
      <K, V> ValueReference<K, V> referenceValue(
          Segment<K, V> segment, ReferenceEntry<K, V> entry, V value, int weight) {
        return (weight == 1)
            ? new WeakValueReference<K, V>(segment.valueReferenceQueue, value, entry)
            : new WeightedWeakValueReference<K, V>(
                segment.valueReferenceQueue, value, entry, weight);
      }

      @Override
      Equivalence<Object> defaultEquivalence() {
        return Equivalence.identity();
      }
    };
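
Because WEAK strength uses Equivalence.identity(), keys are compared with == rather than equals(), and an entry can be collected once nothing else strongly references its key. A small sketch of the identity-comparison effect (my own example; the behavior follows from the identity equivalence shown above):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class WeakKeysDemo {
        public static void main(String[] args) {
            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .weakKeys() // keys held by WeakReference, compared by identity
                    .build();

            String k1 = new String("key");
            String k2 = new String("key"); // equals(k1), but a different object

            cache.put(k1, "v1");
            System.out.println(cache.getIfPresent(k1)); // v1
            System.out.println(cache.getIfPresent(k2)); // null: identity comparison misses
        }
    }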

Putting an element with put is not much different from an ordinary map: compute the hash, find the corresponding index, and store the entry there; the usual map mechanics.

  @Override
  public V put(K key, V value) {
    checkNotNull(key);
    checkNotNull(value);
    int hash = hash(key);
    return segmentFor(hash).put(key, hash, value, false);
  }

Now the Segment-level put; it's actually quite easy to follow as well:

    @Nullable
    V put(K key, int hash, V value, boolean onlyIfAbsent) {
      lock();
      try {
        long now = map.ticker.read();
        preWriteCleanup(now);

        int newCount = this.count + 1;
        if (newCount > this.threshold) { // ensure capacity
          expand();
          newCount = this.count + 1;
        }

        AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
        int index = hash & (table.length() - 1);
        ReferenceEntry<K, V> first = table.get(index);

        // Look for an existing entry.
        for (ReferenceEntry<K, V> e = first; e != null; e = e.getNext()) {
          K entryKey = e.getKey();
          if (e.getHash() == hash
              && entryKey != null
              && map.keyEquivalence.equivalent(key, entryKey)) {
            // We found an existing entry.
            ValueReference<K, V> valueReference = e.getValueReference();
            V entryValue = valueReference.get();

            if (entryValue == null) {
              ++modCount;
              if (valueReference.isActive()) {
                enqueueNotification(
                    key, hash, entryValue, valueReference.getWeight(), RemovalCause.COLLECTED);
                setValue(e, key, value, now);
                newCount = this.count; // count remains unchanged
              } else {
                setValue(e, key, value, now);
                newCount = this.count + 1;
              }
              this.count = newCount; // write-volatile
              evictEntries(e);
              return null;
            } else if (onlyIfAbsent) {
              // Mimic
              // "if (!map.containsKey(key)) ...
              // else return map.get(key);
              recordLockedRead(e, now);
              return entryValue;
            } else {
              // clobber existing entry, count remains unchanged
              ++modCount;
              enqueueNotification(
                  key, hash, entryValue, valueReference.getWeight(), RemovalCause.REPLACED);
              setValue(e, key, value, now);
              evictEntries(e);
              return entryValue;
            }
          }
        }

        // Create a new entry.
        ++modCount;
        ReferenceEntry<K, V> newEntry = newEntry(key, hash, first);
        setValue(newEntry, key, value, now);
        table.set(index, newEntry);
        newCount = this.count + 1;
        this.count = newCount; // write-volatile
        evictEntries(newEntry);
        return null;
      } finally {
        unlock();
        postWriteCleanup();
      }
    }

    void runLockedCleanup(long now) {
      if (tryLock()) {
        try {
          drainReferenceQueues();
          expireEntries(now); // calls drainRecencyQueue
          readCount.set(0);
        } finally {
          unlock();
        }
      }
    }
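
As an aside, the RemovalCause.REPLACED branch in put above can be observed from the outside with a removal listener; here is a small sketch of mine (the listener, class, and key names are illustrative):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.RemovalListener;

    public class ReplacedDemo {
        public static void main(String[] args) {
            RemovalListener<String, String> listener =
                    n -> System.out.println(n.getKey() + " removed, cause=" + n.getCause());

            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .removalListener(listener)
                    .build();

            cache.put("k", "v1");
            cache.put("k", "v2"); // clobbers the existing entry -> prints "k removed, cause=REPLACED"
        }
    }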

Now let's look at get:

  @Override
  public @Nullable V get(@Nullable Object key) {
    if (key == null) {
      return null;
    }
    int hash = hash(key);
    return segmentFor(hash).get(key, hash);
  }

    @Nullable
    V get(Object key, int hash) {
      try {
        if (count != 0) { // read-volatile
          long now = map.ticker.read();
          ReferenceEntry<K, V> e = getLiveEntry(key, hash, now);
          if (e == null) {
            return null;
          }

          V value = e.getValueReference().get();
          if (value != null) {
            recordRead(e, now);
            return scheduleRefresh(e, e.getKey(), hash, value, now, map.defaultLoader);
          }
          tryDrainReferenceQueues();
        }
        return null;
      } finally {
        postReadCleanup();
      }
    }

    /**
     * Records the relative order in which this read was performed by adding {@code entry} to the
     * recency queue. At write-time, or when the queue is full past the threshold, the queue will be
     * drained and the entries therein processed.
     *
     * <p>Note: locked reads should use {@link #recordLockedRead}.
     */
    void recordRead(ReferenceEntry<K, V> entry, long now) {
      if (map.recordsAccess()) {
        entry.setAccessTime(now);
      }
      recencyQueue.add(entry);
    }
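
This access-time bookkeeping is what makes expireAfterAccess a sliding window: each read pushes the expiration deadline forward. A small sketch using a manual Ticker to make that visible (my own demo; the class and variable names are illustrative):

    import com.google.common.base.Ticker;
    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class AccessTimeDemo {
        public static void main(String[] args) {
            AtomicLong nanos = new AtomicLong();
            Ticker fakeTicker = new Ticker() {
                @Override
                public long read() {
                    return nanos.get();
                }
            };

            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .expireAfterAccess(1000, TimeUnit.MILLISECONDS)
                    .ticker(fakeTicker)
                    .build();

            cache.put("k", "v");
            nanos.addAndGet(TimeUnit.MILLISECONDS.toNanos(800));
            System.out.println(cache.getIfPresent("k")); // "v": the read resets the access time
            nanos.addAndGet(TimeUnit.MILLISECONDS.toNanos(800));
            System.out.println(cache.getIfPresent("k")); // still "v": only 800ms since the last access
            nanos.addAndGet(TimeUnit.MILLISECONDS.toNanos(1200));
            System.out.println(cache.getIfPresent("k")); // null: expired, 1200ms without access
        }
    }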