Ehcache(2.9.x) - API Developer Guide, Cache Usage Patterns

There are several common access patterns when using a cache. Ehcache supports the following patterns:

  • Cache-aside (or direct manipulation)
  • Cache-as-sor (a combination of read-through and write-through or write-behind patterns)
  • Read-through
  • Write-through
  • Write-behind (or write-back)
  • Copy cache

cache-aside

With the cache-aside pattern, application code uses the cache directly.

This means that application code which accesses the system-of-record (SOR) should consult the cache first, and if the cache contains the data, then return the data directly from the cache, bypassing the SOR.

Otherwise, the application code must fetch the data from the system-of-record, store the data in the cache, and then return it.

When data is written, the cache must be updated along with the system-of-record. This results in code that often looks like the following pseudo-code:

public class MyDataAccessClass {

    private final Ehcache cache;

    public MyDataAccessClass(Ehcache cache) {
        this.cache = cache;
    }

    /* read some data - check the cache first, otherwise read from the SOR */
    public V readSomeData(K key) {
        Element element;
        if ((element = cache.get(key)) != null) {
            return (V) element.getValue();
        }
        // note: decide here whether your cache will cache 'nulls' or not
        V value = readDataFromDataStore(key);
        if (value != null) {
            cache.put(new Element(key, value));
        }
        return value;
    }

    /* write some data - write to the SOR, then update the cache */
    public void writeSomeData(K key, V value) {
        writeDataToDataStore(key, value);
        cache.put(new Element(key, value));
    }
}

cache-as-sor

The cache-as-sor pattern implies using the cache as though it were the primary system-of-record (SOR). The pattern delegates SOR reading and writing activities to the cache, so that application code is absolved of this responsibility.

To implement the cache-as-sor pattern, use a combination of the following read and write patterns:

  • read-through
  • write-through or write-behind

Advantages of using the cache-as-sor pattern are:

  • Less cluttered application code (improved maintainability)
  • Choice of write-through or write-behind strategies on a per-cache basis (use only configuration)
  • Allows the cache to solve the "thundering-herd" problem

A disadvantage of using the cache-as-sor pattern is:

  • Less directly visible code-path
The following example demonstrates the cache-as-sor pattern:

public class MyDataAccessClass {

    private final Ehcache cache;

    public MyDataAccessClass(Ehcache cache) {
        cache.registerCacheWriter(new MyCacheWriter());
        this.cache = new SelfPopulatingCache(cache, new MyCacheEntryFactory());
    }

    /* read some data - notice the cache is treated as an SOR;
     * the application code simply assumes the key will always be available
     */
    public V readSomeData(K key) {
        return (V) cache.get(key).getValue();
    }

    /* write some data - notice the cache is treated as an SOR; it is
     * the cache's responsibility to write the data to the actual SOR
     */
    public void writeSomeData(K key, V value) {
        cache.put(new Element(key, value));
    }

    /**
     * Implement the CacheEntryFactory that allows the cache to provide the
     * read-through strategy
     */
    private class MyCacheEntryFactory implements CacheEntryFactory {
        public Object createEntry(Object key) throws Exception {
            return readDataFromDataStore(key);
        }
    }

    /**
     * Implement the CacheWriter interface which allows the cache to provide the
     * write-through or write-behind strategy
     */
    private class MyCacheWriter implements CacheWriter {
        public CacheWriter clone(Ehcache cache) throws CloneNotSupportedException {
            throw new CloneNotSupportedException();
        }

        public void init() { }

        public void dispose() throws CacheException { }

        public void write(Element element) throws CacheException {
            writeDataToDataStore(element.getKey(), element.getValue());
        }

        public void writeAll(Collection<Element> elements) throws CacheException {
            for (Element element : elements) {
                write(element);
            }
        }

        public void delete(CacheEntry entry) throws CacheException {
            deleteDataFromDataStore(entry.getKey());
        }

        public void deleteAll(Collection<CacheEntry> entries) throws CacheException {
            for (CacheEntry entry : entries) {
                delete(entry);
            }
        }
    }
}

read-through

The read-through pattern mimics the structure of the cache-aside pattern when reading data. The difference is that you must implement the CacheEntryFactory interface to instruct the cache how to read objects on a cache miss, and you must wrap the Cache instance with an instance of SelfPopulatingCache.
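
As a minimal sketch of this wiring (the cacheManager variable, the "products" cache, and the readDataFromDataStore helper are assumptions, not part of the guide), the factory is invoked only on a cache miss, so get() never returns null:

Ehcache underlying = cacheManager.getEhcache("products");   // assumed pre-configured cache
Ehcache readThroughCache = new SelfPopulatingCache(underlying, new CacheEntryFactory() {
    public Object createEntry(Object key) throws Exception {
        // invoked only on a cache miss; load the value from the SOR
        return readDataFromDataStore(key);
    }
});
Element element = readThroughCache.get("someKey");   // a miss triggers createEntry() and caches the result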

write-through

The write-through pattern mimics the structure of the cache-aside pattern when writing data. The difference is that you must implement the CacheWriter interface and configure the cache for write-through mode.

A write-through cache writes data to the system-of-record in the same thread of execution. Therefore, in the common scenario of using a database transaction bound to the current thread, the write to the database is covered by the transaction in scope. For more details (including configuration settings) about using the write-through pattern, see Write-Through and Write-Behind Caches.
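
For example, assuming a cache configured for write-through mode with a registered CacheWriter (MyCacheWriter, key, and value are placeholders), the write is pushed through the writer on the calling thread:

cache.registerCacheWriter(new MyCacheWriter());
// put the element in the cache and, via the writer, write it to the SOR on this thread
cache.putWithWriter(new Element(key, value));
// remove the element from the cache and delete it from the SOR via the writer
cache.removeWithWriter(key);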

write-behind

The write-behind pattern changes the timing of the write to the system-of-record. Rather than writing to the system-of-record in the same thread of execution, write-behind queues the data for write at a later time.

The consequence of changing from write-through to write-behind is that the write to the system-of-record occurs outside the scope of the original transaction.

This often means that a new transaction must be created to commit the data to the system-of-record, and that transaction is separate from the main transaction. For more details (including configuration settings) about using the write-behind pattern, see Write-Through and Write-Behind Caches.
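
As a sketch of what this typically means for the CacheWriter implementation (transactionManager and writeDataToDataStore are hypothetical placeholders), the write() method opens and commits its own transaction because it runs on a background write-behind thread:

public void write(Element element) throws CacheException {
    // invoked later, on a write-behind thread, outside the caller's transaction
    transactionManager.begin();
    try {
        writeDataToDataStore(element.getKey(), element.getValue());
        transactionManager.commit();
    } catch (Exception e) {
        transactionManager.rollback();
        throw new CacheException("write-behind to the SOR failed: " + e.getMessage());
    }
}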

Copy Cache

A copy cache can have two behaviors: it can copy the Element instances it returns when copyOnRead is true, and copy the elements it stores when copyOnWrite is true.

A copy-on-read cache can be useful when you can't let multiple threads access the same Element instance (and the value it holds) concurrently. For example, when the programming model doesn't allow concurrent access, or when you want to isolate concurrent changes from one another.

Copy-on-write also lets you determine exactly what goes into the cache and when: the value stored in the cache is in the state it was in at the moment it was put into the cache. Any mutations to the value, or to the element, made after the put operation are not reflected in the cache.

A concrete example of a copy cache is a Cache configured for XA. It will always be configured with copyOnRead and copyOnWrite enabled to provide proper transaction isolation and clear transaction boundaries (the state the objects are in at commit time is the state that makes it into the cache). By default, the copy operation is performed using standard Java object serialization. For some applications, however, this might not be good (or fast) enough. You can configure your own CopyStrategy, which will be used to perform these copy operations. For example, you could use cloning rather than serialization.
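
A minimal programmatic sketch (the cache name, size, and the cacheManager variable are assumptions) that enables both behaviors, so that reads return copies and puts store copies:

CacheConfiguration config = new CacheConfiguration("isolatedCache", 1000)
        .copyOnRead(true)
        .copyOnWrite(true);
Cache cache = new Cache(config);
cacheManager.addCache(cache);   // cacheManager assumed to exist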

For more information about copy caches, see “Passing Copies Instead of References” in the Configuration Guide for Ehcache.