The cache system must:
Support multiple eviction strategies (LRU, LFU, FIFO), with the policy configurable at runtime.
Distribute data across multiple cache nodes (shards).
Be thread-safe, so that multiple threads can read and modify the cache concurrently.
Design Considerations
To meet these requirements, we will use the following techniques:
Eviction Strategy:
Each cache node will support one of the eviction strategies (LRU, LFU, FIFO). The eviction strategy will be configurable per node, allowing different nodes to use different strategies if needed.
Distributed Cache:
The cache will be split into multiple cache nodes (shards). Each node is responsible for storing a subset of the data. The key will be hashed to determine the node where the data will be stored.
Thread-Safety:
Each cache node will use a ConcurrentHashMap to store data, ensuring thread-safe operations for reading.
Write operations will be synchronized using a ReentrantLock to avoid race conditions during cache updates.
Design Patterns:
Strategy Pattern: To allow dynamic swapping of eviction strategies (LRU, LFU, FIFO).
Singleton Pattern: For global cache management.
Factory Pattern: For creating the appropriate eviction strategy based on the configuration.
Sharding: To divide the cache into multiple nodes for distributing the data.
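Before the full implementation, the key-to-node routing described above can be sketched in isolation. The `ShardRouter` name below is illustrative only; `Math.floorMod` is used because `Math.abs(hash % n)` can return a negative index when `hashCode()` is `Integer.MIN_VALUE`.

```java
// Sketch: map a key to one of N shards by hashing its key.
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // floorMod keeps the index in [0, shardCount) even for negative hashes
    public int shardFor(String key) {
        return Math.floorMod(key.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(3);
        for (String key : new String[] {"key0", "key1", "key2"}) {
            System.out.println(key + " -> shard " + router.shardFor(key));
        }
    }
}
```

The same key always hashes to the same shard, which is what lets each node own a disjoint subset of the data.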
Java Code Implementation
1. Cache Entry and Eviction Strategies
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// CacheEntry represents a single item in the cache
class CacheEntry {
    String key;
    String value;
    long timestamp; // Last-access time for LRU; insertion time for FIFO
    int frequency;  // Access count for LFU

    public CacheEntry(String key, String value) {
        this.key = key;
        this.value = value;
        this.timestamp = System.currentTimeMillis();
        this.frequency = 0;
    }
}

// EvictionStrategy interface (Strategy pattern)
interface EvictionStrategy {
    void evict(Map<String, CacheEntry> cache);
}
// LRU Eviction Strategy (Least Recently Used)
class LRUEvictionStrategy implements EvictionStrategy {
    @Override
    public void evict(Map<String, CacheEntry> cache) {
        long oldestTimestamp = Long.MAX_VALUE;
        String keyToEvict = null;
        for (Map.Entry<String, CacheEntry> entry : cache.entrySet()) {
            if (entry.getValue().timestamp < oldestTimestamp) {
                oldestTimestamp = entry.getValue().timestamp;
                keyToEvict = entry.getKey();
            }
        }
        if (keyToEvict != null) {
            cache.remove(keyToEvict);
        }
    }
}
// LFU Eviction Strategy (Least Frequently Used)
class LFUEvictionStrategy implements EvictionStrategy {
    @Override
    public void evict(Map<String, CacheEntry> cache) {
        int minFrequency = Integer.MAX_VALUE;
        String keyToEvict = null;
        for (Map.Entry<String, CacheEntry> entry : cache.entrySet()) {
            if (entry.getValue().frequency < minFrequency) {
                minFrequency = entry.getValue().frequency;
                keyToEvict = entry.getKey();
            }
        }
        if (keyToEvict != null) {
            cache.remove(keyToEvict);
        }
    }
}
// FIFO Eviction Strategy (First In, First Out)
class FIFOEvictionStrategy implements EvictionStrategy {
    // ConcurrentLinkedQueue so insertion order can be recorded safely
    // from multiple threads (LinkedList is not thread-safe)
    private final Queue<String> orderQueue = new ConcurrentLinkedQueue<>();

    @Override
    public void evict(Map<String, CacheEntry> cache) {
        String keyToEvict = orderQueue.poll();
        if (keyToEvict != null) {
            cache.remove(keyToEvict);
        }
    }

    // FIFO must track insertion order explicitly
    public void addOrder(String key) {
        orderQueue.offer(key);
    }
}
2. Cache Node (Sharding) and Multi-threading Support
// Cache interface defining the basic operations
interface Cache {
    String get(String key);
    void put(String key, String value);
    void setEvictionStrategy(EvictionStrategy strategy);
}

// CacheNode represents a single node (shard) in the distributed cache
class CacheNode implements Cache {
    private final Map<String, CacheEntry> cache;
    private EvictionStrategy evictionStrategy;
    private final int capacity;
    private final Lock lock;

    public CacheNode(int capacity, EvictionStrategy evictionStrategy) {
        this.cache = new ConcurrentHashMap<>();
        this.capacity = capacity;
        this.evictionStrategy = evictionStrategy;
        this.lock = new ReentrantLock();
    }

    @Override
    public String get(String key) {
        CacheEntry entry = cache.get(key);
        if (entry != null) {
            // Update the entry's metadata if the eviction strategy needs it
            if (evictionStrategy instanceof LRUEvictionStrategy) {
                entry.timestamp = System.currentTimeMillis();
            } else if (evictionStrategy instanceof LFUEvictionStrategy) {
                entry.frequency++;
            }
            return entry.value;
        }
        return null;
    }

    @Override
    public void put(String key, String value) {
        lock.lock(); // Serialize updates so size check and eviction don't race
        try {
            boolean isNewKey = !cache.containsKey(key);
            // Only evict when inserting a new key into a full node;
            // overwriting an existing key does not grow the map
            if (isNewKey && cache.size() >= capacity) {
                evictionStrategy.evict(cache);
            }
            cache.put(key, new CacheEntry(key, value));
            if (isNewKey && evictionStrategy instanceof FIFOEvictionStrategy) {
                ((FIFOEvictionStrategy) evictionStrategy).addOrder(key);
            }
        } finally {
            lock.unlock();
        }
    }

    @Override
    public void setEvictionStrategy(EvictionStrategy strategy) {
        this.evictionStrategy = strategy;
    }

    // Accessor for the internal cache map, if needed
    public Map<String, CacheEntry> getCache() {
        return cache;
    }
}
3. Distributed Cache Management
// DistributedCache manages multiple cache nodes
class DistributedCache {
    private final List<CacheNode> cacheNodes;
    private final int capacityPerNode;

    public DistributedCache(int totalCapacity, int numberOfNodes, EvictionStrategy evictionStrategy) {
        cacheNodes = new ArrayList<>();
        capacityPerNode = totalCapacity / numberOfNodes; // Divide capacity evenly across nodes
        for (int i = 0; i < numberOfNodes; i++) {
            // Note: all nodes share this one strategy instance; a stateful
            // strategy such as FIFO should be instantiated per node instead
            cacheNodes.add(new CacheNode(capacityPerNode, evictionStrategy));
        }
    }

    // Route a key to its cache node (hash-based distribution).
    // Math.floorMod is used instead of Math.abs(hash % n), because
    // Math.abs(Integer.MIN_VALUE) is still negative and would cause
    // an IndexOutOfBoundsException here.
    private CacheNode getCacheNode(String key) {
        int nodeIndex = Math.floorMod(key.hashCode(), cacheNodes.size());
        return cacheNodes.get(nodeIndex);
    }

    public String get(String key) {
        return getCacheNode(key).get(key);
    }

    public void put(String key, String value) {
        getCacheNode(key).put(key, value);
    }

    public void setEvictionStrategy(EvictionStrategy strategy) {
        for (CacheNode node : cacheNodes) {
            node.setEvictionStrategy(strategy);
        }
    }
}
// CacheManager: Singleton pattern for global cache management
class CacheManager {
    private static CacheManager instance;
    private final DistributedCache distributedCache;

    private CacheManager(int totalCapacity, int numberOfNodes, String evictionStrategy) {
        this.distributedCache = new DistributedCache(totalCapacity, numberOfNodes,
                EvictionStrategyFactory.getEvictionStrategy(evictionStrategy));
    }

    // synchronized so lazy initialization is safe when called concurrently
    public static synchronized CacheManager getInstance(int totalCapacity, int numberOfNodes, String evictionStrategy) {
        if (instance == null) {
            instance = new CacheManager(totalCapacity, numberOfNodes, evictionStrategy);
        }
        return instance;
    }

    public DistributedCache getDistributedCache() {
        return distributedCache;
    }
}
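CacheManager calls an EvictionStrategyFactory that is not shown above. A minimal sketch of that factory follows; the empty stand-in declarations exist only so the sketch compiles on its own and stand in for the interface and strategy classes from section 1.

```java
import java.util.Map;

// Stand-ins for the types defined in section 1, so this sketch is self-contained.
class CacheEntry { }
interface EvictionStrategy { void evict(Map<String, CacheEntry> cache); }
class LRUEvictionStrategy implements EvictionStrategy {
    public void evict(Map<String, CacheEntry> cache) { /* see section 1 */ }
}
class LFUEvictionStrategy implements EvictionStrategy {
    public void evict(Map<String, CacheEntry> cache) { /* see section 1 */ }
}
class FIFOEvictionStrategy implements EvictionStrategy {
    public void evict(Map<String, CacheEntry> cache) { /* see section 1 */ }
}

// Factory pattern: map a configuration string to a strategy instance
class EvictionStrategyFactory {
    public static EvictionStrategy getEvictionStrategy(String name) {
        switch (name.toUpperCase()) {
            case "LRU":  return new LRUEvictionStrategy();
            case "LFU":  return new LFUEvictionStrategy();
            case "FIFO": return new FIFOEvictionStrategy();
            default:
                throw new IllegalArgumentException("Unknown eviction strategy: " + name);
        }
    }
}
```

Returning a fresh instance per call also sidesteps the shared-state problem a FIFO strategy would have if one instance served several nodes.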
4. Running Multi-threaded Operations
// Demo: concurrent readers and writers against the distributed cache
class DistributedCacheSystemDemo {
    public static void main(String[] args) throws InterruptedException {
        // Distributed cache with 3 nodes, total capacity 9, LRU eviction
        CacheManager cacheManager = CacheManager.getInstance(9, 3, "LRU");
        DistributedCache cache = cacheManager.getDistributedCache();

        Runnable writerTask = () -> {
            for (int i = 0; i < 5; i++) {
                String key = "key" + i;
                cache.put(key, "value" + i);
                System.out.println(Thread.currentThread().getName() + " put: " + key);
            }
        };

        Runnable readerTask = () -> {
            for (int i = 0; i < 5; i++) {
                String key = "key" + i;
                String value = cache.get(key);
                System.out.println(Thread.currentThread().getName() + " got: " + key + " = " + value);
            }
        };

        // Two writers and two readers running concurrently
        Thread writerThread1 = new Thread(writerTask);
        Thread writerThread2 = new Thread(writerTask);
        Thread readerThread1 = new Thread(readerTask);
        Thread readerThread2 = new Thread(readerTask);

        writerThread1.start();
        writerThread2.start();
        readerThread1.start();
        readerThread2.start();

        writerThread1.join();
        writerThread2.join();
        readerThread1.join();
        readerThread2.join();
    }
}
Design Explanation
Eviction Strategy: The EvictionStrategy interface and its implementations (LRU, LFU, FIFO) ensure that eviction logic is decoupled from the cache's core functionality. The strategy can be dynamically swapped at runtime.
Cache Node: Each CacheNode is responsible for managing a subset of the cache data (shard). It implements the Cache interface and uses a ConcurrentHashMap for thread-safe reads and a ReentrantLock for writes.
Distributed Cache: The DistributedCache class manages a list of CacheNodes and routes each request to a node by hashing the key modulo the node count. (A production system would typically use consistent hashing instead, so that adding or removing a node remaps only a fraction of the keys.) Each node handles its own synchronization, so the router itself needs no locks.
Cache Manager: The CacheManager class follows the Singleton pattern, ensuring only one instance of the cache exists globally.
Multi-threading: We simulate multi-threaded operations by creating multiple threads that perform read and write operations on the cache. Synchronization is handled using ConcurrentHashMap and ReentrantLock.
This combined design answers the original question of implementing a configurable cache with eviction strategies, and extends it to a distributed cache system with multi-threading support.
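As a closing illustration of the runtime swap that the Strategy pattern enables, here is a small self-contained sketch. The `ToyCache` class and its lambda strategies are illustrative only (a toy cache mapping keys to insertion counters), not part of the system above; lambdas work here because the strategy interface has a single method.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Single-method strategy interface, so a lambda can serve as a strategy
interface ToyEvictionStrategy {
    String selectVictim(Map<String, Integer> entries); // returns the key to evict
}

class ToyCache {
    private final Map<String, Integer> entries = new LinkedHashMap<>();
    private ToyEvictionStrategy strategy;

    ToyCache(ToyEvictionStrategy strategy) { this.strategy = strategy; }

    // The runtime swap: replace the eviction rule without touching the cache
    void setStrategy(ToyEvictionStrategy strategy) { this.strategy = strategy; }

    void put(String key, int order) { entries.put(key, order); }

    String evict() {
        String victim = strategy.selectVictim(entries);
        entries.remove(victim);
        return victim;
    }

    public static void main(String[] args) {
        // FIFO-like rule: evict the entry with the smallest insertion counter
        ToyCache cache = new ToyCache(m -> m.entrySet().stream()
                .min(Map.Entry.comparingByValue()).get().getKey());
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        System.out.println(cache.evict()); // evicts "a" under the FIFO-like rule

        // Swap to a LIFO-like rule at runtime: evict the newest entry
        cache.setStrategy(m -> m.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey());
        System.out.println(cache.evict()); // now evicts "c"
    }
}
```

The cache contents are untouched by the swap; only the victim-selection rule changes, which is exactly what setEvictionStrategy does in the full design.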