How is LruCache implemented?
The key code of LruCache:
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache implements Cache {
    private final Cache delegate;        // the decorated cache that actually stores the values
    private Map<Object, Object> keyMap;  // access-ordered LinkedHashMap used only to track key recency
    private Object eldestKey;            // key flagged for eviction by removeEldestEntry

    public LruCache(Cache delegate) {
        this.delegate = delegate;
        setSize(1024);                   // default capacity
    }

    public void setSize(final int size) {
        keyMap = new LinkedHashMap<Object, Object>(size, .75F, true) { // accessOrder = true
            private static final long serialVersionUID = 4267176411845948333L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                boolean tooBig = size() > size;
                if (tooBig) {
                    eldestKey = eldest.getKey(); // remember the LRU key; evicted from the delegate in cycleKeyList
                }
                return tooBig;
            }
        };
    }

    @Override
    public void putObject(Object key, Object value) {
        delegate.putObject(key, value);
        cycleKeyList(key); // record the key and evict the eldest entry if over capacity
    }

    @Override
    public Object getObject(Object key) {
        keyMap.get(key);   // touch the key so LinkedHashMap moves it to the tail (most recently used)
        return delegate.getObject(key);
    }

    private void cycleKeyList(Object key) {
        keyMap.put(key, key); // may trigger removeEldestEntry and set eldestKey
        if (eldestKey != null) {
            delegate.removeObject(eldestKey); // keep the delegate in sync with keyMap
            eldestKey = null;
        }
    }
}
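The decorator pattern above can be exercised end to end. Below is a self-contained sketch: the `Cache` interface is simplified for the demo (the real interface also declares methods such as `getId()` and `getSize()`), and `SimplePerpetualCache`, `SimpleLruCache`, and `LruDemo` are illustrative names, not actual library classes.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-ins for the real types (assumptions for this sketch).
interface Cache {
    void putObject(Object key, Object value);
    Object getObject(Object key);
    Object removeObject(Object key);
}

// Backing cache with no eviction policy: it just stores everything.
class SimplePerpetualCache implements Cache {
    private final Map<Object, Object> store = new HashMap<>();
    public void putObject(Object key, Object value) { store.put(key, value); }
    public Object getObject(Object key) { return store.get(key); }
    public Object removeObject(Object key) { return store.remove(key); }
}

// LRU decorator: keyMap tracks access order; delegate holds the values.
class SimpleLruCache implements Cache {
    private final Cache delegate;
    private Map<Object, Object> keyMap;
    private Object eldestKey;

    SimpleLruCache(Cache delegate, final int size) {
        this.delegate = delegate;
        keyMap = new LinkedHashMap<Object, Object>(size, .75F, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                boolean tooBig = size() > size;
                if (tooBig) {
                    eldestKey = eldest.getKey(); // remember the victim for cycleKeyList
                }
                return tooBig;
            }
        };
    }

    public void putObject(Object key, Object value) {
        delegate.putObject(key, value);
        cycleKeyList(key);
    }

    public Object getObject(Object key) {
        keyMap.get(key); // touch: moves the key to the tail of the access-order list
        return delegate.getObject(key);
    }

    public Object removeObject(Object key) {
        return delegate.removeObject(key);
    }

    private void cycleKeyList(Object key) {
        keyMap.put(key, key);
        if (eldestKey != null) {
            delegate.removeObject(eldestKey); // evict the LRU entry from the delegate too
            eldestKey = null;
        }
    }
}

public class LruDemo {
    public static void main(String[] args) {
        Cache cache = new SimpleLruCache(new SimplePerpetualCache(), 2);
        cache.putObject("a", 1);
        cache.putObject("b", 2);
        cache.getObject("a");    // "a" is now the most recently used key
        cache.putObject("c", 3); // evicts "b", the least recently used
        System.out.println(cache.getObject("a")); // 1
        System.out.println(cache.getObject("b")); // null (evicted)
        System.out.println(cache.getObject("c")); // 3
    }
}
```

Note that eviction is driven entirely by `keyMap`: the delegate never sees access order, which is why `cycleKeyList` must propagate the removal to it.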
LinkedHashMap source code analysis
Doubly linked list
Advantages of linked lists
- Insertion and deletion run in O(1) time (given a reference to the node)
- They can make use of non-contiguous memory
Disadvantages of linked lists
- No random access: finding an element by position takes O(n) time
- Each node must store references to its neighbours, so memory overhead is higher
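The trade-offs above can be seen in a minimal doubly linked list (the class names here are illustrative, not from the JDK): removal is O(1) once you hold a node reference, while positional lookup must walk the list.

```java
// Minimal doubly linked list illustrating the complexity trade-offs.
class DoublyLinkedList<T> {
    static class Node<T> {
        T value;
        Node<T> prev, next; // the extra references that cost memory
        Node(T value) { this.value = value; }
    }

    Node<T> head, tail;

    // O(1): append at the tail.
    Node<T> addLast(T value) {
        Node<T> n = new Node<>(value);
        if (tail == null) { head = tail = n; }
        else { n.prev = tail; tail.next = n; tail = n; }
        return n;
    }

    // O(1): unlink a node we already have a reference to --
    // only the neighbours' pointers change, no traversal needed.
    void remove(Node<T> n) {
        if (n.prev == null) head = n.next; else n.prev.next = n.next;
        if (n.next == null) tail = n.prev; else n.next.prev = n.prev;
        n.prev = n.next = null;
    }

    // O(n): random access requires walking from the head.
    T get(int index) {
        Node<T> cur = head;
        for (int i = 0; i < index; i++) cur = cur.next;
        return cur.value;
    }
}
```

LinkedHashMap solves exactly the `get` weakness shown here by locating nodes through the hash table instead of walking the list.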
The doubly linked list node
// LinkedHashMap's entry extends HashMap.Node with before/after links,
// threading every entry into a doubly linked list.
static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before, after;
    Entry(int hash, K key, V value, Node<K,V> next) {
        super(hash, key, value, next);
    }
}
Moving a node to the tail of the list
When get() is called (and accessOrder is true), the accessed node becomes the most recently used and must be moved to the tail of the list:
void afterNodeAccess(Node<K,V> e) { // move e to the tail of the list
    LinkedHashMap.Entry<K,V> last;
    if (accessOrder && (last = tail) != e) {
        LinkedHashMap.Entry<K,V> p =
            (LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after;
        p.after = null;
        // unlink p from its current position
        if (b == null)
            head = a;
        else
            b.after = a;
        if (a != null)
            a.before = b;
        else
            last = b;
        // relink p after the old tail
        if (last == null)
            head = p;
        else {
            p.before = last;
            last.after = p;
        }
        tail = p;
        ++modCount;
    }
}
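The effect of afterNodeAccess is easy to observe from the outside: with accessOrder set to true, each get() moves the accessed key to the end of the iteration order.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument accessOrder = true enables LRU ordering.
        Map<Integer, String> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put(1, "a");
        map.put(2, "b");
        map.put(3, "c");

        map.get(1); // afterNodeAccess relinks key 1 to the tail

        // Iteration now runs from least to most recently accessed.
        System.out.println(map.keySet()); // [2, 3, 1]
    }
}
```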
Why pair a hash table with a linked list?
The linked list's weakness is random access; the hash table's strength is O(1) lookup. Combining the two lets each structure cover the other's weakness: the hash table locates any node in O(1), while the linked list maintains the order of the entries.
The result is a Map whose key-value pairs keep a well-defined (insertion or access) order:
public class LinkedHashMap<K,V>
    extends HashMap<K,V>
    implements Map<K,V>
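Putting the pieces together, a size-capped LRU map can be built directly on LinkedHashMap, using the same removeEldestEntry hook that LruCache.setSize() relies on. BoundedLruMap is an illustrative name for this sketch, not a JDK or MyBatis class.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A tiny size-capped LRU map: accessOrder = true keeps the least
// recently used entry at the head, and removeEldestEntry drops it
// whenever the map grows past maxSize.
class BoundedLruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    BoundedLruMap(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the head (LRU) entry when over capacity
    }
}
```

Usage: after `put("a", 1)`, `put("b", 2)`, `get("a")`, a subsequent `put("c", 3)` on a capacity-2 map evicts "b", because the access to "a" made "b" the least recently used entry. This is the one-class version of what LruCache achieves with its separate keyMap and delegate.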