LRU (Least Recently Used) cache is a simple and fast caching algorithm.

Before diving into what an LRU cache is, let's recap why we use a cache at all. Say we are implementing a web server and want to improve its response time. The first thing that comes to mind is a cache: instead of querying data from the hard disk on every request, we save and read it directly from memory, because accessing memory is much faster than accessing the disk. But memory is also much smaller than the disk, so we can't keep all the data in memory; we need a way to use it efficiently. Now it's time for LRU to make its debut. The idea is really simple: when the cache is full, evict the least recently used data. The policy rests on the assumption that recently used data has a higher probability of being used again. For example, with a capacity of 2, after accessing A, then B, then C, the cache holds B and C, and A has been evicted.

Now let's see how to implement it. The fastest way to access data in memory is a hash table, so we use one to store the entries. Next, we need a way to know which entry is the least recently used. A doubly linked list can track this: every time an entry is accessed, we move it to the head of the list; every time an entry needs to be evicted, we take it from the tail. Now that we know what to do, let's write the code.

```typescript
class DNode {
  public key: number = 0;
  public value: number = 0;
  public pre: DNode | null = null;
  public next: DNode | null = null;
}

class DoublyLinkedList {
  // Sentinel head and tail nodes, so we never have to special-case
  // inserting into or deleting from an empty list.
  public head: DNode;
  public tail: DNode;
  constructor() {
    this.head = new DNode();
    this.tail = new DNode();
    this.head.next = this.tail;
    this.tail.pre = this.head;
  }
  addHead(node: DNode) {
    node.pre = this.head;
    node.next = this.head.next;
    if (this.head.next) {
      this.head.next.pre = node;
    }
    this.head.next = node;
  }
  deleteNode(node: DNode) {
    if (node.pre) {
      node.pre.next = node.next;
    }
    if (node.next) {
      node.next.pre = node.pre;
    }
  }
  deleteTail(): DNode | null {
    // The node just before the tail sentinel is the least recently used.
    if (this.tail.pre && this.tail.pre !== this.head) {
      const node = this.tail.pre;
      this.deleteNode(node);
      return node;
    }
    return null;
  }
  moveToHead(node: DNode) {
    this.deleteNode(node);
    this.addHead(node);
  }
}

class LRUCache {
  private capacity: number;
  public m: Map<number, DNode>;
  public dlinkedlist: DoublyLinkedList;
  constructor(capacity: number) {
    this.capacity = capacity;
    this.m = new Map();
    this.dlinkedlist = new DoublyLinkedList();
  }
  get(key: number): number {
    const node = this.m.get(key);
    if (node) {
      // Accessing a key makes it the most recently used.
      this.dlinkedlist.moveToHead(node);
      return node.value;
    }
    return -1;
  }
  put(key: number, value: number): void {
    const node = this.m.get(key);
    if (node) {
      node.value = value;
      this.dlinkedlist.moveToHead(node);
    } else {
      if (this.m.size >= this.capacity) {
        // Cache is full: evict the least recently used entry.
        const tail = this.dlinkedlist.deleteTail();
        if (tail) {
          this.m.delete(tail.key);
        }
      }
      const newNode = new DNode();
      newNode.key = key;
      newNode.value = value;
      this.dlinkedlist.addHead(newNode);
      this.m.set(key, newNode);
    }
  }
}
```
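As a sanity check, here is a sketch of the same cache semantics using a completely different technique: a JavaScript `Map` iterates its keys in insertion order, so deleting and re-inserting a key on each access makes the first key in iteration order the least recently used. The class name `MapLRUCache` is my own; this is an illustrative alternative, not part of the implementation above.

```typescript
// Compact LRU sketch relying on Map's insertion-order iteration.
class MapLRUCache {
  private capacity: number;
  private m = new Map<number, number>();
  constructor(capacity: number) {
    this.capacity = capacity;
  }
  get(key: number): number {
    const value = this.m.get(key);
    if (value === undefined) return -1;
    // Re-insert to mark the key as most recently used.
    this.m.delete(key);
    this.m.set(key, value);
    return value;
  }
  put(key: number, value: number): void {
    if (this.m.has(key)) {
      this.m.delete(key);
    } else if (this.m.size >= this.capacity) {
      // The first key in iteration order is the least recently used.
      const lru = this.m.keys().next().value;
      this.m.delete(lru!);
    }
    this.m.set(key, value);
  }
}
```

Running both implementations against the same sequence of `get` and `put` calls is a quick way to catch bugs in the linked-list bookkeeping.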

Lastly, don't forget to test it against the LeetCode LRU Cache problem to verify that the implementation is correct.