Among cache-management algorithms, LRU is almost universally regarded as the best general-purpose choice. It does have a weakness, however: it assumes that accesses exhibit locality. When the access pattern has no locality, LRU degrades into FIFO (first in, first out).
This problem bit me while I was writing a file system, because access to a file is mostly sequential: data that has already been read is almost never read again. After much thought I finally arrived at a scheme:
When the cache hit rate drops, move the pointer to the node that will be evicted next. When the hit rate is very low, the algorithm effectively becomes LIFO (last in, first out), which greatly outperforms LRU under sequential or completely random access; when the hit rate is high, it behaves exactly like LRU.
Here is the key code of the algorithm:
// Insert p into the LRU list.
// A fixed item must never be swapped out (it never enters the LRU list).
// For the constants named BF_XXX or BM_XXX, see their definitions.
void LruMap_Release(LruMap* self, LruMapNode* p)
{
    FS_ASSERT(p->refCount);
    if (0 != --p->refCount)
        return;
#ifdef LRU_WRITE_BACK_AT_RELEASE
    if (SIG_TRUE(p->flag, BF_DIRTY))
    {
        self->write(self, p);
        CLR_SIG(p->flag, BF_DIRTY);
    }
#endif
    if (SIG_TRUE(p->flag, BF_FIXED))
        return;
    if (SIG_TRUE(p->flag, BF_HITED))
    {
        INSERT_NODE_NEXT(_M_LruHead, p);
        self->last = self->last->prev; // --last, move one step backward
    }
    else // not hit
    {
        switch (p->flag & BF_METHOD_MASK)
        {
        default:
            break;
        case BM_NOMAL: {
            unsigned int i;
            LruMapNode* iter;
            LruMapNode* headPrev = _M_LruHead->prev;
            INSERT_NODE_NEXT(self->last, p);
            if (self->last == _M_LruHead)
            {
#ifdef _FS_DEBUG
                int dbg_only = 0; // only here so a breakpoint can be set
                (void)dbg_only;
#endif
            }
            else if (self->tmap.nCount > self->workSetSize)
            {
                i = 0;
                iter = p;
                while (i < self->workSetSize)
                { // 'workSetSize' is very small (normally == 4),
                  // so this loop adds little overhead
                    if (iter == headPrev)
                    { // while the hit rate is falling from high to low,
                      // this branch is not taken;
                      // once the hit rate is already very low,
                      // it is taken almost every time
                        break;
                    }
                    iter = iter->next;
                    ++i;
                }
                if (iter != headPrev)
                { // working set is not full...
                  // we must advance by 2; otherwise a previously hit item
                  // would be swapped out and the LRU list would degenerate
                  // into a plain queue
                    self->last = p->next; // last += 2;
                }
            }
            break; }
        case BM_LRU: // insert as the most recently used
            INSERT_NODE_NEXT(_M_LruHead, p);
            break;
        case BM_ACC_ONCE: // insert as the LEAST recently used
            INSERT_NODE_PREV(p, _M_LruHead);
            break;
        }
    }
    return;
}