Details
- Improvement
- Status: Resolved
- Major
- Resolution: Duplicate
- Minor
- Not applicable
Description
Currently the LRU cache for nodes uses a vector-based queue to maintain the most recently used items, which is updated each time a page is accessed. However, I would estimate there are typically ~1000 entries in each cache, which means each page access copies on the order of 4KB of data to move the entry within the queue.
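A minimal sketch of the cost being described, with hypothetical names (the actual cache class and entry type in the platform will differ): touching a page in a vector-based MRU queue erases its entry and re-appends it, shifting every later entry down, so the work grows with the size of the cache.

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Hypothetical vector-based MRU queue: most recently used id at the back.
// touch() is O(n) because erase() shifts the tail of the vector down;
// with ~1000 4-byte entries that is roughly 4KB moved per access.
struct VectorLru {
    std::vector<unsigned> queue;   // page ids, MRU at the back

    void touch(unsigned id) {
        auto it = std::find(queue.begin(), queue.end(), id);
        if (it != queue.end())
            queue.erase(it);       // O(n): shifts all later entries
        queue.push_back(id);       // id becomes most recently used
    }
};
```

The compensating factor mentioned below is that this layout is contiguous, so the shift is a single cache-friendly memmove rather than pointer chasing.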
That copying is likely to be a significant cost, and it all happens within a critical section. A doubly linked list is likely to be much more efficient once N > 50 (fewer operations per access, although the vector queue is very cache friendly).
However, we need some realistic test cases before we can profile the current performance and evaluate potential improvements.
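For comparison, a sketch of the doubly-linked-list alternative (again with hypothetical names, not the platform's actual implementation): by keeping an iterator to each entry's node, moving an entry to the MRU end is a constant number of pointer updates regardless of cache size.

```cpp
#include <list>
#include <unordered_map>
#include <cassert>

// Hypothetical linked-list LRU: a doubly linked list of page ids plus a
// map from id to its list node. std::list::splice relinks the node at the
// MRU end in O(1) without invalidating the stored iterator.
struct ListLru {
    std::list<unsigned> order;   // MRU at the back
    std::unordered_map<unsigned, std::list<unsigned>::iterator> pos;

    void touch(unsigned id) {
        auto it = pos.find(id);
        if (it != pos.end())
            order.splice(order.end(), order, it->second);  // O(1) relink
        else
            pos[id] = order.insert(order.end(), id);
    }

    unsigned mostRecent() const { return order.back(); }
};
```

Whether this wins in practice below a few dozen entries is exactly what the requested profiling test cases would need to establish, since each touch here chases several non-contiguous pointers.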
Attachments
Issue Links
- relates to HPCC-25606 Improve behaviour for concurrent requests for the same index page (Resolved)