HPCC-25649: Profile node lru cache and revisit implementation


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: JHTree
    • Labels: None
    • Minor
    • Not applicable

    Description

      Currently the LRU cache for nodes uses a vector-based queue to maintain the most recent items, which is updated each time a page is accessed.  Promoting an accessed entry to the most-recent position means shifting the other entries in the vector, and I would estimate there are typically ~1000 entries in each cache - which means each time a page is accessed it will copy O(4K) of data.

      That is likely to be a significant cost - and it is all within a critical section.  A doubly linked list only needs a few pointer updates per access, so it is likely to be much more efficient once N > 50 (the vector queue does more work per access, but it is very cache friendly, so it wins at small sizes); see the sketch below.
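
      The following is only a minimal sketch, not the JHTree code: it assumes cache entries are keyed by a file offset and uses std::list plus an std::unordered_map index to show how promoting an entry on access becomes an O(1) splice rather than shifting ~1000 vector entries. The names CachedNode and NodeLruCache are hypothetical, and the locking (the critical section mentioned above) is omitted.

      // Hedged sketch only - illustrates the doubly-linked-list LRU idea, not the JHTree implementation.
      #include <cstdint>
      #include <iostream>
      #include <list>
      #include <memory>
      #include <unordered_map>

      // Hypothetical stand-in for a cached index node.
      struct CachedNode
      {
          uint64_t offset;     // file offset used as the cache key
          // ... node payload would live here ...
      };

      class NodeLruCache
      {
      public:
          explicit NodeLruCache(size_t maxEntries) : maxEntries(maxEntries) {}

          // Look up a node; if present, promote it to most-recently-used in O(1).
          std::shared_ptr<CachedNode> query(uint64_t offset)
          {
              auto it = index.find(offset);
              if (it == index.end())
                  return nullptr;
              // splice() relinks the list node - no copying of the other entries
              lru.splice(lru.begin(), lru, it->second);
              return *it->second;
          }

          // Insert a node, evicting the least-recently-used entry if the cache is full.
          void add(std::shared_ptr<CachedNode> node)
          {
              uint64_t offset = node->offset;
              auto it = index.find(offset);
              if (it != index.end())
              {
                  *it->second = std::move(node);
                  lru.splice(lru.begin(), lru, it->second);
                  return;
              }
              if (lru.size() >= maxEntries)
              {
                  // Evict the tail (least recently used) and drop its index entry.
                  index.erase(lru.back()->offset);
                  lru.pop_back();
              }
              lru.push_front(std::move(node));
              index[offset] = lru.begin();
          }

      private:
          size_t maxEntries;
          std::list<std::shared_ptr<CachedNode>> lru;                   // MRU at front, LRU at back
          std::unordered_map<uint64_t,
              std::list<std::shared_ptr<CachedNode>>::iterator> index;  // key -> list position
      };

      int main()
      {
          NodeLruCache cache(2);
          cache.add(std::make_shared<CachedNode>(CachedNode{1}));
          cache.add(std::make_shared<CachedNode>(CachedNode{2}));
          cache.query(1);                                          // promotes node 1
          cache.add(std::make_shared<CachedNode>(CachedNode{3}));  // evicts node 2
          std::cout << (cache.query(2) ? "hit" : "miss") << "\n";  // prints "miss"
          return 0;
      }

      The same O(1) promotion could also be done with an intrusive doubly linked list embedded in the cache entries, avoiding the separate list allocations; either way the per-access cost no longer grows with the number of cached nodes.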

      However, we need some realistic test cases to be able to profile performance and evaluate potential improvements.

       


            People

              Assignee: Gavin Halliday (ghalliday)
              Reporter: Gavin Halliday (ghalliday)
