When Eclwatch requests the next page of workunits from Sasha, it gives a starting count offset and a page size.
Sasha then has to scan past (and ignore) the first [ 0 - start ] entries before it can return workunits [ start - start+pagesize ].
This is quite inefficient, especially as you descend further through the pages.
The matching in Sasha involves loading each archived workunit's XML off disk and pattern matching it against the search fields.
So by, say, page 10 with a page size of 500, the request will have had to trawl through 5000 workunits just to reach the starting point.
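To illustrate the cost, here is a minimal Python sketch of offset-based paging (the function and names are hypothetical, not Sasha's actual code); the filter test stands in for loading and matching each archived workunit's XML:

```python
# Hypothetical sketch of offset-based paging (not actual Sasha code).
def fetch_page_by_offset(entries, matches_filter, start, page_size):
    """Return one page of matches, discarding the first `start` matches.
    Every request re-walks the archive from the beginning."""
    scanned = 0
    matched = 0
    page = []
    for wuid in entries:
        scanned += 1                   # every earlier entry is still visited
        if not matches_filter(wuid):   # in Sasha this means loading the XML off disk
            continue
        if matched >= start:
            page.append(wuid)
            if len(page) == page_size:
                break
        matched += 1
    return page, scanned

# Page 10 with a page size of 500: 5000 entries are trawled
# before the page even starts to fill.
wuids = ["W20140622-%06d" % i for i in range(10000)]
page, scanned = fetch_page_by_offset(wuids, lambda w: True,
                                     start=5000, page_size=500)
```

Each deeper page repeats all the work of the shallower ones, which is why the cost grows linearly with the page number.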
What I think it should do instead is use the last entry Eclwatch received as the basis for the next request.
From that it can deduce a starting date to use as a primary filter, which will prevent any archived workunits from being accessed unless their dates fall within the date/time range.
It will also need to send the unique id of the last entry to Sasha, so it knows exactly where to start and avoids returning workunits it has seen before.
e.g. if the last entry seen was W20140622-090000, then the date-to filter can be set to 2014-06-22/09:00:00.
There may be multiple workunits with that same date/time, so Eclwatch will also need to send the workunit id, so Sasha can use it as the exact starting point and filter up to it within the date range.
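A minimal sketch of the proposed cursor-based approach, assuming the timestamp can be parsed straight out of the WUID and the archive is walked newest-first (the helper and function names are hypothetical, not Sasha's API):

```python
from datetime import datetime

def wuid_to_datetime(wuid):
    """Parse the timestamp embedded in a workunit id such as W20140622-090000.
    (Illustrative; real WUIDs may carry extra suffixes for duplicates.)"""
    return datetime.strptime(wuid[1:16], "%Y%m%d-%H%M%S")

def fetch_page_by_cursor(entries, matches_filter, last_seen_wuid, page_size):
    """Resume from a cursor: the date derived from the last seen WUID acts
    as the primary 'date-to' filter, and the id itself pins the exact resume
    point so workunits sharing that timestamp are not returned twice."""
    date_to = wuid_to_datetime(last_seen_wuid)
    page = []
    for wuid in entries:                      # assumed sorted newest-first
        if wuid_to_datetime(wuid) > date_to:
            continue                          # primary filter: outside the date range
        if wuid >= last_seen_wuid:
            continue                          # same timestamp, but already returned
        if matches_filter(wuid):              # only now load and match the XML
            page.append(wuid)
            if len(page) == page_size:
                break
    return page

entries = ["W20140622-090300", "W20140622-090200", "W20140622-090100",
           "W20140622-090000", "W20140621-235900", "W20140621-235800"]
page = fetch_page_by_cursor(entries, lambda w: True,
                            "W20140622-090000", page_size=2)
```

String comparison of WUIDs works here because the id embeds the timestamp, so ids sort chronologically; the explicit id check is what prevents re-returning workunits that share the cut-off date/time.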