There are roadmap items which rely on the following work being completed first:
1) Create and store more partition points per output dataset than the number of slaves the cluster is currently using.
2) Repartition input datasets efficiently where possible, based on 1).
3) Have Thor dynamically choose the number of slave processes, in place of the current static configuration.
4) Allow Thor to run multiple jobs using subsets of the slave 'pool'.
5) Allow Thor to run concurrent jobs on the same logical cluster; in the most basic case this will be equivalent to the multi-thor setups we have now.
These are shown as sub-tasks below, but more information needs to be added to them.
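To illustrate items 1) and 2): if each output dataset stores an oversampled, sorted set of partition points (more points than slaves), then repartitioning for a different slave count only requires selecting a subset of the stored boundaries rather than resampling the data. A minimal sketch of that selection step (function and variable names are hypothetical, not HPCC code):

```python
def choose_boundaries(partition_points, n_slaves):
    """Pick n_slaves - 1 split boundaries from an oversampled,
    sorted list of stored partition points, spaced evenly so each
    slave receives a roughly equal share of the key range."""
    k = len(partition_points)
    return [partition_points[(i * k) // n_slaves] for i in range(1, n_slaves)]

# Example: 100 stored points repartitioned for 4 slaves yields 3 boundaries.
boundaries = choose_boundaries(list(range(100)), 4)
```

Because the stored points oversample the distribution, the same list can serve any slave count up to its length, which is what makes items 3)–5) (dynamic sizing and job subsets) cheap to support.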
The three roadmap items which depend on them are:
Better dynamic spilling - https://track.hpccsystems.com/browse/HPCC-8685
MPI Migration for Thor - https://track.hpccsystems.com/browse/HPCC-8706
Dynamic sizing of clusters - https://track.hpccsystems.com/browse/HPCC-9960
||Sub-task||Issue||Status||
|Repartition input datasets efficiently||Accepted|
|Allow Thor to run multiple jobs using subsets of the slave pool||Accepted|
|Allow Thor to run concurrent jobs on the same logical cluster||Accepted|
|Annotate the files in dali with the sort order and distribution||Accepted|
|Dynamic Thor Resizing||Accepted|