Some refactoring to fix a separate bug (https://track.hpccsystems.com/browse/HPCC-20864) introduced this issue.
The symptom of this bug is a crash with a stack trace like:
#0 0x00007f76568ff18a in CGraphBase::getResult (this=<optimized out>, id=1, distributed=false)
#1 0x00007f765690a1de in CJobChannel::getOwnedResult (this=this@entry=0x9f6e860, gid=26257, ownerId=0, resultId=1)
#2 0x00007f7656f6856d in CSlaveMessageHandler::threadmain (this=0x5211eb0)
The cause of the regression is that the results container was being created in the context of the wrong graph. Consequently, when the slaves requested the global result by sending a message to the master, they sent the wrong graphId. The master then failed to find any results (localResults==null) and crashed.