Many patches were posted for V10. To extend parallelism to more access methods, Parallel Bitmap Heap Scan and Parallel Index Scan were proposed. Next, to use parallelism where a partial scan is not possible (for example, on the inner side of a hash join), Parallel Shared Hash was introduced. On analysing many TPC-H queries, especially at higher scale factors, we realised that we could also benefit from Parallel Merge Join. Finally, so as not to miss the benefits of parallelism at the topmost nodes involving aggregates and sorts, the idea of Gather Merge was suggested.
All these patches are currently under discussion on the PostgreSQL Hackers mailing list.
Parallel Bitmap Heap Scan:
This is how the plan will change with Parallel Bitmap Heap Scan:
|Plan change with 'Parallel Bitmap Heap Scan'|
As part of this work, the bitmap will be built by a single worker. Once it is ready, all workers will jointly scan the heap, one page at a time.
We could also build the bitmap itself in parallel with the help of Parallel Index Scan, but as of now that is not done.
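The page-at-a-time sharing can be sketched as follows. This is a conceptual toy model, not PostgreSQL internals: the bitmap is reduced to a set of matching page numbers built by one worker, and the workers then claim pages from a shared position under a lock.

```python
# Toy model of a shared bitmap heap scan: the leader builds the bitmap
# alone; all workers then pull heap pages from it one at a time.
import threading

class SharedBitmapScan:
    def __init__(self, matching_pages):
        self.pages = sorted(matching_pages)  # "bitmap": pages to visit
        self.next_idx = 0                    # shared scan position
        self.lock = threading.Lock()

    def next_page(self):
        """Each worker atomically claims the next unscanned page."""
        with self.lock:
            if self.next_idx >= len(self.pages):
                return None
            page = self.pages[self.next_idx]
            self.next_idx += 1
            return page

def worker(scan, out):
    while (page := scan.next_page()) is not None:
        out.append(page)  # stand-in for scanning tuples on that page

scan = SharedBitmapScan({2, 5, 7, 11})  # bitmap built by one worker
results = []
threads = [threading.Thread(target=worker, args=(scan, results))
           for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

Because each page is claimed exactly once, every matching page is scanned by exactly one worker, regardless of how the work is interleaved.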
|Performance Results with Under Development Parallel Bitmap|
Parallel Index Scan:
Index scan and index-only scan are not yet parallel either, which leaves the optimizer very limited choice for selecting parallelism at the scan node. Making them parallel can also help convert a merge join into a parallel join.
The first worker will scan the B-tree down to the first leaf node; after that, all other workers will be woken up and they will jointly scan the leaf pages.
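The leaf-page sharing can be pictured with a small sketch. This is a hypothetical model: the B-tree leaf level is reduced to a linked chain of pages, the first worker's descent has already set the shared "next leaf" pointer, and each worker then claims leaf pages from that shared state under a lock.

```python
# Toy model of a parallel index scan over the leaf level of a B-tree.
import threading

leaf_chain = {0: 1, 1: 2, 2: 3, 3: None}          # leaf page -> right sibling
leaf_items = {0: [1, 2], 1: [3], 2: [4, 5], 3: [6]}  # index tuples per leaf

state = {"next_leaf": 0}  # set by the first worker's descent to the leftmost leaf
lock = threading.Lock()
scanned = []

def claim_leaf():
    """Atomically take the next leaf page and advance the shared pointer."""
    with lock:
        leaf = state["next_leaf"]
        if leaf is not None:
            state["next_leaf"] = leaf_chain[leaf]
        return leaf

def worker():
    while (leaf := claim_leaf()) is not None:
        scanned.extend(leaf_items[leaf])

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```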
Gather Merge:
Gather Merge will be useful when the lower nodes are already partial and the node above (or the user) expects ordered results. Basically, it sits on top of partially sorted nodes and merges the data from multiple sorted streams, so its final output is always sorted. The Gather Merge node assumes that the results from each worker are ordered, and then does a final merge pass over them.
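The merge pass itself is the classic k-way merge of sorted streams, which a short sketch makes concrete (here using Python's `heapq.merge` as a stand-in for the node's internal binary heap):

```python
# Toy model of Gather Merge: each worker produces an already-sorted
# partial result; the Gather Merge node does a final merge pass,
# so the combined output stays sorted.
import heapq

worker_streams = [
    [1, 4, 9],    # sorted output of worker 1
    [2, 3, 10],   # sorted output of worker 2
    [5, 6, 7],    # sorted output of worker 3
]

merged = list(heapq.merge(*worker_streams))
print(merged)  # [1, 2, 3, 4, 5, 6, 7, 9, 10]
```

Note that the merge only has to compare the current head of each stream, so the final pass is cheap compared with re-sorting the combined output.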
|Plan change with Gather Merge|
|Performance Results with Under Development Gather merge|
Parallel Shared Hash:
We already have parallel hash join, but in it only the outer node is partial: every worker builds its own complete copy of the hash table, which wastes memory and CPU time. In many cases that is acceptable, when the hash table is small and cheap to build. But when the hash table is large, this design is not very good and we can do much better.
In the new approach, the join is done in two steps. In step 1, all workers participate in building the shared hash table in parallel. Once the shared hash table is ready, all workers move to step 2, the join phase: each worker performs a partial parallel scan of the outer table and probes into the shared hash table.
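The two phases can be sketched like this. This is a simplified model (a Python dict standing in for the shared hash table, threads standing in for workers, and a plain join between the two phases standing in for the barrier):

```python
# Toy model of Parallel Shared Hash: phase 1, all workers insert their
# share of inner rows into ONE shared hash table; phase 2, each worker
# scans a chunk of the outer table and probes that same shared table.
import threading
from collections import defaultdict

inner = [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]  # inner (hashed) relation
outer = [(2, 'x'), (3, 'y'), (3, 'z'), (5, 'w')]  # outer (probing) relation

shared_hash = defaultdict(list)  # one table shared by all workers
lock = threading.Lock()
joined = []

def build(chunk):
    for key, val in chunk:
        with lock:
            shared_hash[key].append(val)

def probe(chunk):
    for key, val in chunk:
        for inner_val in shared_hash.get(key, []):
            with lock:
                joined.append((key, inner_val, val))

def run_phase(fn, rows, nworkers=2):
    chunks = [rows[i::nworkers] for i in range(nworkers)]
    ts = [threading.Thread(target=fn, args=(c,)) for c in chunks]
    for t in ts: t.start()
    for t in ts: t.join()   # all workers finish the phase before moving on

run_phase(build, inner)  # step 1: build the shared hash in parallel
run_phase(probe, outer)  # step 2: partial outer scan + probe
```

The key point is that the inner relation is hashed once in total, rather than once per worker, so memory use no longer grows with the number of workers.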
|Plan change with parallel shared hash|
|Performance Results with Under Development Parallel Shared Hash|
Parallel Merge Join:
If the outer node is a partial parallel node, then we can create a parallel merge join path. The outer scan will be done in parallel by multiple workers, while the inner node will be executed in full by each worker.
If the outer partial node is not sorted, merge join will add a sort node on top of it.
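The shape of this plan can be sketched as follows: the outer relation is split into sorted partial streams (one per worker), and every worker runs a classic merge join of its stream against the full, sorted inner relation. The `merge_join` helper here is an illustrative stand-in, not PostgreSQL's executor code:

```python
# Toy model of parallel merge join: partial sorted outer streams,
# full inner relation executed by each worker.
def merge_join(outer_sorted, inner_sorted):
    """Classic merge join on the first tuple element (the join key)."""
    result, j = [], 0
    for okey, oval in outer_sorted:
        while j < len(inner_sorted) and inner_sorted[j][0] < okey:
            j += 1                      # advance inner past smaller keys
        k = j
        while k < len(inner_sorted) and inner_sorted[k][0] == okey:
            result.append((okey, oval, inner_sorted[k][1]))
            k += 1                      # emit all inner matches for this key
    return result

inner = [(1, 'a'), (2, 'b'), (4, 'c')]  # sorted; executed fully by each worker
partial_outer = [                       # sorted partial streams, one per worker
    [(1, 'p'), (4, 'q')],               # worker 1
    [(2, 'r'), (3, 's')],               # worker 2
]

rows = [row for chunk in partial_outer
            for row in merge_join(chunk, inner)]
```

Each worker produces a disjoint slice of the join result, so the union of the workers' outputs is the full join.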
|Plan change with parallel merge join|
There are quite interesting results from combinations of multiple patches; they are still being tested at different scale factors. I will update the results once they are available.