The process of making a heap snapshot has many stages.

JavaScript engine code:
1) iterates over the heap and collects the heap-object information;
2) fills internal structures;
3) calculates the retainers graph;
4) calculates the dominators tree (a map from nodeOrdinal to the node's dominator node index);
5) calculates the retained size for each heap object;
6) serializes the snapshot data into JSON format;
7) sends the snapshot data chunk by chunk to the front-end.

Web Inspector code:
7) receives the snapshot in JSON format;
8) parses it manually, because it can be quite big and may not fit into the JS heap;
9) calculates the retainers graph (this graph was already calculated on the JS engine side but was not transferred);
10) calculates the dominated nodes tree. This is the inverted dominators tree; in practice it is an array where each node owns a range of values holding the indexes of the nodes it dominates (see the sketch below).

There is an inconsistency: the retainers graph is calculated twice. We discussed this offline and decided to move all the post-processing steps to the front-end.

Pros:
1) JSC folks have to implement less code;
2) the back-end will spend less time taking the snapshot;
3) smaller transfer size (5 uints per node instead of 7).

Cons:
1) the JavaScript implementation will be slower than the native one (at the moment it is 30% slower);
2) third-party front-ends have to implement their own processing code if they can't use our implementation.
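To make the step 10 layout concrete, here is a minimal TypeScript sketch of building such a range array from the dominators map of step 4, assuming dominators[i] holds the ordinal of node i's immediate dominator and the root dominates itself; the names (DominatedNodes, buildDominatedNodes) are illustrative, not the actual Web Inspector code.

interface DominatedNodes {
  // firstDominatedIndex[i] .. firstDominatedIndex[i + 1] is the range inside
  // dominatedNodes that holds the ordinals of the nodes dominated by node i.
  firstDominatedIndex: Uint32Array;
  dominatedNodes: Uint32Array;
}

function buildDominatedNodes(dominators: Uint32Array, rootOrdinal = 0): DominatedNodes {
  const nodeCount = dominators.length;
  const firstDominatedIndex = new Uint32Array(nodeCount + 1);
  const dominatedNodes = new Uint32Array(nodeCount > 0 ? nodeCount - 1 : 0);

  // Pass 1: count how many nodes each dominator owns (the root dominates
  // itself, so it is not counted as a dominated node).
  for (let i = 0; i < nodeCount; ++i) {
    if (i === rootOrdinal) continue;
    ++firstDominatedIndex[dominators[i]];
  }

  // Turn the counts into range start offsets (exclusive prefix sums).
  let offset = 0;
  for (let i = 0; i <= nodeCount; ++i) {
    const count = firstDominatedIndex[i];
    firstDominatedIndex[i] = offset;
    offset += count;
  }

  // Pass 2: drop every node's ordinal into its dominator's range.
  const cursor = firstDominatedIndex.slice(0, nodeCount);
  for (let i = 0; i < nodeCount; ++i) {
    if (i === rootOrdinal) continue;
    dominatedNodes[cursor[dominators[i]]++] = i;
  }

  return { firstDominatedIndex, dominatedNodes };
}

With this layout the nodes dominated by node i are dominatedNodes[firstDominatedIndex[i] .. firstDominatedIndex[i + 1]], so the whole structure fits in two typed arrays instead of per-node objects.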
At the moment step 3 already has an equivalent, step №9, at the front-end. The upstreamed version of step 4 (calculates dominators tree) was landed as r117749, with follow-up patches r117786 and r117924.
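For context on step 5: given the dominators array from step 4 (a map from nodeOrdinal to the dominator's node index), retained sizes can be folded up the dominators tree in a single pass. Below is a minimal sketch, assuming node ordinals are arranged so that every node's dominator has a smaller ordinal than the node itself, with the root at ordinal 0; the names are illustrative and this is not the landed r117749 code.

// Retained size of a node = its self size plus the retained sizes of all
// nodes it dominates. With dominators preceding the nodes they dominate,
// one reverse pass suffices: each node's total is final before it is
// folded into its dominator.
function calculateRetainedSizes(selfSizes: Float64Array, dominators: Uint32Array): Float64Array {
  const retainedSizes = Float64Array.from(selfSizes); // start from self sizes
  for (let i = retainedSizes.length - 1; i > 0; --i) {
    retainedSizes[dominators[i]] += retainedSizes[i]; // fold node i into its dominator
  }
  return retainedSizes; // retainedSizes[0] is the retained size of the root
}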
Done. Downstream patch: http://code.google.com/p/v8/source/detail?r=11626