Currently we allocate shared memory in ChunkedUpdateDrawingArea and ~Proxy for every drawing operation. I would reduce this cost by using a permanently existing shared-memory-backed backing store, which would only be created once for a given viewport size.
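A minimal sketch of the idea, assuming nothing about the actual WebKit classes: a buffer that is allocated once per viewport size and reused for every paint, reallocating only when the size changes. The names (`PersistentBackingStore`, `ensureSize`) are illustrative, and a plain heap buffer stands in for shared memory here.

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-in for WebCore's IntSize.
struct IntSize { int width; int height; };

class PersistentBackingStore {
public:
    // Reallocates only when the viewport byte size actually changes,
    // instead of allocating a fresh chunk for every drawing operation.
    void ensureSize(IntSize size) {
        size_t bytes = static_cast<size_t>(size.width) * size.height * 4; // RGBA
        if (bytes != m_buffer.size()) {
            m_buffer.assign(bytes, 0);
            ++m_allocations;
        }
    }
    size_t byteSize() const { return m_buffer.size(); }
    int allocationCount() const { return m_allocations; }

private:
    std::vector<unsigned char> m_buffer;
    int m_allocations = 0;
};
```

With this, repeated paints at a stable viewport size cost zero allocations; only a resize triggers a new one.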
Created attachment 74113 [details] Patch
The Windows part likely needs some adjustment. I did not remove the UpdateChunk files; that can happen in a follow-up, as can applying the same logic to TiledBackingStore.
The patch was tested on Qt-Linux and Windows. I have not seen any visual regression on either one. Layout tests were run only on Qt-Linux, and there was no regression.
Attachment 74113 [details] did not build on mac: Build output: http://queues.webkit.org/results/6175024
D'oh! Forgot to remove the deleted files from the project files.
Attachment 74113 [details] did not build on qt: Build output: http://queues.webkit.org/results/6165025
Created attachment 74117 [details] Speculatively build fixed patch
Comment on attachment 74117 [details] Speculatively build fixed patch. Mixed up -m and --comment ...
Attachment 74113 [details] did not build on win: Build output: http://queues.webkit.org/results/6235018
Comment on attachment 74117 [details] Speculatively build fixed patch. I don't think this is a good idea. You've essentially doubled the backing store memory usage for a static page. A better solution would be to use a pool of shared memory that could be used for update chunks.
(In reply to comment #10) > (From update of attachment 74117 [details]) > I don't think this is a good idea. You've essentially doubled the backing store memory usage for a static page. > > A better solution would be to use a pool of shared memory that could be used for update chunks. That is what we do in Qt right now, but the mechanism we currently have is not clever enough, most importantly because it never releases unused chunks. I chose this conservative approach because it is simple and it minimizes the number of memory allocations we need. I will investigate the pooling approach further.
As I see it, an effective way of pooling would need a hash table in shared memory so that both sides can see whether a chunk is free or not. With this we could minimize not just the number of allocations we need to do but also the number of attachments to shared memory. Does that sound reasonable to you?
(In reply to comment #12) > As I see it, an effective way of pooling would need a hash table in shared memory > so that both sides can see whether a chunk is free or not. With this we could > minimize not just the number of allocations we need to do but also the number > of attachments to shared memory. Does that sound reasonable to you? Another idea came to my mind: one pool in each process, with message-passing capability. One of them can release chunks and inform the other about it. The chunks would have a header (in shared memory) that stores the state: free, reserved, or attached.
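A hypothetical sketch of that chunk-header idea: each chunk begins with a small header recording its state, so either process can tell whether the chunk is free, reserved, or attached. The state names come from the comment above; the transition methods are illustrative, not WebKit API.

```cpp
// Chunk lifecycle states as described in the comment above.
enum class ChunkState { Free, Reserved, Attached };

struct ChunkHeader {
    ChunkState state = ChunkState::Free;

    // Reserve a free chunk for an upcoming update; fails otherwise.
    bool reserve() {
        if (state != ChunkState::Free)
            return false;
        state = ChunkState::Reserved;
        return true;
    }
    // The receiving process attaches a reserved chunk before using it.
    bool attach() {
        if (state != ChunkState::Reserved)
            return false;
        state = ChunkState::Attached;
        return true;
    }
    // Hand the chunk back to the pool when done.
    void release() { state = ChunkState::Free; }
};
```

In the real shared-memory setting, the state transitions would of course need atomic operations or the message-passing protocol described above to stay race-free.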
Created attachment 74754 [details] Draft patch for feedback - pooling of chunks I am uploading this for feedback; it is not ready. The idea is to reuse chunks for requests of the same or a slightly smaller size. The policy for releasing chunks is time oriented: release a chunk if it has not been used within a given time interval. This means that reuse takes effect when we have a heavy load. I have reworked UpdateChunk into a common class with only 2 platform-specific methods. This was done by using SharedMemory instead of the platform-specific shared memory APIs. I have only built and tested this on Qt, so the EWSes will likely be red. I have tested it on Qt-Linux and the MiniBrowser was working well. The shared memory usage characteristics were as expected according to ipcs (you can check this with 'while true; do ipcs; sleep 1; done').
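A minimal model of the pooling policy described above: reuse a free chunk when its size matches or only slightly exceeds the request, and purge chunks that have not been used within the expiry interval. Plain heap bookkeeping stands in for shared memory, the 25% size tolerance is an assumed threshold, and all names (`ChunkPool`, `acquire`, `purge`) are illustrative rather than taken from the patch.

```cpp
#include <cstddef>
#include <vector>

struct PooledChunk {
    size_t size;
    double lastUsed; // seconds
    bool inUse = false;
};

class ChunkPool {
public:
    explicit ChunkPool(double expirySeconds) : m_expiry(expirySeconds) {}

    // Returns the index of a chunk of at least `size` bytes, reusing a free
    // chunk if its size is within 25% of the request (assumed tolerance).
    size_t acquire(size_t size, double now) {
        for (size_t i = 0; i < m_chunks.size(); ++i) {
            PooledChunk& c = m_chunks[i];
            if (!c.inUse && c.size >= size && c.size <= size + size / 4) {
                c.inUse = true;
                c.lastUsed = now;
                return i;
            }
        }
        m_chunks.push_back({size, now, true});
        return m_chunks.size() - 1;
    }

    void release(size_t index, double now) {
        m_chunks[index].inUse = false;
        m_chunks[index].lastUsed = now;
    }

    // Time-oriented release policy: drop free chunks that have not been
    // used within the expiry interval.
    void purge(double now) {
        std::vector<PooledChunk> kept;
        for (const PooledChunk& c : m_chunks)
            if (c.inUse || now - c.lastUsed < m_expiry)
                kept.push_back(c);
        m_chunks.swap(kept);
    }

    size_t chunkCount() const { return m_chunks.size(); }

private:
    double m_expiry;
    std::vector<PooledChunk> m_chunks;
};
```

Under heavy load this keeps allocations near zero (the same chunks cycle between free and in-use), while an idle page gradually returns its chunks, addressing the static-page memory concern from comment #10.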