| Field | Value |
|---|---|
| Summary | GPUConnectionToWebProcess::allowsExitUnderMemoryPressure() should check if libWebRTCCodecsProxy is used |
| Product | WebKit |
| Component | Media |
| Reporter | Chris Dumez <cdumez> |
| Assignee | Chris Dumez <cdumez> |
| Status | RESOLVED FIXED |
| Severity | Normal |
| Priority | P2 |
| Keywords | InRadar |
| Version | WebKit Nightly Build |
| Hardware | Unspecified |
| OS | Unspecified |
| CC | darin, eric.carlson, ggaren, jer.noble, peng.liu6, webkit-bug-importer, youennf |
| See Also | https://bugs.webkit.org/show_bug.cgi?id=224704, https://bugs.webkit.org/show_bug.cgi?id=224556, https://bugs.webkit.org/show_bug.cgi?id=224728 |
| Bug Depends on | 224704 |
**Description** (Chris Dumez, 2021-04-16 20:30:54 PDT)

Created attachment 426316 [details]: Patch

**Chris Dumez:** Created attachment 426318 [details]: Patch
**Darin Adler (comment 3):** Comment on attachment 426318 [details] Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=426318&action=review

> Source/WebKit/GPUProcess/webrtc/LibWebRTCCodecsProxy.mm:212
> +        m_hasEncodersOrDecoders = !m_encoders.isEmpty() || !m_decoders.isEmpty();

Why is it good to cache this value in a separate boolean? It’s not expensive to compute isEmpty on two hash tables, and you wouldn’t need so many calls to this update function if you dropped this.

Is the issue that this function is called on an appropriate thread or with an appropriate lock and allowsExitUnderMemoryPressure needs to run on a different thread?

**Chris Dumez (comment 4):** (In reply to Darin Adler from comment #3)

> Why is it good to cache this value in a separate boolean? It’s not expensive to compute isEmpty on two hash tables, and you wouldn’t need so many calls to this update function if you dropped this.

Yes, I tried to explain that in the changelog. The issue is that allowsExitUnderMemoryPressure() gets called on the main thread, but m_encoders and m_decoders get modified on a background thread. This is also why I am using a std::atomic.

**Chris Dumez:** Created attachment 426358 [details]: Patch
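The cached-flag pattern that comments 3 and 4 discuss can be sketched as follows. This is a hypothetical, simplified reconstruction (the class and member names are illustrative, not WebKit's actual code): the encoder/decoder maps are only touched on a background (IPC) thread, and a `std::atomic<bool>` mirrors "either map is non-empty" so the main thread's memory-pressure handler never has to touch the maps themselves.

```cpp
#include <atomic>
#include <unordered_map>

// Illustrative sketch, not WebKit's actual LibWebRTCCodecsProxy.
class CodecsProxySketch {
public:
    // Background (IPC) thread only.
    void createEncoder(int identifier)
    {
        m_encoders.emplace(identifier, 0);
        updateHasEncodersOrDecoders();
    }
    void releaseEncoder(int identifier)
    {
        m_encoders.erase(identifier);
        updateHasEncodersOrDecoders();
    }

    // Main thread: may run concurrently with the calls above, so it
    // reads the atomic mirror instead of the maps.
    bool allowsExitUnderMemoryPressure() const
    {
        return !m_hasEncodersOrDecoders.load();
    }

private:
    // Keeps the atomic flag in sync after every map mutation.
    void updateHasEncodersOrDecoders()
    {
        m_hasEncodersOrDecoders = !m_encoders.empty() || !m_decoders.empty();
    }

    std::unordered_map<int, int> m_encoders; // background thread only
    std::unordered_map<int, int> m_decoders; // background thread only
    std::atomic<bool> m_hasEncodersOrDecoders { false };
};
```

This is why the flag exists at all: without it, the main thread would have to call `empty()` on hash tables that another thread is mutating, which is a data race regardless of how cheap the call is.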
**Darin Adler (comment 6):** (In reply to Chris Dumez from comment #4)

> Yes, I tried to explain that in the changelog. The issue is that allowsExitUnderMemoryPressure() gets called on the main thread, but m_encoders and m_decoders get modified on a background thread. This is also why I am using a std::atomic.

Is there something that prevents a race? What if we are adding entries on a background thread *while* the main thread decides to tell the process to quit?

**Chris Dumez (comment 7):** (In reply to Darin Adler from comment #6)

> Is there something that prevents a race? What if we are adding entries on a background thread *while* the main thread decides to tell the process to quit?

I don't think there is a way to do this GPUProcess idle-exit without races. The GPUProcess could be deciding to exit on the main thread while there is new IPC getting received on the IPC thread. This seems unavoidable.

I thought about setting the m_hasEncodersOrDecoders flag to true *before* adding the encoder/decoder to the map instead of after. Maybe this is what you meant. I could do that, and it would decrease the chance of a bad idle exit a little bit (though not by a lot). Let me know if you'd like me to do that or have a better proposal.

If we're unlucky and the GPUProcess exits just as we're starting or about to do work, then the WebProcess will be notified that the connection was severed, and it is the WebProcess's responsibility to re-schedule the work.

Note that we only idle-exit on memory pressure, so this decreases the frequency of idle-exits, and it is similar to a jetsam (that we force because we are currently idle).
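The reordering Chris considers in comment 7 (publishing the flag before mutating the map) can be sketched as below. Names are hypothetical; the point is only that setting the flag first shrinks the window in which the main thread can still observe "idle" while a codec is being created. It narrows the race but, as the comment says, cannot eliminate it.

```cpp
#include <atomic>
#include <unordered_map>

// Illustrative sketch of the "flag first, then insert" ordering.
std::unordered_map<int, int> g_encoders;          // background thread only
std::atomic<bool> g_hasEncodersOrDecoders { false };

void createEncoderNarrowWindow(int identifier)
{
    g_hasEncodersOrDecoders = true;               // publish "busy" first ...
    g_encoders.emplace(identifier, 0);            // ... then mutate the map
}

// Main thread: sees "busy" as soon as possible with this ordering.
bool allowsExitUnderMemoryPressure()
{
    return !g_hasEncodersOrDecoders.load();
}
```

With the original ordering (insert, then set the flag), a main-thread check that lands between the two steps would report "idle" even though an encoder already exists; with this ordering, that particular interleaving reports "busy".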
**Chris Dumez (comment 8):** (In reply to Chris Dumez from comment #7) Correction: the last sentence should read "so this *decreases the* frequency of idle-exits".

**Chris Dumez (comment 9):** Comment on attachment 426358 [details]: Patch

Landing as is for now since we are currently potentially exiting when not idle. I will follow up if needed.
**Darin Adler (comment 10):** Yes, I didn’t intend my comments to slow you down landing this patch. I was trying to reflect further on whether this can be simplified at all.

By the way, I am pretty sure that std::atomic is overkill for something set on one thread and read on another thread, with no guarantees about races. Pretty sure a plain old bool would do fine.

**Commit Queue:** commit-queue failed to commit attachment 426358 [details] to WebKit repository. To retry, please set cq+ flag again.
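For reference on the atomic-versus-bool question above: a relaxed `std::atomic` is a possible middle ground. On common hardware it compiles to the same plain load and store as a `bool`, while still avoiding a formal data race under the C++ memory model when the writer and reader are on different threads. A minimal sketch with illustrative names:

```cpp
#include <atomic>

// Illustrative sketch: a flag set on one thread and read on another.
// memory_order_relaxed imposes no ordering on surrounding memory
// operations; it only makes accesses to the flag itself race-free,
// which matches the "no guarantees about races" usage discussed above.
std::atomic<bool> g_hasWork { false };

void setHasWork(bool value)
{
    g_hasWork.store(value, std::memory_order_relaxed);
}

bool hasWork()
{
    return g_hasWork.load(std::memory_order_relaxed);
}
```

A plain `bool` written and read concurrently is technically undefined behavior in standard C++ even when the program tolerates stale values, which is one argument for keeping the atomic despite the looseness being acceptable here.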
**Commit Queue:** Comment on attachment 426358 [details] Patch

Clearing flags on attachment: 426358

Committed r276222 (236704@main): <https://commits.webkit.org/236704@main>

All reviewed patches have been landed. Closing bug.

**Chris Dumez:** (In reply to Darin Adler from comment #10)

> By the way, I am pretty sure that std::atomic is overkill for something set on one thread and read on another thread, with no guarantees about races. Pretty sure a plain old bool would do fine.

I implemented an alternative proposal at Bug 224728. Please let me know what you think.