Bug 224709

Summary: GPUConnectionToWebProcess::allowsExitUnderMemoryPressure() should check if libWebRTCCodecsProxy is used
Product: WebKit
Reporter: Chris Dumez <cdumez>
Component: Media
Assignee: Chris Dumez <cdumez>
Status: RESOLVED FIXED
Severity: Normal
CC: darin, eric.carlson, ggaren, jer.noble, peng.liu6, webkit-bug-importer, youennf
Priority: P2
Keywords: InRadar
Version: WebKit Nightly Build
Hardware: Unspecified
OS: Unspecified
See Also: https://bugs.webkit.org/show_bug.cgi?id=224704
          https://bugs.webkit.org/show_bug.cgi?id=224556
          https://bugs.webkit.org/show_bug.cgi?id=224728
Bug Depends on: 224704
Bug Blocks:
Attachments:
  Patch (flags: ews-feeder: commit-queue-)
  Patch (flags: none)
  Patch (flags: none)

Description Chris Dumez 2021-04-16 20:30:54 PDT
GPUConnectionToWebProcess::allowsExitUnderMemoryPressure() should check if libWebRTCCodecsProxy is used.
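For illustration, here is a standalone model of the shape of the fix. The member and accessor names (CodecsProxy, hasEncodersOrDecoders, m_busy) are illustrative assumptions, not quotes from the landed patch: the idea is simply that the GPU process must refuse to idle-exit under memory pressure while the WebRTC codecs proxy still has live encoders or decoders.

    #include <memory>

    // Illustrative model, not WebKit's actual classes.
    struct CodecsProxy {
        bool hasEncodersOrDecoders() const { return m_busy; }
        bool m_busy { false };
    };

    struct GPUConnectionModel {
        std::unique_ptr<CodecsProxy> m_libWebRTCCodecsProxy;

        bool allowsExitUnderMemoryPressure() const
        {
            // The real method also checks the other media subsystems.
            if (m_libWebRTCCodecsProxy && m_libWebRTCCodecsProxy->hasEncodersOrDecoders())
                return false; // Codecs in use; exiting now would drop in-flight work.
            return true;
        }
    };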
Comment 1 Chris Dumez 2021-04-16 20:34:48 PDT
Created attachment 426316 [details]
Patch
Comment 2 Chris Dumez 2021-04-16 20:38:45 PDT
Created attachment 426318 [details]
Patch
Comment 3 Darin Adler 2021-04-17 17:40:47 PDT
Comment on attachment 426318 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=426318&action=review

> Source/WebKit/GPUProcess/webrtc/LibWebRTCCodecsProxy.mm:212
> +    m_hasEncodersOrDecoders = !m_encoders.isEmpty() || !m_decoders.isEmpty();

Why is it good to cache this value in a separate boolean? It’s not expensive to compute isEmpty on two hash tables, and you wouldn’t need so many calls to this update function if you dropped this.

Is the issue that this function is called on an appropriate thread or with an appropriate lock and allowsExitUnderMemoryPressure needs to run on a different thread?
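As a standalone model, the simpler version Darin is asking about would look roughly like this (names are illustrative, not WebKit's). It is correct only if the caller runs on the same thread, or under the same lock, as the code mutating the maps; a hash table must not be read while another thread is inserting or removing entries.

    #include <unordered_map>

    // Direct computation, no cached flag: fine single-threaded, racy otherwise.
    struct DirectCheckModel {
        std::unordered_map<int, int> m_encoders;
        std::unordered_map<int, int> m_decoders;

        bool allowsExitUnderMemoryPressure() const
        {
            return m_encoders.empty() && m_decoders.empty();
        }
    };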
Comment 4 Chris Dumez 2021-04-17 17:41:55 PDT
(In reply to Darin Adler from comment #3)
> Comment on attachment 426318 [details]
> Patch
> 
> View in context:
> https://bugs.webkit.org/attachment.cgi?id=426318&action=review
> 
> > Source/WebKit/GPUProcess/webrtc/LibWebRTCCodecsProxy.mm:212
> > +    m_hasEncodersOrDecoders = !m_encoders.isEmpty() || !m_decoders.isEmpty();
> 
> Why is it good to cache this value in a separate boolean? It’s not expensive
> to compute isEmpty on two hash tables, and you wouldn’t need so many calls
> to this update function if you dropped this.
> 
> Is the issue that this function is called on an appropriate thread or with
> an appropriate lock and allowsExitUnderMemoryPressure needs to run on a
> different thread?

Yes, I tried to explain that in the changelog. The issue is that allowsExitUnderMemoryPressure() gets called on the main thread but m_encoders and m_decoders get modified on a background thread. This is also why I am using a std::atomic.
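A standalone model of the pattern Chris describes (member names and key types are illustrative): the maps are touched only on the background codec thread, and the cached atomic flag is the only state the main thread reads. The update line is the one quoted in the review above.

    #include <atomic>
    #include <unordered_map>

    struct CachedFlagModel {
        std::unordered_map<int, int> m_encoders; // codec-thread only
        std::unordered_map<int, int> m_decoders; // codec-thread only
        std::atomic<bool> m_hasEncodersOrDecoders { false };

        // Called on the codec thread after every insertion or removal.
        void updateHasEncodersOrDecoders()
        {
            m_hasEncodersOrDecoders = !m_encoders.empty() || !m_decoders.empty();
        }

        // Called on the main thread under memory pressure.
        bool allowsExitUnderMemoryPressure() const { return !m_hasEncodersOrDecoders; }
    };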
Comment 5 Chris Dumez 2021-04-17 18:35:27 PDT
Created attachment 426358 [details]
Patch
Comment 6 Darin Adler 2021-04-17 19:54:36 PDT
(In reply to Chris Dumez from comment #4)
> Yes, I tried to explain that in the changelog. The issue is that
> allowsExitUnderMemoryPressure() gets called on the main thread but
> m_encoders and m_decoders get modified on a background thread. This is also
> why I am using a std::atomic.

Is there something that prevents a race? What if we are adding entries on a background thread *while* the main thread decides to tell the process to quit?
Comment 7 Chris Dumez 2021-04-17 20:04:14 PDT
(In reply to Darin Adler from comment #6)
> (In reply to Chris Dumez from comment #4)
> > Yes, I tried to explain that in the changelog. The issue is that
> > allowsExitUnderMemoryPressure() gets called on the main thread but
> > m_encoders and m_decoders get modified on a background thread. This is also
> > why I am using a std::atomic.
> 
> Is there something that prevents a race? What if we are adding entries on a
> background thread *while* the main thread decides to tell the process to
> quit?

I don't think there is a way to do this GPUProcess idle-exit without races. The GPUProcess could be deciding to exit on the main thread while new IPC is being received on the IPC thread. This seems unavoidable.

I thought about setting the m_hasEncodersOrDecoders flag to true *before* adding the encoder/decoder to the map instead of after (sketched below). Maybe this is what you meant. I could do that, and it would decrease the chance of a bad idle-exit a little (though not by a lot). Let me know if you'd like me to do that or if you have a better proposal.

If we're unlucky and the GPUProcess exits just as we're starting or about to do work, then the WebProcess will be notified that the connection was severed, and it is the WebProcess's responsibility to reschedule the work.

Note that we only idle-exit on memory pressure, so this decreases the frequency of idle-exits, and it is similar to a jetsam (one that we force because we are currently idle).
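A standalone model of the reordering floated above (names are illustrative): publishing "busy" before the insertion means a concurrent main-thread check can see a stale "busy", but never a stale "idle", for a codec that is being created.

    #include <atomic>
    #include <unordered_map>

    struct ReorderedModel {
        std::atomic<bool> m_hasEncodersOrDecoders { false };
        std::unordered_map<int, int> m_decoders; // codec-thread only

        void createDecoder(int identifier)
        {
            m_hasEncodersOrDecoders = true;    // raise the flag first...
            m_decoders.emplace(identifier, 0); // ...then do the insertion
        }
    };

This narrows the window but does not close it: the main thread can still commit to exiting just before the flag is raised, which is why the comment above calls the race unavoidable and leaves recovery to the WebProcess.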
Comment 9 Chris Dumez 2021-04-17 21:32:51 PDT
Comment on attachment 426358 [details]
Patch

Landing as is for now since we are currently potentially exiting when not idle. I will follow up if needed.
Comment 10 Darin Adler 2021-04-17 21:34:26 PDT
Yes, I didn’t intend my comments to slow you down landing this patch. I was trying to reflect further on whether this can be simplified at all.

By the way, I am pretty sure that std::atomic is overkill for something set on one thread and read on another thread, with no guarantees about races. Pretty sure a plain old bool would do fine.
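One note for the record: in standard C++, a bool written on one thread and read on another without synchronization is formally a data race, so the pragmatic middle ground is an atomic with relaxed memory ordering, which carries exactly the "no guarantees about races" semantics Darin describes at essentially the cost of a plain bool. A minimal sketch:

    #include <atomic>

    std::atomic<bool> hasEncodersOrDecoders { false };

    // Codec-thread writer: relaxed store, no ordering fence needed for a lone flag.
    void setBusy(bool busy)
    {
        hasEncodersOrDecoders.store(busy, std::memory_order_relaxed);
    }

    // Main-thread reader: relaxed load; may observe a slightly stale value,
    // which the surrounding design already tolerates.
    bool isBusy()
    {
        return hasEncodersOrDecoders.load(std::memory_order_relaxed);
    }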
Comment 11 EWS 2021-04-17 21:41:45 PDT
commit-queue failed to commit attachment 426358 [details] to WebKit repository. To retry, please set cq+ flag again.
Comment 12 Chris Dumez 2021-04-17 22:23:10 PDT
Comment on attachment 426358 [details]
Patch

Clearing flags on attachment: 426358

Committed r276222 (236704@main): <https://commits.webkit.org/236704@main>
Comment 13 Chris Dumez 2021-04-17 22:23:13 PDT
All reviewed patches have been landed.  Closing bug.
Comment 14 Chris Dumez 2021-04-17 22:56:42 PDT
(In reply to Darin Adler from comment #10)
> Yes, I didn’t intend my comments to slow you down landing this patch. I was
> trying to reflect further on whether this can be simplified at all.
> 
> By the way, I am pretty sure that std::atomic is overkill for something set
> on one thread and read on another thread, with no guarantees about races.
> Pretty sure a plain old bool would do fine.

I implemented an alternative proposal at Bug 224728. Please let me know what you think.
Comment 15 Radar WebKit Bug Importer 2021-04-24 15:06:23 PDT
<rdar://problem/77110075>