WebKit Bugzilla
RESOLVED FIXED
224709
GPUConnectionToWebProcess::allowsExitUnderMemoryPressure() should check if libWebRTCCodecsProxy is used
https://bugs.webkit.org/show_bug.cgi?id=224709
Chris Dumez
Reported
2021-04-16 20:30:54 PDT
GPUConnectionToWebProcess::allowsExitUnderMemoryPressure() should check if libWebRTCCodecsProxy is used.
Attachments
Patch (11.86 KB, patch), 2021-04-16 20:34 PDT, Chris Dumez, ews-feeder: commit-queue-
Patch (11.92 KB, patch), 2021-04-16 20:38 PDT, Chris Dumez, no flags
Patch (11.92 KB, patch), 2021-04-17 18:35 PDT, Chris Dumez, no flags
Chris Dumez
Comment 1
2021-04-16 20:34:48 PDT
Created attachment 426316 [details]
Patch
Chris Dumez
Comment 2
2021-04-16 20:38:45 PDT
Created attachment 426318 [details]
Patch
Darin Adler
Comment 3
2021-04-17 17:40:47 PDT
Comment on attachment 426318 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=426318&action=review

> Source/WebKit/GPUProcess/webrtc/LibWebRTCCodecsProxy.mm:212
> +    m_hasEncodersOrDecoders = !m_encoders.isEmpty() || !m_decoders.isEmpty();
Why is it good to cache this value in a separate boolean? It’s not expensive to compute isEmpty on two hash tables, and you wouldn’t need so many calls to this update function if you dropped this. Is the issue that this function is called on an appropriate thread or with an appropriate lock and allowsExitUnderMemoryPressure needs to run on a different thread?
Chris Dumez
Comment 4
2021-04-17 17:41:55 PDT
(In reply to Darin Adler from comment #3)
> Comment on attachment 426318 [details]
> Patch
>
> View in context: https://bugs.webkit.org/attachment.cgi?id=426318&action=review
>
> > Source/WebKit/GPUProcess/webrtc/LibWebRTCCodecsProxy.mm:212
> > +    m_hasEncodersOrDecoders = !m_encoders.isEmpty() || !m_decoders.isEmpty();
>
> Why is it good to cache this value in a separate boolean? It’s not expensive
> to compute isEmpty on two hash tables, and you wouldn’t need so many calls
> to this update function if you dropped this.
>
> Is the issue that this function is called on an appropriate thread or with
> an appropriate lock and allowsExitUnderMemoryPressure needs to run on a
> different thread?

Yes, I tried to explain that in the changelog. The issue is that allowsExitUnderMemoryPressure() gets called on the main thread but m_encoders and m_decoders get modified on a background thread. This is also why I am using a std::atomic.
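[Editor's note: a minimal sketch of the pattern under discussion, using hypothetical stand-in names (CodecsProxySketch, Encoder, Decoder) rather than the actual WebKit classes. The maps are mutated only on a background work-queue thread, so the main thread reads a cached std::atomic flag instead of the maps themselves.]

#include <atomic>
#include <cstdint>
#include <unordered_map>
#include <utility>

struct Encoder { }; // hypothetical placeholder types
struct Decoder { };

// Hypothetical stand-in for LibWebRTCCodecsProxy: the encoder/decoder maps
// are touched only on a background (work queue) thread, while the main
// thread needs a cheap, thread-safe answer to "is any codec in use?".
class CodecsProxySketch {
public:
    // Background thread only.
    void addEncoder(uint64_t id, Encoder encoder)
    {
        m_encoders.emplace(id, std::move(encoder));
        updateHasEncodersOrDecoders();
    }

    void removeEncoder(uint64_t id)
    {
        m_encoders.erase(id);
        updateHasEncodersOrDecoders();
    }

    // Main thread, e.g. from allowsExitUnderMemoryPressure(). Reading the
    // maps here would race with the background thread; the cached atomic
    // flag is safe to read.
    bool hasEncodersOrDecoders() const { return m_hasEncodersOrDecoders; }

private:
    void updateHasEncodersOrDecoders()
    {
        m_hasEncodersOrDecoders = !m_encoders.empty() || !m_decoders.empty();
    }

    std::unordered_map<uint64_t, Encoder> m_encoders; // background thread only
    std::unordered_map<uint64_t, Decoder> m_decoders; // background thread only
    std::atomic<bool> m_hasEncodersOrDecoders { false };
};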
Chris Dumez
Comment 5
2021-04-17 18:35:27 PDT
Created attachment 426358 [details]
Patch
Darin Adler
Comment 6
2021-04-17 19:54:36 PDT
(In reply to Chris Dumez from comment #4)
> Yes, I tried to explain that in the changelog. The issue is that
> allowsExitUnderMemoryPressure() gets called on the main thread but
> m_encoders and m_decoders get modified on a background thread. This is also
> why I am using a std::atomic.
Is there something that prevents a race? What if we are adding entries on a background thread *while* the main thread decides to tell the process to quit?
Chris Dumez
Comment 7
2021-04-17 20:04:14 PDT
(In reply to Darin Adler from comment #6)
> Is there something that prevents a race? What if we are adding entries on a
> background thread *while* the main thread decides to tell the process to
> quit?

I don't think there is a way to do this GPUProcess idle-exit without races. The GPUProcess could be deciding to exit on the main thread while there is new IPC getting received on the IPC thread. This seems unavoidable.

I thought about setting the m_hasEncodersOrDecoders flag to true *before* adding the encoder/decoder to the map instead of after. Maybe this is what you meant. I could do that, and it would decrease the chance of a bad idle exit a little (though not by a lot). Let me know if you'd like me to do that or if you have a better proposal.

If we're unlucky and the GPUProcess exits just as we're starting or about to do work, then the WebProcess will be notified that the connection was severed, and it is the WebProcess's responsibility to re-schedule the work.

Note that we only IDLE exit on memory pressure so this frequency of idle-exits and it is similar to a jetsam (that we force because we are currently idle).
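[Editor's note: a sketch of the reordering described above, reusing the hypothetical CodecsProxySketch names from the earlier note. Publishing the flag before the insertion narrows, but does not close, the window in which the main thread could decide to exit while work is about to start.]

// Revised addEncoder() body for the sketch above: set the flag *before*
// touching the map. Once the flag is true, a concurrent
// allowsExitUnderMemoryPressure() check on the main thread reports work in
// flight even if the insertion has not landed yet. An idle-exit decision
// made just before the store can still win, so the race is narrowed, not
// eliminated.
void addEncoder(uint64_t id, Encoder encoder)
{
    m_hasEncodersOrDecoders = true;             // publish "busy" first
    m_encoders.emplace(id, std::move(encoder)); // then perform the insert
}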
Chris Dumez
Comment 8
2021-04-17 20:09:51 PDT
(In reply to Chris Dumez from comment #7)
> Note that we only IDLE exit on memory pressure so this frequency of
> idle-exits and it is similar to a jetsam (that we force because we are
> currently idle).

so this *decreases the* frequency of idle-exits
Chris Dumez
Comment 9
2021-04-17 21:32:51 PDT
Comment on attachment 426358 [details]
Patch

Landing as-is for now since we are currently potentially exiting when not idle. I will follow up if needed.
Darin Adler
Comment 10
2021-04-17 21:34:26 PDT
Yes, I didn’t intend my comments to slow you down landing this patch. I was trying to reflect further on whether this can be simplified at all. By the way, I am pretty sure that std::atomic is overkill for something set on one thread and read on another thread, with no guarantees about races. Pretty sure a plain old bool would do fine.
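[Editor's note: for context on this last point, the C++ memory model treats an unsynchronized write and read of a plain bool on different threads as a data race, which is presumably why the patch reached for std::atomic. A relaxed atomic typically compiles to the same plain load/store instructions on common hardware while keeping the accesses well-defined. A hedged sketch of the trade-off, with hypothetical names:]

#include <atomic>

// One thread stores, another loads, with no ordering guarantees required:
// relaxed operations are sufficient here and usually generate the same code
// as a raw bool access, without the formal data race.
std::atomic<bool> hasWork { false };

void backgroundThreadSide()
{
    hasWork.store(true, std::memory_order_relaxed);
}

bool mainThreadSide()
{
    return hasWork.load(std::memory_order_relaxed);
}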
EWS
Comment 11
2021-04-17 21:41:45 PDT
commit-queue failed to commit attachment 426358 [details] to WebKit repository. To retry, please set cq+ flag again.
Chris Dumez
Comment 12
2021-04-17 22:23:10 PDT
Comment on attachment 426358 [details]
Patch

Clearing flags on attachment: 426358

Committed r276222 (236704@main): <https://commits.webkit.org/236704@main>
Chris Dumez
Comment 13
2021-04-17 22:23:13 PDT
All reviewed patches have been landed. Closing bug.
Chris Dumez
Comment 14
2021-04-17 22:56:42 PDT
(In reply to Darin Adler from comment #10)
> Yes, I didn’t intend my comments to slow you down landing this patch. I was
> trying to reflect further on whether this can be simplified at all.
>
> By the way, I am pretty sure that std::atomic is overkill for something set
> on one thread and read on another thread, with no guarantees about races.
> Pretty sure a plain old bool would do fine.

I implemented an alternative proposal at Bug 224728. Please let me know what you think.
Radar WebKit Bug Importer
Comment 15
2021-04-24 15:06:23 PDT
<rdar://problem/77110075>