Bug 118977 - [gstreamer] webkit gstreamer src blocks future requests to host?
Summary: [gstreamer] webkit gstreamer src blocks future requests to host?
Status: RESOLVED WORKSFORME
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebKitGTK
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Importance: P2 Normal
Assignee: Nobody
URL: http://www.xeno-canto.org/
Keywords:
Depends on:
Blocks:
 
Reported: 2013-07-22 11:25 PDT by Jonathon Jongsma (jonner)
Modified: 2018-02-09 10:07 PST
CC: 3 users

See Also:


Attachments

Description Jonathon Jongsma (jonner) 2013-07-22 11:25:58 PDT
If you visit a page that contains a lot of HTML5 media elements, WebKit sometimes seems to block all further requests to the same host.

To reproduce in Epiphany:

1. Visit http://www.xeno-canto.org and wait for it to finish loading.
2. The network tab of the inspector shows a GET request for each media file on the page, but the status of each of these requests is 'Pending'.
3. Click the play button for one or more of the audio files under "Latest Additions" (the audio files generally play fine).
4. Click a link to a different page on the same host (e.g. 'Advanced Search' in the site header).

Quite often (though not always), the link will sit in a 'Loading...' state forever and the new page will never open. It doesn't seem entirely deterministic, since it sometimes works.


When it does get into this state, try the following procedure:

1. Right-click the 'Advanced Search' link mentioned above and select 'Open in a new Tab'.
2. The new tab shows a spinning 'Loading' icon but never actually finishes loading.
3. Now close the original tab.
4. The new tab will usually finish loading the advanced search page immediately.

This makes me suspect that new HTTP requests are getting blocked because the queue of pending requests exceeds some per-host connection limit. Loading pages on external sites is never affected when I run into this issue.
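
For reference, the per-host limit I have in mind is libsoup's SoupSession "max-conns-per-host" property, which defaults to a small value. Here is a minimal standalone sketch, assuming the libsoup 2.x API; it is purely illustrative, since WebKit creates and configures its own session internally, and the raised limit is an arbitrary number:

#include <libsoup/soup.h>

int main (void)
{
    SoupSession *session = soup_session_new ();
    int max_per_host = 0;

    /* Read the default per-host connection limit. */
    g_object_get (session, "max-conns-per-host", &max_per_host, NULL);
    g_print ("default max-conns-per-host: %d\n", max_per_host);

    /* If each media element on the page holds a pending GET to the same
       host, every connection slot stays occupied and later requests queue
       behind them, which would match the 'Loading...' hang described above.
       Raising the limit (value chosen arbitrarily) should make the hang
       harder to trigger if this theory is right. */
    g_object_set (session, "max-conns-per-host", 10, NULL);

    g_object_unref (session);
    return 0;
}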

Also, when I close the browser after experiencing this issue, it sometimes takes about 10 seconds for the process to exit, and then the following message is printed on the terminal:

(epiphany-browser:11958): libsoup-WARNING **: Cache flush finished despite 4 pending requests
Comment 1 Jonathon Jongsma (jonner) 2013-07-22 11:56:28 PDT
I should mention that the behavior of Chrome is very similar to WebKitGTK+'s here (e.g. all media files are listed with a pending GET request in the network tab of the inspector). But there is a critical difference between WebKitGTK+ and Chrome after clicking one of the play buttons:

In WebKitGTK+, it seems to re-issue the request for the associated media file (i.e. it gets added to the end of the list of network requests), but the request is still listed as 'Pending' and the size of the resource shows as 0. The file plays fine, so it must be receiving data, but the inspector seems unaware that any data has been received.

In Chrome, clicking a play button also sends a GET request for the associated media file, but Chrome displays the download progress, changes the request's status to 206 Partial Content, and shows the size of the resource.
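
To make the difference concrete, here is a minimal sketch of a byte-range request with libsoup (assuming the libsoup 2.x synchronous API; the media URL and the 64 KiB range are made up for illustration). A server that honours the Range header answers 206 Partial Content, which is what Chrome's inspector shows:

#include <libsoup/soup.h>

int main (void)
{
    SoupSession *session = soup_session_new ();
    /* Hypothetical media URL, for illustration only. */
    SoupMessage *msg = soup_message_new ("GET",
        "http://www.xeno-canto.org/sounds/example.mp3");

    /* Request only the first 64 KiB, roughly what a media player
       does when it seeks within a file. */
    soup_message_headers_set_range (msg->request_headers, 0, 65535);

    soup_session_send_message (session, msg);

    if (msg->status_code == SOUP_STATUS_PARTIAL_CONTENT)
        g_print ("206 Partial Content: received %" G_GOFFSET_FORMAT " bytes\n",
                 msg->response_body->length);
    else
        g_print ("status: %u %s\n", msg->status_code, msg->reason_phrase);

    g_object_unref (msg);
    g_object_unref (session);
    return 0;
}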
Comment 2 Jonathon Jongsma (jonner) 2013-07-22 12:06:40 PDT
hm, perhaps this is related to Bug 85994
Comment 3 Philippe Normand 2013-09-02 02:53:04 PDT
Adding Sergio, our soup cache expert :)
Comment 4 Sergio Villar Senin 2013-09-02 03:05:29 PDT
(In reply to comment #3)
> Adding Sergio, our soup cache expert :)

Well, the cache warning is normal if there are requests still pending to be resolved when the cache is flushed. There might be issues related to the cache, but in any case the cache does not block new requests.

I am not sure which issue this bug is tracking, because we're talking about different things. The issue described in comment #1 is likely caused by our not-quite-correct handling of partial responses (Range requests in general), and that's already covered by the bug mentioned in comment #2 (bug 85994).

Regarding the issue described in the original report, I can confirm that I've seen that behaviour many times indeed, and I think the analysis is correct (we're likely hitting a max-per-host connection limit). In any case, that's not a problem per se, because the limits are set in order not to flood the server. The underlying issue (bad handling of 206 responses) is likely the culprit, so I'd wait for a fix for that issue and then double-check whether this one is fixed.
Comment 5 Philippe Normand 2018-02-09 10:07:27 PST
I haven't managed to reproduce this issue. Please reopen the bug if needed!