Bug 136631 - Buffer images on network process side
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: Page Loading
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords:
Depends on: 136667 137692
Blocks:
Reported: 2014-09-08 08:05 PDT by Antti Koivisto
Modified: 2015-11-30 07:45 PST
CC: 5 users

See Also:


Attachments
patch (13.08 KB, patch)
2014-09-08 08:26 PDT, Antti Koivisto
no flags
patch 2 (13.00 KB, patch)
2014-09-08 09:06 PDT, Antti Koivisto
darin: review+

Description Antti Koivisto 2014-09-08 08:05:47 PDT
We can substantially reduce IPC and decoding time for large images by allowing some buffering.
Comment 1 Antti Koivisto 2014-09-08 08:26:21 PDT
Created attachment 237789 [details]
patch
Comment 2 Antti Koivisto 2014-09-08 09:06:06 PDT
Created attachment 237790 [details]
patch 2
Comment 3 Darin Adler 2014-09-08 09:21:46 PDT
Comment on attachment 237789 [details]
patch

View in context: https://bugs.webkit.org/attachment.cgi?id=237789&action=review

> Source/WebKit2/NetworkProcess/NetworkResourceLoader.cpp:101
> +    , m_bufferingTimer(this, &NetworkResourceLoader::bufferingTimerFired)

I’m surprised we still only have the static member function version of this, not the std::function version, but I guess that’s where we still are!

> Source/WebKit2/NetworkProcess/NetworkResourceLoader.cpp:252
> +        // FIXME: In reality we always get -1.

It would be nice to have a slightly less mysterious version of this comment. What would the right fix be, for example? Stop pretending we have the encoded data length?

Also it’s kind of lame design to have an optional number where “I don’t know” is represented by -1. Seems a little old fashioned to me. Anyway, you’re not adding this, just commenting on it.

If this entire thing doesn’t work anyway, then why add m_bufferDataEncodedDataLength at all? All the code that manipulates it seems confusing.

> Source/WebKit2/NetworkProcess/NetworkResourceLoader.cpp:254
> +        if (encodedDataLength > 0)
> +            m_bufferDataEncodedDataLength += encodedDataLength;

I don’t understand why it’s OK for m_bufferDataEncodedDataLength to just be wrong when encodedDataLength is -1. There doesn’t seem to be a flag here saying “m_bufferDataEncodedDataLength is meaningless”; instead we are using 0 as a magic number to mean that. It seems better to have an “unknown” value and set m_bufferDataEncodedDataLength to that if we ever get a buffer with a -1.

> Source/WebKit2/NetworkProcess/NetworkResourceLoader.cpp:270
> -            sendBuffer(m_bufferedData.get(), m_bufferedData->size());
> +            sendBuffer(m_bufferedData.get(), m_bufferDataEncodedDataLength ? m_bufferDataEncodedDataLength : m_bufferedData->size());

I find this confusing. If we are sending a buffer and we know its size, why would we ever use m_bufferDataEncodedDataLength instead? Would it even be correct to send that instead of the actual buffer size? Would it be helpful in some way? Change log does not shed any light on this.

> Source/WebKit2/NetworkProcess/NetworkResourceLoader.cpp:416
> +    if (m_maximumBufferingTime == 0_ms || m_maximumBufferingTime == std::chrono::milliseconds::max())
> +        return;

I am not sure that either of these checks is needed.

I understand that 0ms is a special magic value to say that we should not buffer at all. But it seems that in that case we would not create m_bufferedData and we would not get here. So I’d think we could assert that instead of doing an early return. If we did want an early return, I would suggest checking m_bufferedData rather than checking m_maximumBufferingTime directly.

Do we need the max optimization? It seems sort of elegant to never call start on the timer, but is it important to optimize? Maybe to keep from putting unnecessary timers in the timer heap?

> Source/WebKit2/NetworkProcess/NetworkResourceLoader.cpp:428
> +    size_t encodedDataLength = m_bufferDataEncodedDataLength ? m_bufferDataEncodedDataLength : m_bufferedData->size();

I don’t understand this rule here either. Same questions as above in the sendBuffer function.

I also wish we could share more code between this and NetworkResourceLoader::sendBuffer and its caller. Both are sending the same message with the same rule about encoded data length.

> Source/WebKit2/Shared/Network/NetworkResourceLoadParameters.cpp:50
> +    , maximumBufferingTime(false)

I think we want 0_ms here, since maximumBufferingTime is a time, not a boolean.
Comment 4 Darin Adler 2014-09-08 09:22:31 PDT
Comment on attachment 237790 [details]
patch 2

See my comments on the older version of this patch. r=me
Comment 5 Antti Koivisto 2014-09-08 12:46:23 PDT
(In reply to comment #3)
> If this entire thing doesn’t work anyway, then why add m_bufferDataEncodedDataLength at all? All the code that manipulates it seems confusing.

I think it may work on some platforms. -1 seems to be coming via callbacks from some newish CFNetwork APIs.

I cleaned it up to be more consistent.
Comment 6 Antti Koivisto 2014-09-08 12:49:27 PDT
> I am not sure that either of these checks is needed.

Got rid of them both. I believe the max-duration timer does get added to the timer heap, but there are probably never enough loads for it to matter. Also, we could optimize such timers away in Timer itself.
Comment 7 Antti Koivisto 2014-09-08 13:02:51 PDT
https://trac.webkit.org/r173394
Comment 8 Simon Fraser (smfr) 2014-09-08 13:29:18 PDT
Shame about the title
Comment 9 Antti Koivisto 2014-09-08 13:48:43 PDT
Oops
Comment 10 Csaba Osztrogonác 2014-09-08 23:43:08 PDT
EFL buildfix landed in http://trac.webkit.org/changeset/173420