We can substantially reduce IPC and decoding time for large images by allowing some buffering.
Created attachment 237789 [details]
Created attachment 237790 [details]
Comment on attachment 237789 [details]
View in context: https://bugs.webkit.org/attachment.cgi?id=237789&action=review
> + , m_bufferingTimer(this, &NetworkResourceLoader::bufferingTimerFired)
I’m surprised we still only have the static member function version of this, not the std::function version, but I guess that’s where we still are!
> + // FIXME: In reality we always get -1.
It would be nice to have a slightly less mysterious version of this comment. What’s the right way to fix this, for example? Stop pretending we have the encoded data length?
Also, it’s kind of a lame design to have an optional number where “I don’t know” is represented by -1. Seems a little old-fashioned to me. Anyway, you’re not adding this, just commenting on it.
If this entire thing doesn’t work anyway, then why add m_bufferDataEncodedDataLength at all? All the code that manipulates it seems confusing.
> + if (encodedDataLength > 0)
> + m_bufferDataEncodedDataLength += encodedDataLength;
I don’t understand why it’s OK for m_bufferDataEncodedDataLength to just be wrong when encodedDataLength is -1. There doesn’t seem to be a flag here saying “m_bufferDataEncodedDataLength is meaningless”; instead we are using 0 as a magic number to mean that. It seems better to have an “unknown” value and set m_bufferDataEncodedDataLength to it if we ever get a buffer with a -1.
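For illustration only, a minimal sketch of the suggested alternative, using std::optional in place of the magic 0/-1 values (the struct and member names here are stand-ins, not the actual WebKit code):

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>

// std::nullopt makes "we don't know the encoded length" a distinct state
// instead of overloading 0 to mean it.
struct BufferState {
    std::optional<size_t> encodedDataLength { 0 };

    // One -1 report from the network layer marks the total as unknown,
    // and it stays unknown from then on.
    void didReceiveData(int64_t reportedEncodedLength)
    {
        if (!encodedDataLength)
            return; // already unknown
        if (reportedEncodedLength < 0)
            encodedDataLength = std::nullopt; // mark as unknown
        else
            *encodedDataLength += static_cast<size_t>(reportedEncodedLength);
    }

    // Fall back to the buffer size only when the encoded length is unknown.
    size_t lengthToSend(size_t bufferSize) const
    {
        return encodedDataLength.value_or(bufferSize);
    }
};
```

This keeps the fallback-to-buffer-size behavior but makes the “meaningless” case explicit rather than implied by a zero.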
> - sendBuffer(m_bufferedData.get(), m_bufferedData->size());
> + sendBuffer(m_bufferedData.get(), m_bufferDataEncodedDataLength ? m_bufferDataEncodedDataLength : m_bufferedData->size());
I find this confusing. If we are sending a buffer and we know its size, why would we ever use m_bufferDataEncodedDataLength instead? Would it even be correct to send that instead of the actual buffer size? Would it be helpful in some way? The change log does not shed any light on this.
> + if (m_maximumBufferingTime == 0_ms || m_maximumBufferingTime == std::chrono::milliseconds::max())
> + return;
I am not sure that either of these checks is needed.
I understand that 0ms is a special magic value to say that we should not buffer at all. But it seems that in that case we would not create m_bufferedData and we would not get here. So I’d think we could assert that instead of doing an early return. If we did want an early return, I would suggest checking m_bufferedData rather than checking m_maximumBufferingTime directly.
Do we need the max optimization? It seems sort of elegant to never call start on the timer, but is it important to optimize? Maybe to keep from putting unnecessary timers in the timer heap?
> + size_t encodedDataLength = m_bufferDataEncodedDataLength ? m_bufferDataEncodedDataLength : m_bufferedData->size();
I don’t understand this rule here either. Same questions as above in the sendBuffer function.
I also wish we could share more code between this and NetworkResourceLoader::sendBuffer and its caller. Both are sending the same message with the same rule about encoded data length.
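Both call sites boil down to the same one-liner, so a shared helper along these lines could hold the rule in one place (the function name is illustrative, not from the patch):

```cpp
#include <cstddef>

// Shared rule from both call sites: report the tracked encoded data length
// when we have one, otherwise fall back to the buffer's actual size.
// (Note this still treats 0 as "unknown", which the earlier comment objects
// to; it only removes the duplication.)
size_t encodedDataLengthToReport(size_t trackedEncodedLength, size_t bufferSize)
{
    return trackedEncodedLength ? trackedEncodedLength : bufferSize;
}
```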
> + , maximumBufferingTime(false)
I think we want 0_ms here, since maximumBufferingTime is a time, not a boolean.
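For what it’s worth, `false` only compiles here because duration types direct-initialize from any arithmetic value, silently yielding zero; a minimal sketch using std::chrono in place of WebKit’s `_ms` literal (the struct is a stand-in for the real parameters type):

```cpp
#include <chrono>

using namespace std::chrono_literals;

// Initializing a duration member with `false` compiles and means 0ms, but
// spelling it 0ms (WebKit: 0_ms) states the intent that "don't buffer" is
// a time value, not a boolean.
struct LoaderParameters {
    std::chrono::milliseconds maximumBufferingTime { 0ms };
};
```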
Comment on attachment 237790 [details]
See my comments on the older version of this patch. r=me
(In reply to comment #3)
> If this entire thing doesn’t work anyway, then why add m_bufferDataEncodedDataLength at all? All the code that manipulates it seems confusing.
I think it may work on some platforms. -1 seems to be coming via callbacks from some newish CFNetwork APIs.
I cleaned it up to be more consistent.
> I am not sure that either of these checks is needed.
Got rid of them both. I believe a max-duration timer does get added to the timer heap, but there are probably never enough loads for it to matter. We could also optimize such timers away in Timer.
Shame about the title
EFL buildfix landed in http://trac.webkit.org/changeset/173420