Bug 16365

Summary: [cURL] Acid2 test segmentation fault
Product: WebKit
Component: Page Loading
Status: RESOLVED FIXED
Severity: Normal
Priority: P2
Version: 528+ (Nightly build)
Hardware: PC
OS: All
Reporter: Luca Bruno <lethalman88>
Assignee: Nobody <webkit-unassigned>
CC: alp, lethalman88, sandshrew
Keywords: Curl
URL: http://www.webstandards.org/files/acid2/test.html
Attachments:
- avoid curl crash on bad url (flags: none)
- queue job cancellations (flags: none)
- Make Acid2 pass by deferring cancellation (flags: oliver: review+)

Description Luca Bruno 2007-12-09 02:40:51 PST
This happens just when opening the Acid2 test page.

Program received signal SIGSEGV, Segmentation fault.
0x00002b7f6c79b938 in ?? () from /usr/lib/libcurl.so.4
(gdb) bt
#0  0x00002b7f6c79b938 in ?? () from /usr/lib/libcurl.so.4
#1  0x00002b7f6c79cd85 in curl_multi_perform () from /usr/lib/libcurl.so.4
#2  0x00002b7f69d51d8d in WebCore::ResourceHandleManager::downloadTimerCallback
    () from /usr/lib/libWebKitGtk.so.1
#3  0x00002b7f69c7c6b3 in WebCore::TimerBase::fireTimers ()
   from /usr/lib/libWebKitGtk.so.1
#4  0x00002b7f69c7c977 in WebCore::TimerBase::sharedTimerFired ()
   from /usr/lib/libWebKitGtk.so.1
#5  0x00002b7f69d49992 in WebCore::timeout_cb ()
   from /usr/lib/libWebKitGtk.so.1
#6  0x00002b7f6c4c42eb in g_timeout_dispatch () from /usr/lib/libglib-2.0.so.0
#7  0x00002b7f6c4c57a2 in g_main_context_dispatch ()
   from /usr/lib/libglib-2.0.so.0
#8  0x00002b7f6c4c5fe5 in g_main_context_iterate ()
   from /usr/lib/libglib-2.0.so.0
#9  0x00002b7f6c4c62dd in g_main_loop_run () from /usr/lib/libglib-2.0.so.0
#10 0x00002b7f6afa51a2 in IA__gtk_main () at gtkmain.c:1163
#11 0x0000000000402001 in main ()
Comment 1 Luca Bruno 2007-12-09 03:02:27 PST
I realized that cURL crashes when redirecting to a Location: header containing a \r character. Is this a WebCore-related issue, or should we check for that in the curl ResourceHandleManager?
Comment 2 Luca Bruno 2007-12-09 14:56:14 PST
Created attachment 17808 [details]
avoid curl crash on bad url

It's a cURL-related issue: it crashes if the new URL (a redirect in this case) contains a carriage return (\r).
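For reference, the kind of guard the patch describes can be sketched as follows: before handing a redirect target back to curl, reject URLs containing control characters such as \r or \n. This is a minimal sketch, not the actual patch; the helper name is hypothetical.

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: reject redirect URLs containing control
// characters (e.g. a stray '\r' in the Location: header), which
// crashed this libcurl build when fed back in as the new URL.
static bool isSafeRedirectURL(const std::string& url)
{
    for (std::string::size_type i = 0; i < url.size(); ++i) {
        char c = url[i];
        if (c == '\r' || c == '\n' || c == '\0')
            return false;
    }
    return true;
}
```

A caller would check this before setting the new URL on the curl handle and cancel the job if the check fails.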
Comment 3 Maciej Stachowiak 2007-12-11 20:07:37 PST
Comment on attachment 17808 [details]
avoid curl crash on bad url

r=me (I'm assuming this is covered by the existing acid 2 layout test)
Comment 4 Alp Toker 2007-12-12 13:12:43 PST
Comment on attachment 17808 [details]
avoid curl crash on bad url

Hey,

As we discussed there's no evidence that this patch works.

Did it work a couple of weeks ago? If so, it'd be interesting to figure out when we regressed so much that this fix doesn't work any more.

Sounds like someone needs to try this patch with a build from a week or so ago and a clean build.
Comment 5 Luca Bruno 2007-12-13 16:17:25 PST
Created attachment 17884 [details]
queue job cancellations

Test it; it works for me: http://www.webstandards.org/files/acid2/test.html
Please check for regressions on job removal.
Comment 6 Alp Toker 2007-12-13 19:27:03 PST
Created attachment 17885 [details]
Make Acid2 pass by deferring cancellation

This is a simplified fix for the issue found and fixed by Luca's previous patch.

It also prevents data transfer as early as possible to save resources when a job is cancelled.
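The deferral scheme the patch describes can be modeled roughly like this (all names are illustrative, not the landed WebCore code): cancel() only flags the job, the write callback refuses data for flagged jobs so the transfer stops as early as possible (with libcurl, returning 0 from the CURLOPT_WRITEFUNCTION aborts the transfer), and the handle is only removed after the perform step has finished iterating.

```cpp
#include <cassert>
#include <vector>

// Simplified model of deferred cancellation; the real code lives in
// WebCore's curl ResourceHandleManager.
struct Job {
    bool cancelled;
    int bytesReceived;
    Job() : cancelled(false), bytesReceived(0) { }
};

// Write callback: drop data for cancelled jobs. With libcurl,
// consuming 0 bytes here aborts the transfer.
int writeCallback(Job& job, int incomingBytes)
{
    if (job.cancelled)
        return 0; // abort: consume nothing
    job.bytesReceived += incomingBytes;
    return incomingBytes;
}

// cancel() never removes the job directly -- it only marks it, so a
// cancellation arriving while curl_multi_perform() walks the handle
// list cannot invalidate curl's internal state (the crash above).
void cancel(Job& job)
{
    job.cancelled = true;
}

// After the perform step returns, cancelled jobs are removed safely.
void removeCancelledJobs(std::vector<Job*>& jobs)
{
    std::vector<Job*> live;
    for (std::vector<Job*>::size_type i = 0; i < jobs.size(); ++i) {
        if (!jobs[i]->cancelled)
            live.push_back(jobs[i]);
    }
    jobs.swap(live);
}
```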
Comment 7 Oliver Hunt 2007-12-13 19:34:50 PST
Comment on attachment 17885 [details]
Make Acid2 pass by deferring cancellation

r=me
Comment 8 Alp Toker 2007-12-13 19:38:13 PST
Landed in r28709. Thanks Luca!
Comment 9 Luca Bruno 2007-12-14 02:59:19 PST
Oh, very cool patch, Alp! I saw from the patch that it could fix another bug, and in fact it did.
Remember how, if you opened pages by clicking links while the page was still loading, foreign data (e.g. an image) was displayed instead of the page? That now seems fixed as well.
You did a good job.
Comment 10 Andrew Wellington 2007-12-14 03:05:13 PST
*** Bug 16341 has been marked as a duplicate of this bug. ***
Comment 11 zaheer 2007-12-17 06:44:21 PST
Hi, in my tests with the patch I see that when cancel is not called in the context of curl_multi_perform (e.g. opening a new URL in the same page), the previous URL request does not end immediately. I was thinking it would be more optimal to queue the cancel job only when it is called in the context of curl_multi_perform.

downloadTimerCallback:
    m_inMultiPerform = true;
    curl_multi_perform(...);
    m_inMultiPerform = false;

cancel:
    if (m_inMultiPerform)
        d->cancelled = true;
    else
        removeFromCurl(job);
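The suggestion above can be sketched as a compilable model (names follow the pseudocode; this is not the landed code, and jobs are reduced to plain integers for illustration):

```cpp
#include <cassert>
#include <set>

// Sketch of the proposed guard: defer cancellation only while
// curl_multi_perform() is running; otherwise remove the job from
// curl immediately so the request ends right away.
class ResourceHandleManagerModel {
public:
    ResourceHandleManagerModel() : m_inMultiPerform(false) { }

    void downloadTimerCallback()
    {
        m_inMultiPerform = true;
        // ... curl_multi_perform() would run here; any cancel()
        // arriving now only flags the job ...
        m_inMultiPerform = false;
    }

    void cancel(int job)
    {
        if (m_inMultiPerform)
            m_deferred.insert(job); // removed after perform returns
        else
            removeFromCurl(job);    // safe to remove right away
    }

    void removeFromCurl(int job) { m_active.erase(job); }
    void add(int job) { m_active.insert(job); }
    bool isActive(int job) const { return m_active.count(job) > 0; }

    bool m_inMultiPerform;
    std::set<int> m_active;
    std::set<int> m_deferred;
};
```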
   
Comment 12 Luca Bruno 2007-12-17 07:10:00 PST
Looks good, can you create a patch? Thanks.