RESOLVED FIXED 149749
[GTK] Update jhbuild's version of harfbuzz
https://bugs.webkit.org/show_bug.cgi?id=149749
Summary [GTK] Update jhbuild's version of harfbuzz
Mario Sanchez Prada
Reported 2015-10-02 09:01:23 PDT
Today I've tried to build the latest WebKit on a machine with a pristine installation of Debian Testing (x86_64), and found this when building harfbuzz:

hb-icu-le.cc: In function 'PortableFontInstance* _hb_icu_le_shaper_font_data_create(hb_font_t*)':
hb-icu-le.cc:68:16: error: invalid new-expression of abstract class type 'PortableFontInstance'
      status);
                ^
In file included from hb-icu-le.cc:31:0:
hb-icu-le/PortableFontInstance.h:32:7: note: because the following virtual functions are pure within 'PortableFontInstance':
 class PortableFontInstance : public LEFontInstance, protected FontTableCache
       ^
In file included from hb-icu-le/PortableFontInstance.h:23:0,
                 from hb-icu-le.cc:31:
/usr/include/x86_64-linux-gnu/layout/LEFontInstance.h:165:25: note: virtual const void* icu_55::LEFontInstance::getFontTable(LETag, size_t&) const
     virtual const void* getFontTable(LETag tableTag, size_t &length) const = 0;
                         ^
Makefile:1140: recipe for target 'libharfbuzz_la-hb-icu-le.lo' failed
make[4]: *** [libharfbuzz_la-hb-icu-le.lo] Error 1
make[4]: Leaving directory '/home/mario/work/WebKit/WebKitBuild/DependenciesGTK/Source/harfbuzz-0.9.14/src'

Apparently we have a pretty old version of harfbuzz there (0.9.14, from 2013), and simply upgrading to a newer one fixes the problem, so I'm proposing upgrading to 0.9.35, which is the version in Debian stable anyway [1].

[1] https://packages.debian.org/search?keywords=harfbuzz&searchon=names&suite=stable&section=all
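In case anyone else hits this, a quick way to confirm that the system ICU is the one declaring the new pure-virtual getFontTable() overload that harfbuzz 0.9.14 does not implement; this is just a sketch, and the header path below matches the Debian multiarch layout from the error above, so it may differ on other systems:

$ # look for the pure-virtual overload added in newer ICU layout headers (path assumed)
$ grep -n 'getFontTable' /usr/include/x86_64-linux-gnu/layout/LEFontInstance.h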
Attachments
Patch proposal (1.65 KB, patch)
2015-10-02 09:14 PDT, Mario Sanchez Prada
no flags
Patch proposal (1.65 KB, patch)
2015-10-06 01:40 PDT, Mario Sanchez Prada
mrobinson: review+
Mario Sanchez Prada
Comment 1 2015-10-02 09:14:20 PDT
Created attachment 262335 [details]
Patch proposal

Updated jhbuild.modules to point to 0.9.35, please review. Thanks!
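For anyone following along, after jhbuild.modules is changed the dependencies need to be rebuilt so the new harfbuzz tarball gets fetched and compiled. A rough sketch using the standard GTK helper scripts, run from the top of the WebKit checkout:

$ # rebuild the jhbuild dependency tree, then WebKitGTK itself
$ Tools/Scripts/update-webkitgtk-libs
$ Tools/Scripts/build-webkit --gtk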
Carlos Alberto Lopez Perez
Comment 2 2015-10-03 04:59:04 PDT
Seems the GTK EWS failed with:

[42/90] Linking CXX executable bin/TestWebKitAPI/WebCore/TestWebCore
FAILED: : && /usr/lib/ccache/c++ -fPIC -std=c++11 -std=c++11 -O3 -DNDEBUG -fno-exceptions -fno-strict-aliasing -fno-rtti -fno-exceptions -fno-strict-aliasing -fno-rtti -Wl,--no-undefined -L/home/slave/WebKit/WebKitBuild/DependenciesGTK/Root/lib64 -fuse-ld=gold -Wl,--disable-new-dtags -fuse-ld=gold -Wl,--disable-new-dtags -shared -Wl,-soname,libwebkit2gtk-4.0.so.37 -o lib/libwebkit2gtk-4.0.so.37.10.0 @CMakeFiles/WebKit2.rsp && :
lib/libWebCoreGTK.a(lib/../Source/WebCore/CMakeFiles/WebCore.dir/platform/graphics/harfbuzz/HarfBuzzShaper.cpp.o):HarfBuzzShaper.cpp:function WebCore::HarfBuzzShaper::shapeHarfBuzzRuns(bool): error: undefined reference to 'hb_icu_get_unicode_funcs'
lib/libWebCoreGTK.a(lib/../Source/WebCore/CMakeFiles/WebCore.dir/platform/graphics/harfbuzz/HarfBuzzShaper.cpp.o):HarfBuzzShaper.cpp:function WebCore::HarfBuzzShaper::collectHarfBuzzRuns(): error: undefined reference to 'hb_icu_script_to_script'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

Not sure what this means...
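A sketch of how one could check whether the harfbuzz built inside the dependencies root actually exports the ICU glue symbols the linker is missing; the path assumes the jhbuild root from the log above, and the library name may be libharfbuzz or libharfbuzz-icu depending on the harfbuzz version:

$ # dump dynamic symbols from whatever harfbuzz libraries the jhbuild root has (paths assumed)
$ nm -D WebKitBuild/DependenciesGTK/Root/lib*/libharfbuzz*.so* 2>/dev/null | grep -E 'hb_icu_(get_unicode_funcs|script_to_script)'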
Mario Sanchez Prada
Comment 3 2015-10-05 03:55:25 PDT
Which version of ICU is installed on the bots? libicu is outside the jhbuild, so the issue could be coming from there.

For the record, this is what I have in my pristine Debian Testing installation:

$ dpkg -l | grep icu
ii  icu-devtools              55.1-5      amd64  Development utilities for International Components for Unicode
ii  libharfbuzz-icu0:amd64    1.0.1-1+b1  amd64  OpenType text shaping engine ICU backend
ii  libicu-dev:amd64          55.1-5      amd64  Development files for International Components for Unicode
ii  libicu55:amd64            55.1-5      amd64  International Components for Unicode
Carlos Alberto Lopez Perez
Comment 4 2015-10-05 04:19:43 PDT
(In reply to comment #3)
> Which version of ICU is installed on the bots? libicu is outside the jhbuild,
> so the issue could be coming from there.
>
> [...]

libicu-dev=52.1-8 (from Debian stable).

$ dpkg -l | grep icu
ii  icu-devtools              52.1-8      amd64  Development utilities for International Components for Unicode
ii  libharfbuzz-icu0:amd64    0.9.35-2    amd64  OpenType text shaping engine ICU backend
ii  libicu-dev:amd64          52.1-8      amd64  Development files for International Components for Unicode
ii  libicu48:amd64            4.8.1.1-10  amd64  International Components for Unicode
ii  libicu52:amd64            52.1-8      amd64  International Components for Unicode
Mario Sanchez Prada
Comment 5 2015-10-05 04:46:17 PDT
So it has an older version indeed. Any chance that harfbuzz got configured without the ICU unicode callbacks? I've checked, and these missing functions are declared in src/hb-icu.h, which should be built if HAVE_ICU is defined.

I think it would be great if you could check the config.log on the bots in the WebKitBuild/DependenciesGTK/Source/harfbuzz-0.9.35 directory and see what it did.
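A sketch of the kind of check I mean, assuming the dependencies have already been rebuilt with the new tarball (adjust the version in the directory name otherwise):

$ # see whether configure detected ICU when building harfbuzz (directory name assumed)
$ grep -i 'icu' WebKitBuild/DependenciesGTK/Source/harfbuzz-0.9.35/config.log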
Mario Sanchez Prada
Comment 6 2015-10-05 08:18:18 PDT
By the way, trying to answer Martin's questions on IRC the other day, I ran the tests locally and, even though most of the tests passed, I got around ~230 tests crashing plus a bunch of failures (~360 tests not passing in total), suggesting that something is not quite right, as the build bots don't show anywhere near that amount of failures.

It might be something in my environment, though, so I'm posting below a link to a tarball containing the results of the tests I ran, in case someone wants to take a look until I get time to look deeper into this issue:

https://drive.google.com/file/d/0B6Gdj3EoWfFLRFNSejRIMlViY28/view?usp=sharing
Carlos Alberto Lopez Perez
Comment 7 2015-10-05 08:44:19 PDT
(In reply to comment #5)
> [...]
> I think it would be great if you could check the config.log on the bots in
> the WebKitBuild/DependenciesGTK/Source/harfbuzz-0.9.35 directory and see what
> it did.

* This is from the "GTK EWS": $WebKit/DependenciesGTK/Source/harfbuzz-0.9.14/config.log

http://sprunge.us/eNMK
Carlos Alberto Lopez Perez
Comment 8 2015-10-05 08:49:02 PDT
(In reply to comment #6)
> By the way, trying to answer Martin's questions on IRC the other day, I ran
> the tests locally and, even though most of the tests passed, I got around
> ~230 tests crashing plus a bunch of failures (~360 tests not passing in
> total) [...]

Could be caused by another library or by an issue in your environment. What I suggest is (see the sketch below):

1) Run all the layout tests two or three times. Keep the results that repeat (to discard flakies).
2) Apply your patch.
3) Run the tests again two or three times. Keep the results that repeat (to discard flakies).
4) Compare the diff between 3 and 1.
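Something along these lines; a rough sketch only, where run-webkit-tests is the standard WebKit script and the output file names are just illustrative:

$ # two "before" runs on unpatched trunk (file names are made up for this example)
$ Tools/Scripts/run-webkit-tests --gtk 2>&1 | tee before-run1.txt
$ Tools/Scripts/run-webkit-tests --gtk 2>&1 | tee before-run2.txt
$ # apply the jhbuild.modules patch, rebuild the dependencies, repeat as after-run1.txt / after-run2.txt
$ # keep only the lines that repeat in both runs (to discard flakies), then compare before/after
$ diff <(sort before-run*.txt | uniq -d) <(sort after-run*.txt | uniq -d)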
Mario Sanchez Prada
Comment 9 2015-10-05 09:11:11 PDT
(In reply to comment #7)
> * This is from the "GTK EWS": $WebKit/DependenciesGTK/Source/harfbuzz-0.9.14/config.log
>
> http://sprunge.us/eNMK

Thanks. So harfbuzz is compiled with ICU support, so it has to be WebKit not picking up the right flags, as these functions have been there for a long time already.

Btw, it looks similar/related to these older bugs:

https://bugs.webkit.org/show_bug.cgi?id=108174
https://bugs.webkit.org/show_bug.cgi?id=116978

Perhaps all the EWS needs is a fresh rebuild after all? I can see the EWS is named ltilve-gtk-ews. Adding Loren into the loop, in case he can help.
Mario Sanchez Prada
Comment 10 2015-10-05 09:34:52 PDT
(In reply to comment #8)
> Could be caused by another library or by an issue in your environment.
>
> What I suggest is:
> [...]

Just to be clear, there is no patch to be applied here: all I did was update the version of harfbuzz, because otherwise I simply could not build WebKit :)

I could try to downgrade my version of libicu to match the one on the bots, sure, but before doing that I think it would be worth trying to force a full rebuild on the EWS bot, or even some hands-on checks there, because it still feels like upgrading harfbuzz from that 2-year-old version to the one shipped by Debian stable is not a bad idea.

Anyway, as I said, most of the tests pass, so those timeouts and crashes might be something happening on my machine that doesn't happen on the bots. On this note, it would be interesting (if possible) to manually run the tests on the EWS or a similar environment with the new version of harfbuzz, just to be sure. Is that an option?
Carlos Alberto Lopez Perez
Comment 11 2015-10-05 12:04:22 PDT
> Perhaps all the EWS needs is a fresh rebuild after all?

Could be. Let me try to reproduce the failure with your patch and see if a clean build fixes it.

> I can see the EWS is named ltilve-gtk-ews. Adding Loren into the loop, in
> case he can help.

The ltilve EWS is actually the one I pasted the config from.
Carlos Alberto Lopez Perez
Comment 12 2015-10-05 17:12:42 PDT
(In reply to comment #11)
> > Perhaps all the EWS needs is a fresh rebuild after all?
>
> Could be. Let me try to reproduce the failure with your patch and see if a
> clean build fixes it.

Confirmed. I was able to reproduce the build failure, and after a clean build it built just fine.

So a clean build will fix the issue. I guess it is enough to trigger a clean build on the bots after landing this patch.
Mario Sanchez Prada
Comment 13 2015-10-06 01:40:52 PDT
Created attachment 262508 [details]
Patch proposal

(In reply to comment #12)
> Confirmed. I was able to reproduce the build failure, and after a clean build
> it built just fine.
>
> So a clean build will fix the issue. I guess it is enough to trigger a clean
> build on the bots after landing this patch.

Thanks for confirming this, Carlos. Loren has just made sure that the EWS will get a clean build for the next patch, so I'm uploading it again now to trigger that. Let's see.
Carlos Alberto Lopez Perez
Comment 14 2015-10-06 03:55:55 PDT
(In reply to comment #13)
> Thanks for confirming this, Carlos. Loren has just made sure that the EWS
> will get a clean build for the next patch, so I'm uploading it again now to
> trigger that.
>
> Let's see.

This is very difficult to ensure, because you can't select which patch the EWS is going to process. Also, the EWS first builds current trunk, then applies your patch, then rebuilds, so the EWS is always going to fail (I think).

So my recommendation is to just land it and, after landing, trigger a clean build on all the GTK bots (also the EFL ones, if they are affected) from the https://build.webkit.org/ interface.

And ping me or Lorenzo to trigger a clean build on the EWS.
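For reference, forcing a clean local build of the dependencies (which is what fixed the stale link failure above) is roughly this; a sketch, where the directory name is the default the GTK build scripts use:

$ # throw away the stale dependency tree and rebuild everything (default jhbuild root assumed)
$ rm -rf WebKitBuild/DependenciesGTK
$ Tools/Scripts/update-webkitgtk-libs
$ Tools/Scripts/build-webkit --gtk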
Mario Sanchez Prada
Comment 15 2015-10-06 07:27:48 PDT
(In reply to comment #14)
> [...]
> This is very difficult to ensure, because you can't select which patch the
> EWS is going to process. Also, the EWS first builds current trunk, then
> applies your patch, then rebuilds, so the EWS is always going to fail (I
> think).

Argh! Did not know that :/

> So my recommendation is to just land it and, after landing, trigger a clean
> build on all the GTK bots (also the EFL ones, if they are affected) from the
> https://build.webkit.org/ interface.

Sounds like a plan, but I'd rather get someone else's opinion before landing this. I'm still concerned about the ~230 crashes I got when running the tests locally myself.

> And ping me or Lorenzo to trigger a clean build on the EWS.

I will, thanks!
Michael Catanzaro
Comment 16 2015-10-06 07:50:31 PDT
My opinion is that nobody uses WebKitGTK+ with an ancient harfbuzz except the test bots, so if upgrading harfbuzz introduces crashes, then all our users are getting those crashes. Better to have 230 expected crashes than to not realize the crashes are there....
Carlos Alberto Lopez Perez
Comment 17 2015-10-06 08:14:54 PDT
I tested running the full layout tests before and after this patch:

* Before
Regressions: Unexpected text-only failures (52)
Regressions: Unexpected image-only failures (32)
Regressions: Unexpected crashes (38)
Regressions: Unexpected timeouts (93)
Regressions: Unexpected missing results (9)

* After
Regressions: Unexpected text-only failures (50)
Regressions: Unexpected image-only failures (31)
Regressions: Unexpected crashes (39)
Regressions: Unexpected timeouts (77)
Regressions: Unexpected missing results (9)

So I don't see anything that should trigger an alarm. The results are more or less similar.
Carlos Alberto Lopez Perez
Comment 18 2015-10-06 08:16:36 PDT
(In reply to comment #15)
> I'm still concerned about the ~230 crashes I got when running the tests
> locally myself.

Indeed, it would be great if you could find out what is causing such a high number of crashes. I don't think it is harfbuzz.
Michael Catanzaro
Comment 19 2015-10-06 08:37:31 PDT
FYI, last time I got around ~200 crashes, it was because install-dependencies did not install some essential Apache module (mod_ssl?) on Fedora. I don't remember if I fixed that or not.
Mario Sanchez Prada
Comment 20 2015-10-07 03:32:44 PDT
(In reply to comment #16)
> My opinion is that nobody uses WebKitGTK+ with an ancient harfbuzz except the
> test bots, so if upgrading harfbuzz introduces crashes, then all our users
> are getting those crashes. Better to have 230 expected crashes than to not
> realize the crashes are there....

That's my opinion too, but I was also concerned that something is missing/wrong in my environment, causing those extra crashes, which made me feel unsure about pushing for this. However...

(In reply to comment #17)
> I tested running the full layout tests before and after this patch:
> [...]
> So I don't see anything that should trigger an alarm. The results are more
> or less similar.

...it seems that Carlos (thanks for checking!) is not getting that big of a difference, so it looks like it should be safe to push the newer version after all.

(In reply to comment #18)
> Indeed, it would be great if you could find out what is causing such a high
> number of crashes. I don't think it is harfbuzz.

Me neither. For what it's worth, here is the full backtrace of one of those crashes, from one of the multiple failing media tests (media/video-zoom.html):

http://fpaste.org/275743/42094311

Just in case it was driver related, I moved back to Nouveau (I was using the NVIDIA drivers before), but could not see any difference.

Calvaris also pointed out that VDPAU might be messing things up, so I checked and found that gst-plugins-bad was being built without support for that extension in the internal jhbuild. I installed libvdpau-dev, rebuilt gst-plugins-bad and reinstalled it, but no difference either.

That said, I'm running a bit out of time with other commitments, so even though I will still try to find what's crashing here later on (otherwise I'm pretty much unable to run the tests on a regular basis), I think I'm going to put this off for a little while, knowing that at least it's ok on the bots.

(In reply to comment #19)
> FYI, last time I got around ~200 crashes, it was because
> install-dependencies did not install some essential Apache module (mod_ssl?)
> on Fedora. I don't remember if I fixed that or not.

Interesting. For the record, in case it rings any bell, this is what I have installed on my machine:

$ dpkg -l | grep apache
ii  apache2                 2.4.16-3       amd64  Apache HTTP Server
ii  apache2-bin             2.4.16-3       amd64  Apache HTTP Server (modules and other binary files)
ii  apache2-data            2.4.16-3       all    Apache HTTP Server (common files)
ii  apache2-utils           2.4.16-3       amd64  Apache HTTP Server (utility programs for web servers)
ii  libapache2-mod-bw       0.92-11        amd64  bandwidth limiting module for apache2
ii  libapache2-mod-dnssd    0.6-3.1        amd64  Zeroconf support for Apache 2 via avahi
ii  libapache2-mod-php5     5.6.13+dfsg-2  amd64  server-side, HTML-embedded scripting language (Apache 2 module)

Moving the patch to r? now, as per Carlos' comments. Please review. Thanks!
Carlos Alberto Lopez Perez
Comment 21 2015-10-07 03:54:44 PDT
(In reply to comment #20)
> Me neither. For what it's worth, here is the full backtrace of one of those
> crashes, from one of the multiple failing media tests (media/video-zoom.html):
>
> http://fpaste.org/275743/42094311

It seems to crash in something GStreamer related.

> Just in case it was driver related, I moved back to Nouveau (I was using the
> NVIDIA drivers before), but could not see any difference.

If you are using our internal jhbuild (which you should be for running the layout tests), then the graphics driver you use shouldn't matter: we build Mesa's gallium-llvmpipe software rasterizer and preload it before running the tests, so all the tests should be running on the gallium-llvmpipe driver.

> Calvaris also pointed out that VDPAU might be messing things up, so I checked
> and found that gst-plugins-bad was being built without support for that
> extension in the internal jhbuild. I installed libvdpau-dev, rebuilt
> gst-plugins-bad and reinstalled it, but no difference either.

That could be a possible explanation.

To rule out something related to your drivers, can you try to run this test after exporting the following environment variable?

export USE_NATIVE_XDISPLAY=1

That will run the tests on your current X session and won't preload the Mesa drivers.
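In other words, something like this (a sketch; the single test name is just the one from your backtrace):

$ # run one media test on the native X display instead of the preloaded llvmpipe Mesa
$ export USE_NATIVE_XDISPLAY=1
$ Tools/Scripts/run-webkit-tests --gtk media/video-zoom.html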
Mario Sanchez Prada
Comment 22 2015-10-08 05:37:36 PDT
(In reply to comment #21)
> It seems to crash in something GStreamer related.

Yes, but that's for the media tests only. If you check the results I uploaded and linked from comment #6, you'll see I'm getting crashes in other places too :(

> If you are using our internal jhbuild (which you should be for running the
> layout tests), then the graphics driver you use shouldn't matter [...]

Good to know, and one less thing to worry about then. Thanks.

> To rule out something related to your drivers, can you try to run this test
> after exporting the following environment variable?
>
> export USE_NATIVE_XDISPLAY=1
>
> That will run the tests on your current X session and won't preload the Mesa
> drivers.

Tried that, but I still got the same funky set of weird crashes, meaning I will have to keep investigating what's wrong with my environment. Thanks anyway for the suggestion; I did not know about that envvar either.
Mario Sanchez Prada
Comment 23 2015-10-09 02:18:06 PDT
Ping reviewers? (adding Carlos and Gustavo as well, forgot to do it before)
Michael Catanzaro
Comment 24 2015-10-10 11:16:07 PDT
*** Bug 150001 has been marked as a duplicate of this bug. ***
Mario Sanchez Prada
Comment 25 2015-10-12 04:58:36 PDT
Pinging again :), since a new bug (bug 150001) has been filed for the same issue.

I personally don't see any harm in this, since it should be a minor change that does not introduce regressions (as per comment #17) and should make life easier for people using the version of harfbuzz included in Debian stable. The only thing to remember is to manually trigger a clean build on the bots after landing, but I will make sure to coordinate with clopez to make that happen once it's landed.

Also, about the scary ~200 crashes I was getting while running the tests: I can happily say I found the issue after a couple of hours investigating it today. In the end it was a bad combination of having a newer version of cpp (5.2.1 on my machine vs 4.9.2 on the bots) with a too-old version of gst-plugins-bad in the internal jhbuild. But that's a separate issue, for which I've already filed a new bug:

https://bugs.webkit.org/show_bug.cgi?id=150026

For now, please consider this patch to harfbuzz so that new Debian Testing users can at least compile WebKitGTK without too much trouble :) Thanks!
Mario Sanchez Prada
Comment 26 2015-10-12 05:13:24 PDT
(In reply to comment #22)
> (In reply to comment #21)
> > It seems to crash in something GStreamer related.
>
> Yes, but that's for the media tests only. If you check the results I uploaded
> and linked from comment #6, you'll see I'm getting crashes in other places
> too :(

I misinterpreted the results, sorry. You were right, and all those crashes were related to GStreamer, which is actually great news, since I've now fixed that problem too (see bug 150026) :)
Mario Sanchez Prada
Comment 27 2015-10-12 08:31:12 PDT