Currently, WebKit has its own modified version of testharness.js; the modification is related to timeout values. It may be better to disable the testharness timeout in testharnessreport.js instead. This would simplify future updates of testharness.js.
Created attachment 265481 [details] Patch
I'm not sure if we can expect that the default timeout will suffice for us. I believe that the tests are designed to work sequentially, but we run them in parallel, which can cause almost 10x slowdown.
(In reply to comment #2)
> I'm not sure if we can expect that the default timeout will suffice for us.
> I believe that the tests are designed to work sequentially, but we run them
> in parallel, which can cause almost 10x slowdown.

AFAICT, there are two testharness timeouts, one for each subtest and one for each file. The values modified in WebKit's testharness.js only apply to the global (per-file) timeout. Setting explicit_timeout=true disables that global timeout, so it should be fine to reset those values to their defaults.

As for the subtest timeout, none is set by default; it is set by the test author, writing something like async_test("title", {timeout: 1000}). If we want to increase these values compared to what is done currently, we can use timeout_multiplier in testharnessreport.js.
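For illustration, here is a rough sketch of the two knobs mentioned above, assuming the testharness.js API of that era (a timeout property on async_test, and a timeout_multiplier setup() option); the concrete values are made up:

    // In an individual test file: a per-test (subtest) timeout opted into by
    // the author. The 1000 ms value is only an example.
    async_test(function(t) {
        setTimeout(t.step_func_done(function() {
            assert_true(true, "finished before the subtest timeout");
        }), 100);
    }, "example async subtest", {timeout: 1000});

    // In testharnessreport.js: scale every author-set subtest timeout without
    // touching the test files themselves (the factor shown is hypothetical).
    setup({timeout_multiplier: 2});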
> AFAICT, there are two testharness timeouts, one for each subtest and one for each file.

Right, what I'm saying is that per-file timeouts are likely wrong for our environment.

> If we want to increase these values compared to what is done currently, we can use timeout_multiplier in testharnessreport.js.

That may be OK, especially if that means that per-file timeouts are also increased.
> That may be OK, especially if that means that per-file timeouts are also
> increased.

timeout_multiplier only applies to the per-test timeout. This is OK, since this patch disables the per-file timeout by setting "explicit_timeout" to true.

Doing a quick grep, it seems that a little more than 100 tests set per-test timeouts.
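A minimal sketch of what disabling the per-file (harness) timeout from testharnessreport.js could look like, assuming the explicit_timeout setup() option discussed above; this is not necessarily the exact code in the attached patch:

    // testharnessreport.js (sketch): tell testharness.js not to arm its own
    // per-file timer; the WebKit test runner's watchdog then remains the only
    // per-file timeout.
    setup({explicit_timeout: true});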
So do we need to keep doing what we are doing, but also set the multiplier in addition?
(In reply to comment #6)
> So do we need to keep doing what we are doing, but also set the multiplier
> in addition?

The first step is this patch. It should not change the testing behavior; it is just a cleaner way to integrate testharness.js.

Then we may consider using the multiplier, which would actually change the testing behavior. It may be the case that a subtest times out but not the whole test file; using the multiplier may make the whole test file time out as well.
I'm not sure if I follow. I do not object to this patch, just don't understand what we are up to.
(In reply to comment #8)
> I'm not sure if I follow. I do not object to this patch, just don't
> understand what we are up to.

This patch does two things:
1. Remove the WebKit custom changes to testharness.js. For future syncing of testharness.js, we will no longer need to port WebKit changes to the new testharness.js. This will also allow bug 150332, which ensures that web-platform-tests tests use the expected testharness.js version.
2. Update testharnessreport.js to disable the testharness.js per-file timeout that would otherwise kick in before the WebKit timeout, due to the changes done in step 1.

Basically, this patch does not change anything, except that it uses testharness.js/testharnessreport.js in a proper manner.
Let's consider a test that currently takes 15 seconds, and doesn't have a per-file timeout. It currently passes on Mac, because run-webkit-tests timeout is 30 seconds. What will keep it passing after these changes?
(In reply to comment #10)
> Let's consider a test that currently takes 15 seconds, and doesn't have a
> per-file timeout. It currently passes on Mac, because run-webkit-tests
> timeout is 30 seconds.
>
> What will keep it passing after these changes?

Such a test will see no change, as it is not testharness.js-based. A test with testharness.js currently always has a per-file js-controlled timeout, which is set to a big value. After this change, testharness.js tests will no longer have a per-file js-controlled timeout at all.
Comment on attachment 265481 [details]
Patch

Clearing flags on attachment: 265481

Committed r192461: <http://trac.webkit.org/changeset/192461>
All reviewed patches have been landed. Closing bug.
> A test with testharness.js currently always has a per-file js-controlled timeout, which is set to a big value.

In comment 9, you said the opposite - that the patch disables a "per-file timeout that would otherwise kick in before the WebKit timeout".

I still don't understand this change. It may be correct, but the explanations are self-contradictory.
(In reply to comment #14)
> > A test with testharness.js currently always has a per-file js-controlled timeout, which is set to a big value.
>
> In comment 9, you said the opposite - that the patch disables a "per-file
> timeout that would otherwise kick in before the WebKit timeout".

I am sorry if my explanations are not clear enough, but I do not see any contradiction between c9 and c14. Here is another try.

Before this patch, the situation is as follows:
- testharness.js sets a per-file JS timeout of 1000 seconds (c14)
- WTR sets a WK timeout of 30 seconds in debug mode
- testharnessreport.js does not change the testharness.js JS timeout
Conclusion: the testharness.js JS timeout never kicks in.

After this patch, the situation can be described as follows:
- testharness.js sets a per-file JS timeout of 10 seconds
- WTR sets a WK timeout of 30 seconds in debug mode, hence the JS timeout would kick in (c9), except that...
- testharnessreport.js disables the testharness.js JS timeout
Conclusion: the testharness.js JS timeout never kicks in.

Is that clearer?
Where does the phrase "per-file timeout" come from? I don't see it in the code. What this patch does is change the default timeout in testharness.js, not a "per-file timeout".
(In reply to comment #16)
> Where does the phrase "per-file timeout" come from? I don't see it in the code.

Ah, maybe that is where the confusion comes from. This term was only coined as part of this discussion (#c4).

To summarize, each testharness.js test is guarded by 3 different timeout mechanisms:
1. The WTR timeout (global "per-test-file" timeout).
2. testharness.js Tests.timeout_id (global "per-test-file" timeout, default length defined by harness_timeout).
3. testharness.js Test.timeout_id ("per-test" timeout, disabled by default, can be set for each test/async_test explicitly in the HTML test file).

What this patch does is disable the second timeout, Tests.timeout_id.

> What this patch does is change the default timeout in testharness.js, not a
> "per-file timeout".

It does disable the per-file timeout, because this patch also ensures Tests.timeout_id is always cleared during the testharness setup phase.
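To make the distinction between mechanisms 2 and 3 concrete, here is a simplified, purely illustrative sketch of the two JS-side timers; it is not the actual testharness.js source, and the 10-second default is the value mentioned in c15:

    var timeout_multiplier = 1;  // default, can be raised via setup()

    // Mechanism 2 (Tests.timeout_id): one timer armed for the whole file,
    // unless explicit_timeout is set. Default length assumed to be 10 s.
    var tests_timeout_id = setTimeout(function() {
        // Every test still running is reported as TIMEOUT and the file ends.
    }, 10000);

    // Mechanism 3 (Test.timeout_id): armed per test, and only when the author
    // passed a timeout, e.g. async_test(f, "name", {timeout: 1000}).
    var test_timeout_id = setTimeout(function() {
        // Only this one test is reported as TIMEOUT.
    }, 1000 * timeout_multiplier);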
> 2. testharness.js Tests.timeout_id (global "per-test-file" timeout, default length defined by harness_timeout).
> 3. testharness.js Test.timeout_id ("per-test" timeout, disabled by default, can be set for each test/async_test explicitly in the HTML test file).

What is the difference between "per-test-file" and "per-test"?
> What is the difference between "per-test-file" and "per-test"?

"Per-test-file" refers to the execution of a whole HTML file; "per-test" refers to the execution of the JS code inside a single test() or async_test() block.

As can be seen in LayoutTests/streams/reference-implementation, a single HTML test file may contain several tests. testharness.js defines Tests and Test to handle that.
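As an illustration of that distinction, a hypothetical test file along these lines is one "per-test-file" unit (guarded by mechanisms 1 and 2 above), while each test()/async_test() block is its own "per-test" unit:

    // One HTML test file ("per-test-file" unit), three subtests ("per-test" units).
    test(function() {
        assert_equals(1 + 1, 2);
    }, "first synchronous subtest");

    test(function() {
        assert_true(Array.isArray([]));
    }, "second synchronous subtest");

    async_test(function(t) {
        setTimeout(t.step_func_done(function() {}), 0);
    }, "asynchronous subtest");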
https://darobin.github.io/test-harness-tutorial/docs/using-testharness.html contains useful information.