Bug 46503
Summary:   webkitpy.layout_tests.port.mac_unittest.MacTest.test_http_server fails
Product:   WebKit
Component: Tools / Tests
Status:    NEW
Severity:  Normal
Priority:  P2
Version:   528+ (Nightly build)
Hardware:  PC
OS:        OS X 10.5
Reporter:  Eric Seidel (no email) <eric>
Assignee:  Nobody <webkit-unassigned>
CC:        abarth, dpranke
Eric Seidel (no email)
webkitpy.layout_tests.port.mac_unittest.MacTest.test_http_server fails
======================================================================
ERROR: test_http_server (webkitpy.layout_tests.port.mac_unittest.MacTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Projects/CommitQueue/WebKitTools/Scripts/webkitpy/layout_tests/port/port_testcase.py", line 56, in test_http_server
port.start_http_server()
File "/Projects/CommitQueue/WebKitTools/Scripts/webkitpy/layout_tests/port/base.py", line 493, in start_http_server
self._http_server.start()
File "/Projects/CommitQueue/WebKitTools/Scripts/webkitpy/layout_tests/port/apache_http_server.py", line 221, in start
raise Exception('Failed to start http server')
Exception: Failed to start http server
----------------------------------------------------------------------
I suspect this happens if an http server gets left around from a previous run, or if more than one copy of test-webkitpy runs at the same time.
This was seen by the commit-queue:
http://queues.webkit.org/results/3995130
http://queues.webkit.org/results/4098065
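One quick way to check the stale-server theory is to probe the ports before running test-webkitpy. A minimal diagnostic sketch, not part of webkitpy, assuming the layout-test http server uses the usual 8000/8080/8443 ports:

import socket

def port_is_busy(port, host='127.0.0.1'):
    # Returns True if something already accepts connections on host:port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded
    finally:
        s.close()

for port in (8000, 8080, 8443):
    if port_is_busy(port):
        print 'port %d is in use -- a stale httpd may still be running' % port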
Eric Seidel (no email)
Actually, my node has hit this failure 6 times this morning. Maybe it's a recent regression?
Dirk Pranke
trying to run multiple concurrent copies of the layout_tests unit tests isn't necessarily supported or expected to work. I take it we're trying to change that?
Dirk Pranke
Can you provide any more details of the environment? Is this only on commit queue bots? Can you repro it locally?
Adam Barth
(In reply to comment #2)
> trying to run multiple concurrent copies of the layout_tests unit tests isn't necessarily supported or expected to work. I take it we're trying to change that?
The unit tests shouldn't have externally visible side effects. That implies that you can run two copies of them concurrently without running into trouble. If that's not the case, we need to remove the side effects.
Eric Seidel (no email)
Yup, it fails locally. One of the commit-queue nodes is my local machine.
Dirk Pranke
(In reply to comment #5)
> Yup, it fails locally. One of the commit-queue nodes is my local machine.
Ah, I understood that, but I wasn't clear if it was running under the CQ or if you could also repro it by hand.
Dirk Pranke
(In reply to comment #4)
> (In reply to comment #2)
> > trying to run multiple concurrent copies of the layout_tests unit tests isn't necessarily supported or expected to work. I take it we're trying to change that?
>
> The unit tests shouldn't have externally visible side effects. That implies that you can run two copies of them concurrently without running into trouble. If that's not the case, we need to remove the side effects.
Hrm. I understand the goal, but I'm not sure if that's achievable all the time in practice. For example, the particular unit test's whole point is to ensure that calling start_http_server() actually starts the http server on the port we expect it to be running on.
Two obvious potential workarounds are to dynamically choose a port in the hope of avoiding conflicts, or to take out a lock on the port while the test runs. But I can't think of a way to actually test this without it producing any visible effect at all.
Other ideas?
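For the first workaround, a rough sketch of picking a free port dynamically by letting the OS choose one. This is a hypothetical helper, not existing webkitpy code, and start_http_server() would also need to grow a port argument that it does not have today:

import socket

def find_free_port(host='127.0.0.1'):
    # Binding to port 0 asks the kernel for any unused port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, 0))
        return s.getsockname()[1]
    finally:
        s.close()

# Note the race: another process could grab the port between close() here and
# the server's own bind(), so this only reduces conflicts, it doesn't eliminate them.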
Adam Barth
> Hrm. I understand the goal, but I'm not sure if that's achievable all the time in practice. For example, the particular unit test's whole point is to ensure that calling start_http_server() actually starts the http server on the port we expect it to be running on.
Instead of actually starting the server, you can just verify that you would have created the right subprocess, much like all the commit-queue patches don't actually commit to SVN. They just verify that the right SCM.py methods are called.
Dirk Pranke
(In reply to comment #8)
> > Hrm. I understand the goal, but I'm not sure if that's achievable all the time in practice. For example, the particular unit test's whole point is to ensure that calling start_http_server() actually starts the http server on the port we expect it to be running on.
>
> Instead of actually starting the server, you can just verify that you would have created the right subprocess, much like all the commit-queue patches don't actually commit to SVN. They just verify that the right SCM.py methods are called.
I'm torn on this answer. On the one hand, yeah, I can mock out executive.run_command() and verify that the correct command line was invoked, and that might achieve some level of correctness. On the other hand, (a) that's fragile -- it's much too much of a white-box test to make me happy -- and (b) it won't actually test that the http_server did in fact start (making this arguably a functional test instead of a unit test, but this is quibbling over definitions).
So, for example, if Eric's getting failures because there is a stale http server lying around, or because he's running two concurrently, we'll gloss over that failure and then weird things will happen down the road when you run the layout tests.
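To make the mocking idea concrete, here is a self-contained sketch of that style of test. FakeExecutive and start_server() are stand-ins invented for illustration; the real logic in apache_http_server.py is more involved:

import unittest

class FakeExecutive(object):
    # Records command lines instead of executing them.
    def __init__(self):
        self.calls = []

    def run_command(self, args):
        self.calls.append(args)
        return ''

def start_server(executive, docroot, port=8000):
    # Stand-in for the real server-start logic: build the httpd invocation
    # and hand it to the executive rather than spawning a process directly.
    executive.run_command(['httpd', '-f', 'httpd.conf',
                           '-c', 'DocumentRoot %s' % docroot,
                           '-c', 'Listen %d' % port])

class StartServerTest(unittest.TestCase):
    def test_command_line(self):
        executive = FakeExecutive()
        start_server(executive, '/tmp/LayoutTests')
        self.assertEqual(len(executive.calls), 1)
        self.assertTrue('Listen 8000' in executive.calls[0])

if __name__ == '__main__':
    unittest.main()

This runs with no network or filesystem side effects, which is Adam's point, but as Dirk notes it only proves the command line was constructed, not that Apache actually starts.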
Adam Barth
I feel strongly that unit tests should not have external dependencies. If we want to have some other test suite that covers integration, we can do that, but these tests are meant to be run quickly and reliably.
Adam Barth
Again:
https://bugs.webkit.org/show_bug.cgi?id=46544#c8
Adam Barth
Actually, that one is slightly different, but the same idea.
Eric Seidel (no email)
This is failing on the Mac Leopard buildbot:
http://build.webkit.org/builders/Leopard%20Intel%20Release%20(Tests)/builds/25476/steps/webkitpy-test/logs/stdio