Make run-webkit-tests run one DumpRenderTree instance per processor. Make this the default, but add an option to disable it or limit the number of instances.
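As a rough illustration of the requested behavior (defaulting to one instance per processor, with an option to limit or disable parallelism), here is a small Python sketch. The flag name `--child-processes` is made up for the example and is not an actual run-webkit-tests option:

```python
# Hypothetical sketch: pick the number of parallel DumpRenderTree instances.
# Defaults to the processor count; an explicit --child-processes overrides it,
# and --child-processes=1 effectively disables parallelism.
import argparse
import os

def parse_worker_count(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--child-processes", type=int, default=None)
    args = parser.parse_args(argv)
    if args.child_processes is not None:
        return max(1, args.child_processes)
    return os.cpu_count() or 1

print(parse_worker_count(["--child-processes", "2"]))  # prints 2
print(parse_worker_count([]) >= 1)                     # prints True
```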
This will require us to fix all the inter-test dependencies (which is a good thing).
This will likely require using threading in the run-webkit-tests perl script: http://www.mathematik.uni-ulm.de/help/perl5/doc/perlthrtut.html
From an IRC discussion, some of the difficulties that need to be fixed:
- several Apache instances;
- sharing of the icon database, cookies, and disk cache;
- changing the display profile.

Anything else?
(In reply to comment #3)
> From an IRC discussion, some of the difficulties that need to be fixed:
> - several Apache instances;
> - sharing of the icon database, cookies, and disk cache;
> - changing the display profile.
>
> Anything else?

I expect that we would implement this by running multiple instances of DRT, rather than, say, multiple threads in one DRT. I assume we're all on the same page about that part. :)

One running Apache should be enough to serve all of these DRT instances, no?

The icon DB, cookies, cache, etc. need to find some way to not be shared, yes.

Changing the display profile should ideally be broken out into a separate tool, which knows how to set and reset the display profile. run-webkit-tests could then launch that tool (or just launch DRT with a specific set/reset argument). This was probably all said over IRC, but I've said it here just in case my comments are helpful. :)
My initial thoughts for a design were as follows:
- The main test loop starts by doing an openDumpTool(); just prior to that we would:
  - if the total number of outstanding tests is greater than a threshold, wait for one to finish;
  - then fork another test (essentially just forking the perl program);
  - when a test is done, don't update the $tests{result} hash but rather exit with an error code, and have the parent use that error code to update $tests{result}.

Pros:
- relatively simple

Cons:
- extra (perl) process per test

Other ideas:
- perhaps use real threads (rather than processes) to avoid the overhead; not sure how the SIGPIPE handler would work then.
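The fork-per-test design above can be sketched as follows (in Python rather than the harness's Perl, and with made-up names; the `results` dict stands in for the $tests{result} hash). The parent caps the number of outstanding children and recovers each test's outcome from the child's exit status:

```python
# Sketch of the proposed design: fork one child per test, bound the number
# of outstanding children, and have each child report its result via its
# exit code instead of writing into shared state.
import os

MAX_OUTSTANDING = 4
results = {}      # test -> exit status, analogous to $tests{result}
outstanding = {}  # child pid -> test

def run_one_test(test):
    # Child side: "run" the test, then exit with a code encoding the result.
    # (Toy result for illustration: odd-numbered tests "fail".)
    os._exit(0 if test % 2 == 0 else 1)

def reap_one():
    # Parent side: wait for any child and record its exit status.
    pid, status = os.wait()
    results[outstanding.pop(pid)] = os.WEXITSTATUS(status)

for test in range(10):
    if len(outstanding) >= MAX_OUTSTANDING:
        reap_one()            # wait for one to finish before forking another
    pid = os.fork()
    if pid == 0:
        run_one_test(test)    # never returns; child exits inside
    outstanding[pid] = test

while outstanding:
    reap_one()

print(sorted(results.items()))
# → [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0), (5, 1), (6, 0), (7, 1), (8, 0), (9, 1)]
```

This also makes the "extra process per test" cost concrete: every iteration pays for a full fork, which is the overhead objection raised in the next comment.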
An extra perl process per test sounds like a large amount of overhead.
(In reply to comment #6)
> An extra perl process per test sounds like a large amount of overhead.

It does... Hm. Looking more closely at the current run-webkit-tests, I realize that it attempts to keep the current DRT open for 1000 tests by default. So perhaps the best way of doing it is to keep N instances of DRT open for 1000 tests each. That will leave one perl process coordinating everything (which is fine), but there's some logic needed to figure out which pipe was broken (in case one of the DRTs crashes), and we need some select logic to wait for the output to be available.
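The "N long-lived instances plus a select loop" idea might look roughly like this (a Python sketch with a trivial echoing child standing in for DumpRenderTree; none of this is the real harness). The coordinator select()s across all worker stdout pipes and handles whichever one produces output first:

```python
# Sketch: keep N long-lived worker processes open and use select() to wait
# for output from any of them, instead of forking per test.
import select
import subprocess
import sys

N = 3
CHILD_CODE = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line: break\n"
    "    sys.stdout.write('done ' + line)\n"
    "    sys.stdout.flush()\n"
)

# Stand-in for N DumpRenderTree instances: children that echo each test back.
workers = [
    subprocess.Popen([sys.executable, "-c", CHILD_CODE],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     text=True, bufsize=1)
    for _ in range(N)
]

# Hand one "test" to each worker.
for i, w in enumerate(workers):
    w.stdin.write("test-%d\n" % i)
    w.stdin.flush()

# Select loop: collect results in whatever order the workers finish.
finished = []
while len(finished) < N:
    ready, _, _ = select.select([w.stdout for w in workers], [], [])
    for pipe in ready:
        finished.append(pipe.readline().strip())

for w in workers:
    w.stdin.close()
    w.wait()

print(sorted(finished))  # → ['done test-0', 'done test-1', 'done test-2']
```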
(In reply to comment #1)
> This will require us to fix all the inter-test dependencies (which is a good thing).

Are there some examples of inter-test dependencies that I could check out?
(In reply to comment #7)
> Looking more closely at the current run-webkit-tests I realize that it attempts
> to keep the current DRT open by default for 1000 tests. So perhaps the best way
> of doing it is to keep N instances of DRT open for 1000 tests. That will leave
> one perl process coordinating everything (which is fine) but there's some logic
> needed to figure out which pipe was broken (in case of one of the DRTs
> crashing) and we need some select logic to wait for the output to be available.

It looks like if we simply keep track of all the DRT instances and their file handles and PIDs, then we can use waitpid(-1, WNOHANG) in the SIGPIPE handler to figure out which DRT instance failed. The prologue and epilogue stuff will require some serialization, but right now it looks like only Windows uses them; it'd be a great start if just Mac OS X and Linux could use parallel DRTs. I'll start putting something together.

- Jacob
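The waitpid(-1, WNOHANG) trick can be demonstrated in isolation (again a Python sketch with made-up labels; the real handler would run on SIGPIPE, which is omitted here to keep the example self-contained). A non-blocking waitpid tells the coordinator which tracked child died without stalling if everything is still alive:

```python
# Sketch: use waitpid(-1, WNOHANG) to identify which tracked child process
# exited, without blocking when all children are still running.
import os
import signal
import time

pids = {}  # pid -> instance label, analogous to tracking DRT pipes/PIDs

def find_dead_child():
    """Return (label, exit_status) for a reaped child, or None if all alive."""
    try:
        pid, status = os.waitpid(-1, os.WNOHANG)
    except ChildProcessError:
        return None          # no children at all
    if pid == 0:
        return None          # children exist, but none have exited
    return pids.pop(pid), os.WEXITSTATUS(status)

# Demo: one child "crashes" immediately, one keeps running.
for label, crashes in (("drt-0", True), ("drt-1", False)):
    pid = os.fork()
    if pid == 0:
        if crashes:
            os._exit(3)
        time.sleep(60)
        os._exit(0)
    pids[pid] = label

dead = None
for _ in range(100):         # poll until the crashed child has been reaped
    dead = find_dead_child()
    if dead is not None:
        break
    time.sleep(0.05)

print(dead)                  # → ('drt-0', 3)

# Clean up the surviving child.
for pid in list(pids):
    os.kill(pid, signal.SIGKILL)
    os.waitpid(pid, 0)
```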
(In reply to comment #8)
> Are there some examples of inter-test dependencies that I could check out?

You can try running the tests with the --singly and/or --reverse options, which results in a bunch of failures. I am not aware of any interdependencies that are not caught by this.
(In reply to comment #10)
> (In reply to comment #8)
> > Are there some examples of inter-test dependencies that I could check out?
>
> You can try running the tests with the --singly and/or --reverse options, which
> results in a bunch of failures. I am not aware of any interdependencies that
> are not caught by this.

Or --random, if you are very adventurous.
*** Bug 24460 has been marked as a duplicate of this bug. ***
Created attachment 28811 [details] Just my work in progress on this bug. This attachment isn't ready for review. It has several issues that I need to fix, but I think it is headed in a nice direction and gives a feel for my current approach.
Created attachment 29339 [details] New work in progress.
Unassigning from myself as I won't have time to work on this in the foreseeable future.
We might want to consider marking this as WONTFIX and just switching to new-run-webkit-tests, since that does support running in parallel.
Yup. We're very close to having new-run-webkit-tests ready for prime-time. Closing this as WONTFIX.