Add tests to dump textual representations of entire HTML and SVG JS DOM trees

This is what I originally wanted to do when I wrote window-properties. I've now generalized the technique a bit more, broken most of the JavaScript code out into print-properties.js, and added two new files: html-properties.html and svg-properties.xhtml. These two new files walk and print all properties accessible via the HTML and SVG DOM interfaces.
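For context, the core technique can be sketched roughly as follows. This is a hypothetical illustration, not the actual print-properties.js: recursively enumerate every property reachable from a DOM object and emit one line per property, with a depth cap so cycles in the object graph terminate.

```javascript
// Sketch of a "walk and print all properties" helper (illustrative only).
function printProperties(object, path, depth, emit) {
    if (depth > 2) // cap recursion so cyclic object graphs terminate
        return;
    var names = [];
    for (var name in object)
        names.push(name);
    names.sort(); // stable output regardless of enumeration order
    for (var i = 0; i < names.length; i++) {
        var value = object[names[i]];
        emit(path + "." + names[i] + " : " + typeof value);
        if (typeof value === "object" && value !== null)
            printProperties(value, path + "." + names[i], depth + 1, emit);
    }
}

// In a browser test this would be driven by something like:
//   printProperties(document.createElement("a"), "a", 0,
//                   function (s) { document.write(s + "<br>"); });
```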
Created attachment 10626 [details]
DRT seems not to be dumping fully, but this at least includes the full tests
Created attachment 10627 [details]
A complete version of the tests (with full results)
One warning (which might actually prevent these from landing) is that these tests are quite slow to run. The html-properties test takes almost 40 seconds, IIRC.
We could (and probably should) change this to sort all the properties before printing, and also print the default value of each property.
22.05 secs: fast/js/html-properties.html
9.47 secs: fast/js/svg-properties.xhtml
(In reply to comment #5)
> 22.05 secs: fast/js/html-properties.html
> 9.47 secs: fast/js/svg-properties.xhtml

Assuming the results are from an MBP, that would translate into minutes on my development machine (G4), I guess.
(In reply to comment #6)
> (In reply to comment #5)
> > 22.05 secs: fast/js/html-properties.html
> > 9.47 secs: fast/js/svg-properties.xhtml
>
> Assuming the results are from an MBP, that would translate into minutes on my
> development machine (G4), I guess.

Perhaps. As I mentioned to andersca, you can't really think of this as just two tests, though. Each of these tests is really the equivalent of at least 100-200 separate tests. We could break them out into separate files, one per class, but (a) that would be much slower, and (b) it would make maintenance a PITA.

I've actually augmented the tests on my machine to be even more useful (patch forthcoming) and print default values from each of these DOM elements. I think that the value of these tests far, far outweighs their execution-time cost. (Heck, they also serve as a performance test for our JavaScript engine. :)
Created attachment 10704 [details]
Better version of the tests (actually includes results too)

I've now removed window-properties.html, realizing that it was redundant (its content is covered by the svg-properties and html-properties tests already). I've also added printing of "default" values. (Since each of these DOM elements is newly created, all of its properties should hold the proper default values.)

These tests run with the following timing on my machine:
27.17 secs: fast/js/html-properties.html
17.50 secs: fast/js/svg-properties.xhtml

Unfortunately, HTML duplicates many, many properties on the elements themselves, instead of using prototypes, thus causing redundant printing.
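A hypothetical sketch of what "printing default values" might look like (the function name and format here are illustrative, not from the actual patch): a freshly created element's properties hold their defaults, so dumping name/type/value triples documents the default DOM state.

```javascript
// Format one property as "name : type 'value'" (illustrative sketch).
function formatProperty(element, name) {
    var value = element[name];
    if (value === null)
        return name + " : object (null)";
    if (typeof value === "object")
        return name + " : object (" + value + ")";
    return name + " : " + typeof value + " '" + value + "'";
}
```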
Created attachment 10705 [details]
Just the new test files (much easier to read)
I'm adding ggaren as a CC on this bug, as these tests could be very useful for comparing DOM implementations between Safari and other browsers.
(In reply to comment #5)
> 22.05 secs: fast/js/html-properties.html
> 9.47 secs: fast/js/svg-properties.xhtml

Has anyone profiled these tests with Shark?
Comment on attachment 10704 [details]
Better version of the tests (actually includes results too)

My main comment is that having a test like this is great, but I don't know if it should be part of the regular layout tests. Unlike the table tests, it only tells us if we've completely broken access to a certain property, not whether the property works. Since it tells us what the DOM is, rather than whether the DOM works, this test seems more like a tool we should use periodically than a layout test.

Comments about the code:

You probably know this, but the test still doesn't cover things like events, collections, selection, and parts of the CSS DOM.

This output is a little confusing:

+a.__proto__(HTMLAnchorElement).__proto__(HTMLElement) : object (HTMLElement)

Is 'a' the HTMLAnchorElement, or a.__proto__? What does the colon mean? How does "(HTMLElement)" differ from "object (HTMLElement)"?

Can't you replace isPropertyDefinedOnPrototype by just using the "in" operator on the prototype, or by testing for the existence of prototype[property]? If you're trying to optimize by skipping to the Object prototype first, (a) I'd be surprised to learn that doing so is more efficient than a lookup inside the engine; (b) why not just test the Object prototype first, and then use the prototype chain?

PrintProperties should be "printProperties," since it's not a constructor. Also, it's odd to use a verb for an object's name; "propertyPrinter" is a noun alternative.

r=me. I think this test belongs in WebKitTools.
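The reviewer's suggested simplification could be sketched as follows (the helper name is illustrative, not from the actual patch): the "in" operator already walks the prototype chain, so a hand-rolled check can reduce to a one-liner against the object's prototype.

```javascript
// Sketch: is `property` defined somewhere on the object's prototype chain
// (rather than as an own property)? __proto__ is non-standard but is what
// WebKit tests of this era used; Object.getPrototypeOf is the modern form.
function isDefinedOnPrototype(object, property) {
    return object.__proto__ !== null && property in object.__proto__;
}
```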
They could be useful as layout tests to detect cases where we've accidentally changed the DOM, but it sounds like that benefit might not be worth the time in normal layout test runs. On the other hand, it might be just fine for the buildbot. It would serve as an unintended-DOM-API-change detector of sorts.
Are we going to land this change or not? It's been sitting in the commit queue for *nine months*.
Judging from experience with other "dump everything" tests (e.g. window-properties.html), they turn into a maintenance nightmare and don't catch real bugs. So, I suggest we WONTFIX this.
(In reply to comment #15)
> Judging from experience with other "dump everything" tests (e.g.
> window-properties.html), they turn into maintenance nightmare, and don't catch
> real bugs. So, I suggest to WONTFIX this.

I don't see window-properties.html so much as a maintenance nightmare as the "canary in the coal mine" that tells you when a property changed unexpectedly. What bugs will these new tests catch that window-properties.html did not?
Well, window-properties.html is broken on the buildbot quite often, and never for a good reason. Besides, it doesn't work well with features that can be turned on and off.
Another way to solve this sort of "canary" test is to mark them as auto-updating, such that the buildbot auto-updates the results if they ever change. At least then it's easy to see the current state of things. Or better yet, we just make the bots auto-update and commit any results which don't exist. Then, to fix this particular test (or any test) when it's failing, you just svn rm the results. That still doesn't address "fails for people who compile with different sets of flags," but I still like this kind of canary test.