The expectation for all ietestcenter layout tests dealing with Object.freeze and Object.seal is that they should fail. V8 has implemented Object.freeze and Object.seal, so these tests now pass in Chromium. All of these tests have been added to the LayoutTests/platform/chromium/test_expectations.txt file. The reason for not creating new Chromium baselines is that the text output contains no information other than PASS or FAIL, so listing the tests as expected failures in the expectations file has the same value as new baselines. The benefit of using test_expectations.txt is that once JSC implements Object.freeze and Object.seal we will notice and can remove these lines. If we rebaselined all of these tests instead, we wouldn't notice, and we would be left with a lot of needlessly rebaselined files.
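For reference, entries in the old Chromium test_expectations.txt are plain-text lines of the form "modifiers : test = expectations". The bug number and test paths below are placeholders, not the actual entries from this bug:

```
// Hypothetical entries; real bug IDs and file names differ.
BUGWKNNNNN : ietestcenter/Javascript/15.2.3.8-0-1.html = TEXT
BUGWKNNNNN : ietestcenter/Javascript/15.2.3.9-0-1.html = TEXT
```

The BUGWKNNNNN modifier is what makes these lines easy to find and remove once the underlying bug is fixed.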
FWIW, it would be appropriate to check in our current expectations. That would ensure that these tests don't regress between now and when JSC implements these features. It would also make test_expectations.txt smaller. :) There is a minor risk that JSC will implement these and produce different expected results, but I think that risk is small enough to ignore.
Ojan, the reason for not creating new baselines is that the expected output contains essentially no information: it is three lines that each contain either PASS or FAIL. JSC has FAIL in one of these lines and we have PASS. If we check in our PASS expectations, we will not know when to remove them again, and we will end up with a bunch of needlessly rebaselined files. I don't know which is the bigger pain; if it is easier to have the rebaselined files than to have these lines in the test expectations file, we can just check them in. :)
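To make the information content concrete: the platform-default expected.txt and the Chromium output would differ in a single token. The following pair is a hypothetical sketch, not the literal file contents:

```
--- expected.txt checked in at the top level (matches JSC) ---
PASS
FAIL
PASS

--- actual Chromium/V8 output ---
PASS
PASS
PASS
```

A Chromium baseline would duplicate the second block next to the first, carrying no information beyond "this port passes."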
(In reply to comment #2)
> I don't know which is the bigger pain; if it is easier to have the rebaselined
> files than to have these lines in the test expectations file, we can just
> check them in. :)

I think it is easier. There are many cases where we've done this already. Having duplicate expectation files is not a huge problem. In either case, it's a problem we should solve differently (i.e. we should have a script that runs once a week and removes duplicates).
Why not check in the correct "passing" results into the LayoutTests directory as the expected results for all platforms? Then add these tests to the Skipped files for any ports that fail to pass them. This seems like the more correct way to deal with issues where some ports don't yet implement certain features.
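For comparison, a Skipped-file entry is just a test path in a port's platform directory, optionally preceded by a comment. The port and paths below are hypothetical:

```
# LayoutTests/platform/qt/Skipped (hypothetical port and entries)
# Object.freeze and Object.seal are not implemented yet.
ietestcenter/Javascript/15.2.3.8-0-1.html
ietestcenter/Javascript/15.2.3.9-0-1.html
```

Unlike an expected-failure entry, a skipped test is not run at all, so new regressions in it go unnoticed until it is unskipped.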
(In reply to comment #4)
> Why not check in the correct "passing" results into the LayoutTests directory
> as the expected results for all platforms? Then add these tests to the
> Skipped files for any ports that fail to pass them.

This is not what WebKit has done historically. We've checked in failing expectations instead of skipping, which avoids masking further regressions. This is part of a more general problem: what is the right thing to do when a fallback expectation is wrong (e.g. when a test fails on WebKit Mac but passes on Chrome Mac)? We don't really have a good solution for that case right now other than to check in a Chrome Mac expectation.
JSC support for Object.freeze and Object.seal was added in http://trac.webkit.org/changeset/80378/, and I removed the failing expectations in http://trac.webkit.org/changeset/80687. I'm leaving this bug open since the same thing is happening for Function.prototype.bind, and there are a bunch of lines in test_expectations.txt that still reference this bug.
Function.prototype.bind support was finally added to JSC in http://trac.webkit.org/changeset/95751, and all tests referenced by this bug now pass (since they have correct baselines).