| Summary: | build.webkit.org bots using old-run-webkit-tests should upload results to Chromium's test dashboards | | |
|---|---|---|---|
| Product: | WebKit | Reporter: | Adam Roben (:aroben) <aroben> |
| Component: | Tools / Tests | Assignee: | Nobody <webkit-unassigned> |
| Status: | RESOLVED WONTFIX | | |
| Severity: | Normal | CC: | ojan, ossy, sam |
| Priority: | P2 | Keywords: | InRadar, ToolsHitList |
| Version: | 528+ (Nightly build) | | |
| Hardware: | All | | |
| OS: | All | | |
| Bug Depends on: | 32954 | | |
| Bug Blocks: | | | |
Description
Adam Roben (:aroben)
2011-03-22 11:42:43 PDT
Sorry if this is information overload. We need to upload 3 JSON files (incremental_results.json, full_results.json, and expectations.json) to test-results.appspot.com and then update the HTML for the dashboard to know about the new bots. The hard part is generating the JSON. Once that's done, I'm happy to modify the HTML appropriately. You're probably best off just working backwards from the actual files at http://test-results.appspot.com/testfile?testtype=layout-tests. Here's a somewhat outdated guide on the JSON format for incremental_results.json and expectations.json: https://sites.google.com/a/chromium.org/dev/developers/design-documents/layout-tests-results-dashboard

full_results.json: This is the simplest of the JSON files. It is generated by new-run-webkit-tests, and the server does not modify it. It is a simple mapping from test name to {"expected":"PASS","time_ms":14,"actual":"PASS"}. For non-Chromium ports, "expected" is always "PASS". "actual" depends on the type of failure; possible values are TEXT, IMAGE, IMAGE+TEXT, CRASH, and TIMEOUT. If you want a somewhat incremental way of approaching this, the treemap dashboard only needs this JSON file.

expectations.json: This is generated by new-run-webkit-tests, and the server does not modify it. See the dev.chromium.org link above for info on the format. For non-Chromium ports, though, this file just consists of aggregating the data from all the Skipped lists and listing SKIP and the platform as the modifiers for each of those tests. The expectation is always PASS. Tests that are not in a Skipped list don't need to be included in this file.

incremental_results.json: This is the most complicated of the JSON files. Here's the Python code that generates it: http://trac.webkit.org/browser/trunk/Tools/Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py. results.json and results-small.json contain data on the history of runs. new-run-webkit-tests only uploads incremental_results.json, which is the same format but with only one run's data. On the server side, this data is merged into results.json and results-small.json.

Also, we have a test server that you can upload trash data to as needed: http://test-results-test.appspot.com/. I'll look into whether it's possible to give more people admin access to the servers.

Looks like we already have a bug for this.

*** This bug has been marked as a duplicate of bug 32954 ***

I guess bug 32954 is really a subtask of this one. It is step 1 in getting results from these bots on the dashboards.

*** Bug 59502 has been marked as a duplicate of this bug. ***

We're pretty close to having all bots using NRWT now, at which point this won't be needed. I don't think it's worth the effort to do this at this point.
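For reference, the full_results.json and expectations.json shapes described above can be sketched in Python. This is a minimal illustrative sketch, not the actual new-run-webkit-tests implementation; the function names and the `skipped_by_platform` input are hypothetical, and the exact modifier string format should be checked against real files on test-results.appspot.com.

```python
import json

def full_results_entry(actual, time_ms, expected="PASS"):
    # full_results.json maps each test name to an entry like this.
    # For non-Chromium ports "expected" is always "PASS"; "actual" is
    # PASS or one of TEXT, IMAGE, IMAGE+TEXT, CRASH, TIMEOUT.
    return {"expected": expected, "time_ms": time_ms, "actual": actual}

def expectations_entries(skipped_by_platform):
    # expectations.json for non-Chromium ports: aggregate the Skipped
    # lists, with SKIP plus the platform(s) as modifiers and PASS as the
    # expectation. Tests not in any Skipped list are omitted entirely.
    # `skipped_by_platform` (hypothetical input) maps platform name to
    # the list of tests in that platform's Skipped list.
    platforms_per_test = {}
    for platform, tests in skipped_by_platform.items():
        for test in tests:
            platforms_per_test.setdefault(test, []).append(platform)
    return {
        test: {"expectations": "PASS",
               "modifiers": "SKIP " + " ".join(sorted(platforms))}
        for test, platforms in platforms_per_test.items()
    }

if __name__ == "__main__":
    results = {"fast/dom/example.html": full_results_entry("TEXT", 14)}
    print(json.dumps(results))
    print(json.dumps(expectations_entries(
        {"mac": ["a.html"], "win": ["a.html", "b.html"]})))
```

The treemap dashboard only needs the full_results.json shape, so starting with something like `full_results_entry` is the incremental path mentioned above.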