Cassandra is a decent solution for caching results archives, but it's a pretty terrible solution for storing them permanently. We should use S3 for cold storage of result archives.
<rdar://problem/69092010>
Created attachment 409063 [details] Patch
Created attachment 409085 [details] Patch
Created attachment 409334 [details] Patch
Created attachment 409381 [details] Patch
Comment on attachment 409381 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=409381&action=review

rs=me

is this tested?

> Tools/ChangeLog:24
> + (S3Archiver.save): Save an archive to S3 by it's hash.

Nit: it's => its

> Tools/Scripts/libraries/resultsdbpy/resultsdbpy/model/s3_archiver.py:59
> + ttl_seconds = ttl_seconds or 60 * 60 * 24 * 365

might be a good idea to store this time limit (1 year) in a separate variable, and use that variable here.
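A minimal sketch of the reviewer's suggestion: hoist the one-year default TTL into a named constant rather than computing it inline in `save`. The `S3Archiver` class below is an illustrative stand-in, not the actual class from `s3_archiver.py`.

```python
# Hypothetical sketch; only the ttl_seconds default mirrors the patch.
ONE_YEAR_IN_SECONDS = 60 * 60 * 24 * 365


class S3Archiver(object):
    """Illustrative stand-in for resultsdbpy's S3Archiver."""

    def __init__(self):
        self.saved = {}

    def save(self, archive, archive_hash, ttl_seconds=None):
        # Default to a one-year retention window via the named constant,
        # per the review comment, instead of an inline expression.
        ttl_seconds = ttl_seconds or ONE_YEAR_IN_SECONDS
        self.saved[archive_hash] = (archive, ttl_seconds)
        return ttl_seconds
```

The named constant makes the intent ("keep archives for a year") self-documenting and gives other call sites a single value to reuse.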
(In reply to Aakash Jain from comment #6)
> Comment on attachment 409381 [details]
> Patch
>
> View in context:
> https://bugs.webkit.org/attachment.cgi?id=409381&action=review
>
> rs=me
>
> is this tested?
>
> > Tools/ChangeLog:24
> > + (S3Archiver.save): Save an archive to S3 by it's hash.
>
> Nit: it's => its
>
> > Tools/Scripts/libraries/resultsdbpy/resultsdbpy/model/s3_archiver.py:59
> > + ttl_seconds = ttl_seconds or 60 * 60 * 24 * 365
>
> might be a good idea to store this time limit (1 year) in a separate
> variable, and use that variable here.

Not tested in an automated way, because that would entail mocking S3's API, and I didn't think it was worth the trouble. I did deploy this to our staging instances, though.
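For reference, mocking the S3 interaction needn't require a full S3 fake; a standard-library `MagicMock` can stand in for the boto3 S3 client. The `save_archive` helper and bucket name below are hypothetical; only the `put_object(Bucket=, Key=, Body=)` call shape matches boto3's real S3 client API.

```python
from unittest.mock import MagicMock


def save_archive(s3_client, bucket, archive_hash, data):
    # Store the archive keyed by its hash, as the patch does.
    s3_client.put_object(Bucket=bucket, Key=archive_hash, Body=data)


# Stub out the S3 client entirely; no network or credentials needed.
mock_s3 = MagicMock()
save_archive(mock_s3, 'results-archives', 'deadbeef', b'archive-bytes')

# Verify the upload was issued exactly once with the expected arguments.
mock_s3.put_object.assert_called_once_with(
    Bucket='results-archives', Key='deadbeef', Body=b'archive-bytes')
```

Whether this level of testing is worth the trouble versus a staging deploy is a judgment call, as the comment above notes.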
Created attachment 409422 [details] Patch
Created attachment 409621 [details] Patch
Created attachment 409702 [details] Patch
Committed r267579: <https://trac.webkit.org/changeset/267579> All reviewed patches have been landed. Closing bug and clearing flags on attachment 409702 [details].