See FIXME in HTMLPreloadScanner::scan(). Currently we generate tokens during preloading, then discard them and retokenize during parsing. We should be able to save them and reuse them if the input stream isn't modified by script (document.write()).
Eric, I'm thinking of picking this up. Do you have any high-level implementation thoughts?
I'm thinking of making an HTMLTokenSegmentedString class which subclasses the SegmentedString used by the parser and maintains a cache of tokens in the stream. The tokenizer can just return the next cached token if it exists, and the HTMLTokenSegmentedString will drop its cache when anything is inserted into the string.
You're going to have a lot better luck saving AtomicHTMLTokens. They're way smaller in most cases. We mostly just need a way to buffer them and to detect when to discard the buffer because something changed.
I'm not sure a subclass is needed. You're probably better off making an object that wraps a SegmentedString, like we do in HTMLInputStream. Actually, maybe you should just make HTMLInputStream smarter. :)
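To make the buffer-and-invalidate idea concrete, here's a minimal sketch of the composition approach. The names (CachedToken, TokenReplayBuffer) are invented for illustration and are not WebKit classes; the real thing would hold AtomicHTMLTokens and hang off HTMLInputStream:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <utility>

// Hypothetical stand-in for AtomicHTMLToken: just enough data for the
// parser to replay a token without retokenizing.
struct CachedToken {
    std::string name;
};

// Sketch of a token cache composed with the input stream, per the
// HTMLInputStream-style suggestion above. All names are invented.
class TokenReplayBuffer {
public:
    void append(CachedToken token) { m_tokens.push_back(std::move(token)); }

    // Something was inserted into the stream (e.g. document.write()), so
    // cached tokens may no longer match the real input: discard them all.
    void invalidate() { m_tokens.clear(); }

    bool hasNextToken() const { return !m_tokens.empty(); }

    CachedToken takeNextToken()
    {
        CachedToken token = std::move(m_tokens.front());
        m_tokens.pop_front();
        return token;
    }

private:
    std::deque<CachedToken> m_tokens;
};
```

The tokenizer would consult hasNextToken() before doing any real work, and the input stream would call invalidate() from its insert path.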
*** Bug 64369 has been marked as a duplicate of this bug. ***
There is a problem with reusing the tokens from the scanner. The parsing algorithm requires the tree builder to participate in tokenizing at a few points, for example by switching the tokenizer into the RAWTEXT or RCDATA state after start tags like <script>, <style>, and <textarea>. (There are a few more.)
This means we cannot produce the correct token stream without the tree builder. This could be solved by creating a mock tree builder just for guiding the tokenizer, but it would make preloading more costly. On the other hand, we are running the scanner while we are waiting for the network, so maybe it could be worthwhile.
If those cases are rare, we could invalidate the token stream.
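Here's a toy illustration (not WebKit code) of why the token stream depends on the tree builder: after a <script> start tag the tokenizer must be in a raw-text state, so markup-looking characters inside the element become character tokens rather than tags. The function below is a deliberately tiny approximation; the rawText flag models the state switch the tree builder performs in the real parser.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Returns the tag names a toy tokenizer would emit for 'input'. The
// 'rawText' flag stands in for the tokenizer-state switch that, in the
// real parser, is the tree builder's job.
std::vector<std::string> tagNames(const std::string& input)
{
    std::vector<std::string> names;
    bool rawText = false;
    size_t i = 0;
    while (i < input.size()) {
        // In raw-text mode, '<' only starts a tag if it begins "</script>".
        if (input[i] == '<' && (!rawText || !input.compare(i, 9, "</script>"))) {
            size_t end = input.find('>', i);
            if (end == std::string::npos)
                break;
            std::string name = input.substr(i + 1, end - i - 1);
            rawText = (name == "script"); // the tree builder's contribution
            names.push_back(std::move(name));
            i = end + 1;
        } else {
            ++i; // character data, including stray '<' inside raw text
        }
    }
    return names;
}
```

With the state switch, tokenizing "<script>a<b</script>" yields the tags "script" and "/script"; without it, the "<b" would be misread as the start of a tag, producing a different (wrong) token stream. A scanner that skips the tree builder diverges at exactly these points.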
This is done as part of the threaded parser. See bug 106127. I'm not sure we want to bother trying to do this on the main-thread parser. Although maybe it would make things nice to use CompactHTMLToken even on the main thread. :)
I don't think we plan to do this for the main-thread parser. Closing as a dupe of bug 106127 for now. Feel free to re-open.
*** This bug has been marked as a duplicate of bug 106127 ***