Right now, when accelerated compositing is enabled, WebView renders as it normally does (by drawing into its m_backingStoreBitmap), then hands the new bits to CA by wrapping the bitmap in a CGImageRef and calling CACFLayerSetContents. But WebView might modify the bits of m_backingStoreBitmap while CA is still using that CGImageRef, which can cause rendering artifacts (e.g., partial updates, depending on when the bits are copied from the CGImageRef to the GPU). If we instead called CACFLayerSetNeedsDisplay whenever WebView needs to render, and rendered into the CGContextRef that CA passes to the root layer's display callback, I think we wouldn't be in danger of modifying the bits while CA is using them. In short, we'd switch from rendering into m_backingStoreBitmap to rendering into a CA-provided CGContextRef.
<rdar://problem/7888659>