Bug 222553 - Optimize float based sRGB colors that can be round tripped to uint8_t based representations
Status: NEW
Alias: None
Product: WebKit
Classification: Unclassified
Component: New Bugs
Version: WebKit Nightly Build
Hardware: Unspecified
Importance: P2 Normal
Assignee: Sam Weinig
Keywords: InRadar
 
Reported: 2021-03-01 08:38 PST by Sam Weinig
Modified: 2021-03-21 10:43 PDT


Attachments
Patch (4.26 KB, patch)
2021-03-01 08:40 PST, Sam Weinig
Patch (12.83 KB, patch)
2021-03-03 19:06 PST, Sam Weinig
Patch (20.24 KB, patch)
2021-03-09 12:18 PST, Sam Weinig
Patch (49.18 KB, patch)
2021-03-10 09:59 PST, Sam Weinig
sam: review?
ews-feeder: commit-queue-

Description Sam Weinig 2021-03-01 08:38:30 PST
Optimize float based sRGB colors that can be round tripped to uint8_t based representations
Comment 1 Sam Weinig 2021-03-01 08:40:26 PST
Created attachment 421828 [details]
Patch
Comment 2 Sam Weinig 2021-03-03 19:06:10 PST
Created attachment 422171 [details]
Patch
Comment 3 Radar WebKit Bug Importer 2021-03-08 08:39:12 PST
<rdar://problem/75171470>
Comment 4 Sam Weinig 2021-03-08 15:30:35 PST
Darin,

I'm interested in your feedback on this change.

For the most part, it seems good, but it has one weird side effect I don't like. If you create a color like:

    auto color = Color { SRGBA<float> { ... } };

that can be losslessly converted to SRGBA<uint8_t> internally, then calls like:

    auto otherColor = color.colorWithAlphaMultipliedBy(0.5);

potentially have different results, because now the multiplication is taking place on the SRGBA<uint8_t>, and our existing behavior for that function is to keep the color's underlying type the same.

One way to avoid this might be to always convert to SRGBA<float> before doing one of these mutation-creations; if the new color happens to fit in SRGBA<uint8_t>, it gets converted back, but otherwise it stays as SRGBA<float>.

This affects the following member functions:
- colorWithAlpha
- colorWithAlphaMultipliedBy
- invertedColorWithAlpha

Fortunately, the model I describe above would leave SRGBA<uint8_t> as SRGBA<uint8_t> when used in common cases like c = c.invertedColorWithAlpha(1.0), as it would round trip cleanly.
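For concreteness, here is a minimal standalone sketch of the kind of lossless round-trip check being discussed (the helper name and shape are hypothetical, not WebKit's actual API):

```cpp
#include <cmath>
#include <cstdint>
#include <optional>

// Hypothetical helper: returns the uint8_t channel value if `f` (in [0, 1])
// converts to 8 bits and back without loss, std::nullopt otherwise.
std::optional<uint8_t> losslessFloatToByte(float f)
{
    if (f < 0.0f || f > 1.0f)
        return std::nullopt;
    auto byte = static_cast<uint8_t>(std::lround(f * 255.0f));
    // Round trip: does byte / 255 reproduce the original float exactly?
    if (static_cast<float>(byte) / 255.0f == f)
        return byte;
    return std::nullopt;
}
```

For example, 1.0f round trips (255 / 255 == 1.0f exactly), but 0.5f does not: it quantizes to 128, and 128 / 255 ≈ 0.50196 != 0.5.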

What do you think?
Comment 5 Darin Adler 2021-03-08 18:24:12 PST
(In reply to Sam Weinig from comment #4)
> What do you think?

I think that it’s OK that these operations can be a bit costly, and if someone wants to convert it back to SRGBA<uint8_t> for some reason, they can do that explicitly.

I wonder if there’s any practical case that was efficient before and now will allocate and deallocate memory. Is there a good way for us to test for this?
Comment 6 Darin Adler 2021-03-09 09:11:14 PST
Seems like the risks are:

1) Extra memory use and slower performance managing colors in cases that before would round channels to 8-bit integers.

2) Slower code path and slower performance rendering in graphics libraries (OK, I admit it, only really thinking about Core Graphics) in cases that before would round channels to 8-bit integers.

3) Serialization where rounding to 8-bit integers used to be implicit, and is valuable for getting rid of unwanted extra precision, but would need to be done explicitly now.

I doubt that any of those risks are great, but it seems that is where we should be alert for effects.
Comment 7 Darin Adler 2021-03-09 09:15:04 PST
Comment on attachment 422171 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=422171&action=review

I like this change and think we should do it.

> Source/WebCore/platform/graphics/Color.h:225
> +    void setColor(const SRGBA<float>& color, OptionSet<FlagsIncludingPrivate> flags = { });

Omit the name flags here.

> Source/WebCore/platform/graphics/Color.h:227
> +    void setColor(const ColorType& color, OptionSet<FlagsIncludingPrivate> flags = { });

Omit the name flags here.
Comment 8 Sam Weinig 2021-03-09 12:16:57 PST
(In reply to Darin Adler from comment #6)
> Seems like the risks are:
> 
> 1) Extra memory use and slower performance managing colors in cases that
> before would round channels to 8-bit integers.

Most, if not all, of the cases seem to be confined to temporary colors, created to be passed to a GraphicsContext function. 

> 
> 2) Slower code path and slower performance rendering in graphics libraries
> (OK, I admit it, only really thinking about Core Graphics) in cases that
> before would round channels to 8-bit integers.

This really comes down to the behavior of cachedCGColor()/leakCGColor(). For the latter, we always convert to at least SRGBA<float> already before creating a CGColorRef. For the former, there is a small possibility there could be some cache misses, but only in cases where we were already extraordinarily lucky.

> 
> 3) Serialization where rounding to 8-bit integers used to be implicit, and
> is valuable for getting rid of unwanted extra precision, but would need to
> be done explicitly now.

I looked, and couldn't find any cases of this. 

Thanks for the detailed analysis.
Comment 9 Sam Weinig 2021-03-09 12:18:03 PST
Created attachment 422742 [details]
Patch
Comment 10 Sam Weinig 2021-03-09 15:50:26 PST
Hm, interesting. Changing to always convert to SRGBA<float> in those mutation functions does cause some serialization change. Fun.
Comment 11 Sam Weinig 2021-03-09 15:57:55 PST
(In reply to Sam Weinig from comment #10)
> Hm, interesting. Changing to always convert to SRGBA<float> in those
> mutation functions does cause some serialization change. Fun.

But I know how to fix this.
Comment 12 Sam Weinig 2021-03-09 19:49:14 PST
Actually, on second thought, I think these new results are fine. I am just going to update the results.
Comment 13 Darin Adler 2021-03-10 09:54:53 PST
(In reply to Sam Weinig from comment #12)
> Actually, on second thought, I think these new results are fine. I am just
> going to update the results.

I’m not sure I understand your decision and would like to know more; I can wait for the patch, I suppose.

If the same color can be serialized multiple different ways without loss, what determines the choice? If it’s a policy in the code doing the serialization, that’s great and fine with me. If it’s a side effect of how the color was computed, please don’t sign me up.
Comment 14 Sam Weinig 2021-03-10 09:59:21 PST
Created attachment 422841 [details]
Patch
Comment 15 Sam Weinig 2021-03-10 10:55:26 PST
(In reply to Darin Adler from comment #13)
> (In reply to Sam Weinig from comment #12)
> > Actually, on second thought, I think these new results are fine. I am just
> > going to update the results.
> 
> I’m not sure I understand your decision and would like to know more; I can
> wait for the patch, I suppose.

I am not sure I do either, and I actually got cold feet and split the difference, making the actual web-exposed bits retain their old serialization.

> 
> If the same color can be serialized multiple different ways without loss,
> what determines the choice? If it’s a policy in the code doing the
> serialization, that’s great and fine with me. If it’s a side effect of how
> the color was computed, please don’t sign me up.

Ok, so here is the story (I am going to be a bit verbose, partly to re-explain it to myself), because it is a bit of both. This all applies only to sRGB colors; all other colorspaces behave a bit more sanely (though lab() and color(lab ...) are also odd).

The CSS Color 4 spec, and our code historically, differentiate between colors parsed in the legacy color syntaxes (e.g. #FF00AA, rgba(127, 255, 0, 0.5), hsl(...), etc.) 
and "modern" color syntax (e.g. color(srgb 1 0 0 / 0.5)). 

The difference is used in two ways:
1. bits per component
   - Legacy sRGB: 8 bits
   - Modern: 32 bits
2. serialization
   - Legacy sRGB: https://drafts.csswg.org/css-color-4/#serializing-sRGB-values
   - Modern: https://drafts.csswg.org/css-color-4/#serializing-color-function-values
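To illustrate the two forms, here is a rough standalone sketch of what the two serialization paths produce (function names and shapes are hypothetical, not WebKit's actual serializer, which handles many more cases):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Legacy sRGB serialization, per css-color-4 "serializing sRGB values":
// byte channels, rgb()/rgba() form.
std::string serializeLegacy(uint8_t r, uint8_t g, uint8_t b, float alpha)
{
    char buffer[64];
    if (alpha == 1.0f)
        std::snprintf(buffer, sizeof(buffer), "rgb(%d, %d, %d)", r, g, b);
    else
        std::snprintf(buffer, sizeof(buffer), "rgba(%d, %d, %d, %g)", r, g, b, alpha);
    return buffer;
}

// Modern serialization, per css-color-4 "serializing color function values":
// float channels, color(srgb ...) form.
std::string serializeModern(float r, float g, float b, float alpha)
{
    char buffer[64];
    if (alpha == 1.0f)
        std::snprintf(buffer, sizeof(buffer), "color(srgb %g %g %g)", r, g, b);
    else
        std::snprintf(buffer, sizeof(buffer), "color(srgb %g %g %g / %g)", r, g, b, alpha);
    return buffer;
}
```

So the same red would come out as "rgb(255, 0, 0)" on the legacy path and "color(srgb 1 0 0)" on the modern path.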

We encode this difference in Color via the UseColorFunctionSerialization flag, and then in serialization, if an sRGB color has that bit set, we serialize it using "color(srgb ...)".

But that isn't the full story, because we actually always serialize as "color(srgb ...)" if the underlying type is SRGBA<float> (meaning we only check the bit when the underlying type is SRGBA<uint8_t>). The rationale is that if we serialized an SRGBA<float> in the legacy form, it would not round trip correctly if parsed again, since it would be clamped to SRGBA<uint8_t> on parsing.

This system works fine, if we only ever store and use colors as they were parsed. But if we ever need to modify a color and then serialize it out again, we run into potential problems.

The one place we seem to do that, though there may be more we are not testing as well, is in the <canvas> APIs to set a stroke/fill/shadow color using a string, and optional alpha override.

So, if you have:

 ctx.setFillColor('red', 0.5)

we parse "red" as a legacy sRGB color into 'color' (stored as SRGBA<uint8_t>), and if we were to then call:

  auto newColor = color.colorWithAlpha(0.5);

'newColor' would now be a SRGBA<float> (since 0.5 does not round trip cleanly) and would serialize as 'color(srgb 1 0 0 / 0.5)'.

Maybe that's ok, but since canvas is odd enough, I opted to special-case this: the canvas code checks whether the color it just parsed was a legacy sRGB color, and if so eagerly converts the float alpha to a byte alpha, ensuring the representation stays as bytes.
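A sketch of that eager conversion (standalone, with a hypothetical helper name; not the actual patch):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical helper: eagerly convert a float alpha override to the byte
// representation used by legacy sRGB colors, clamping to [0, 1] first.
uint8_t floatAlphaToByte(float alpha)
{
    if (alpha < 0.0f)
        alpha = 0.0f;
    if (alpha > 1.0f)
        alpha = 1.0f;
    return static_cast<uint8_t>(std::lround(alpha * 255.0f));
}
```

With this, setFillColor('red', 0.5) can build the new color as byte channels (e.g. alpha 128) rather than promoting to SRGBA<float>, so it still serializes in the legacy rgba() form.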
Comment 16 Darin Adler 2021-03-10 10:59:00 PST
(In reply to Sam Weinig from comment #15)
> (since 0.5 does not round trip cleanly)

There’s what I missed. Serialization is subtly different from storage for rgba() specifically. This:

    rgba(255, 0, 0, 0.5)

Looks like it exactly matches:

    color(srgb 1 0 0 / 0.5)

But somehow that 0.5 in the first case is secretly actually something more like 0.50196.
Comment 17 Sam Weinig 2021-03-10 11:02:55 PST
(In reply to Darin Adler from comment #16)
> (In reply to Sam Weinig from comment #15)
> > (since 0.5 does not round trip cleanly)
> 
> There’s what I missed. Serialization is subtly different from storage for
> rgba() specifically. This:
> 
>     rgba(255, 0, 0, 0.5)
> 
> Looks like it exactly matches:
> 
>     color(srgb 1 0 0 / 0.5)
> 
> But somehow that 0.5 in the first case is secretly actually something more
> like 0.50196.

Exactly. rgba(255, 0, 0, 0.5), as a "legacy" sRGB color, always uses the SRGBA<uint8_t> form, secretly losing precision.

Note, this is not a requirement of the spec. The spec only mandates a minimum precision of 8 bits; we could totally store rgba(255, 0, 0, 0.5) as SRGBA<float>, but so far I think doing so is probably not worth it.
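The precision loss can be shown numerically with a small standalone sketch (hypothetical helper, not WebKit code):

```cpp
#include <cmath>
#include <cstdint>

// Effective alpha after a requested float alpha (assumed in [0, 1]) is
// stored as a byte, as happens for legacy sRGB colors.
double effectiveByteAlpha(double requested)
{
    auto stored = static_cast<uint8_t>(std::lround(requested * 255.0));
    return stored / 255.0;
}
// effectiveByteAlpha(0.5) yields 128 / 255 ≈ 0.50196, not 0.5.
```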
Comment 18 Sam Weinig 2021-03-11 09:15:48 PST
Darin, mind reviewing this latest version? (I will have to update some platform specific render tree results before landing, but the code is good to review now).
Comment 19 Darin Adler 2021-03-11 09:41:00 PST
Sure, I’ll review. I was slow to do so because I am ambivalent about the workaround of explicit serialization format checks.
Comment 20 Sam Weinig 2021-03-21 10:43:24 PDT
(In reply to Darin Adler from comment #19)
> Sure, I’ll review. I was slow to do so because I am ambivalent about the
> workaround of explicit serialization format checks.

Any thoughts on how to avoid it?