I recently noticed a number of float casts in the transformation classes. These are likely vestiges from the time when TransformationMatrix.cpp used all float values internally. This bug cleans up these unnecessary casts to clarify the meaning of the code. It may also provide a slight performance benefit, since each cast on a variable requires a couple of additional instructions to convert between data types. Consider the following two cases, one with the cast in place and the other without:

mat.m_matrix[1][1] = (float)cosTheta;
5D409F05  fld   qword ptr [cosTheta]
5D409F08  fstp  dword ptr [ebp-0B0h]
5D409F0E  fld   dword ptr [ebp-0B0h]
5D409F14  fstp  qword ptr [ebp-78h]

mat.m_matrix[1][2] = sinTheta;
5D409F17  fld   qword ptr [sinTheta]
5D409F1A  fstp  qword ptr [ebp-70h]

The double-to-double store is done in two instructions, rather than the four required for the casting case.
Created attachment 178235 [details] Patch
Comment on attachment 178235 [details]
Patch

Clearing flags on attachment: 178235

Committed r136964: <http://trac.webkit.org/changeset/136964>
All reviewed patches have been landed. Closing bug.
This changes the precision of some blending results more than our current test allows (although maybe they're too precise). Example:

The difference between (1.0823489449280947471976333) and (to).m11() is 2.3295135154199897e-08, which exceeds (1e-14)

for a test that blends skewX(45) for values (0, 0.25, 0.5, 1). Is the extra imprecision desirable here?
Brent points out that the difference in output is more plausibly the result of precalculated values that depended on the double->float truncation.
(In reply to comment #4)
> This changes the precision of some blending results more than our current test allows (although maybe they're too precise). Example:
>
> The difference between (1.0823489449280947471976333) and (to).m11() is 2.3295135154199897e-08, which exceeds (1e-14)
>
> for a test that blends skewX(45) for values (0, 0.25, 0.5, 1). Is the extra imprecision desirable here?

The patch simply removed a couple of cases where values were being cast to float before being assigned to a double. It's hard to see how removing that operation could cause a loss of precision; it seems more likely that our existing expected results were based on truncated values produced by the double->float->double conversion. By removing that undesired drop in precision, we should actually be reducing imprecision.