A browser's canvas implementation, text antialiasing, image compression, and several other features can be used to gain additional bits of information to identify a browser user across sites. Some of these factors can likely be avoided to help reduce the uniqueness of these "fingerprints". There are a few questions that should be discussed:

1. What relevant factors exist?
2. Which can be eliminated?
3. Would eliminating these factors introduce performance or quality regressions, or other new issues?
4. Could these regressions be worth it for the additional privacy?

Discuss.
Closing this as invalid. The paper that describes this reports ~5 bits of entropy, which is more or less meaningless in terms of anything approaching unique user tracking. But that ignores the bigger problem in the paper: all of their statistics are based on a sample size of 294 systems. 294 systems is not enough to be even remotely representative of the real world. They would need orders of magnitude more samples for it to be meaningful, at minimum a few hundred thousand systems.
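A quick back-of-the-envelope calculation (my own sketch, not figures from the paper) illustrates both points: ~5 bits only splits users into a few dozen groups, and a 294-system study caps how much entropy you could even observe.

```python
import math

# ~5 bits of entropy partitions users into about 2**5 = 32
# indistinguishable buckets -- nowhere near unique identification.
distinguishable_groups = 2 ** 5  # 32

# Uniquely identifying one of roughly 4 billion internet users
# (a ballpark assumption) would need about log2(4e9) bits by itself.
bits_for_unique_id = math.log2(4e9)  # ~31.9

# A study of N systems can observe at most log2(N) bits of entropy,
# so 294 samples cannot say much about high-entropy fingerprints.
max_observable_bits = math.log2(294)  # ~8.2

print(distinguishable_groups)         # 32
print(round(bits_for_unique_id))      # 32
print(round(max_observable_bits, 1))  # 8.2
```

So even if canvas rendering did carry many more bits in the wild, a sample this small could never measure it.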