There are a number of things WebCore could do differently to reduce the user's exposure to remote tracking mechanisms, but which are not necessarily desirable to implement as default behaviour. At first glance, some of these appear to be a good fit with 'Private Browsing', but really they're addressing a different problem. Private browsing's overriding objective is to ensure no trace of a browsing session is left on your disk; fingerprinting a user's browser and tracking that browser's visits presents a different set of challenges.
Here are some:
- Fuzzing the JS Date object to prevent fingerprinting based on 'typing cadence'.
- The window.screen object is quite revealing.
- Enumerating fonts through calls to a flash plugin can provide quite distinctive results
- Enumerating plugins likewise
- Uniform user-agent headers and navigator object values.
- Third-party cookie handling needs to be stricter if you're in tracking-resistance mode
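To make the uniformity idea concrete, here's a minimal sketch (plain JavaScript, runnable outside a browser) of how pinning the values a page can read to one canonical set makes otherwise distinct clients report identically. The property names mirror navigator/screen, but the canonical values and the report/fingerprint helpers are invented for illustration:

```javascript
// Canonical values a tracking-resistant mode might report. All values here
// are invented for this sketch; a real mode would standardise one set
// across WebKit clients.
const canonicalSurface = {
  userAgent: "Mozilla/5.0 (WebKit; TrackingResistant)", // hypothetical uniform UA
  platform: "WebKit",
  screenWidth: 1280,
  screenHeight: 800,
  colorDepth: 24,
};

// What a page can read: either the client's real values or the canonical set.
function report(realSurface, trackingResistant) {
  return trackingResistant ? canonicalSurface : realSurface;
}

// A toy fingerprint: concatenate everything the page can observe.
function fingerprint(surface) {
  return Object.values(surface).join("|");
}

const clientA = { userAgent: "UA-A", platform: "MacIntel", screenWidth: 2560, screenHeight: 1440, colorDepth: 30 };
const clientB = { userAgent: "UA-B", platform: "Linux x86_64", screenWidth: 1366, screenHeight: 768, colorDepth: 24 };

console.log(fingerprint(report(clientA, false)) === fingerprint(report(clientB, false))); // false: real surfaces differ
console.log(fingerprint(report(clientA, true)) === fingerprint(report(clientB, true)));   // true: resistant mode is uniform
```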
Tracking-resistance also requires a different type of 'state separation' than that entailed by private browsing. For example, you might not want to share SSL session ids from a tracking-resistant session with a normal session. That isn't really an issue with private browsing, which is all about locally detectable traces rather than remotely detectable ones. Obviously this particular example isn't something WebKit could do much about, but it's the kind of problem this option would encounter.
There's a lot more besides and I intend to raise individual bugs for each under this master bug.
This sort of tracking-resistance fits with at least two use cases:
- Tor users who want the properties of their browser in 'anonymous' mode to be indistinguishable from other users of the same browser in 'anonymous' mode.
- Users of 'private browsing' who want an extra level of privacy from remote as well as local sources.
This mode, in and of itself, does not set out to neutralize tracking of the user's browser, because there are a lot of things the client will need to implement itself to ensure success. The purpose of the mode is to take care of things that WebCore/WebKit can do relatively easily and which, otherwise, each client would be left attempting to enforce individually through JS hooking and other messiness (with the attendant risk of fragmenting users into smaller groups than necessary through different implementation choices across WebKit clients). So this isn't proposed as a fingerprinting kill-button. The idea is to identify and implement a set of behaviours that are under WebCore's control and that make individual WebKit-based browsers indistinguishable from each other when in this 'tracking resistant' mode.
Some relevant background stuff:
I'll add more links here as I come across them.
I'm skeptical that doing these things will provide anything more than window dressing, but I certainly don't want to discourage you from trying. See also <https://wiki.mozilla.org/Fingerprinting>.
A note on the approach I hope to take for this:
Open a child bug to make the tracking-resistance option available in WebCore and once that's in start adding features that use it under individual bugs.
The process of adding features that use the option is likely to be lengthy, so under this approach the option would potentially be available to ports while still not doing much.
If it's agreed that this option is something WebKit wants, then I will need some guidance on whether this approach is acceptable or not.
(In reply to comment #2)
> I'm skeptical that doing these things will provide anything more than window dressing, but I certainly don't want to discourage you from trying. See also <https://wiki.mozilla.org/Fingerprinting>.
Well, strictly speaking they are a sort of window-dressing, in that a WebKit client can pretty easily render them useless through one mistake or another. So the option isn't so much KillTrackingDeadWithThis as stuff that WebKit can do to make life easier for careful, fingerprint-conscious client implementations.
I'm very conscious that this can't really progress if it is not considered a good idea to introduce a UniformFingerprint type option to the WebCore API and start adding stuff to it. I'm also aware that a reviewer would be worried about allowing such an option with nothing to put under it yet.
So what do you think would be the best way of moving it along? What would you need to see from me before accepting patches for this option? I was thinking of a patch introducing the option and perhaps two or three more adding functionality that uses it.
Or are there other issues that need to be thrashed out first?
One approach would be to write up a design document that explains what you'd like to do and how you think it will mitigate tracking. We can then discuss the various proposals and their trade-offs. Maybe start a wiki page (similar to Mozilla's page)?
To be clear, I'd like to see us make tracking harder. I'm even ok with window dressing if it makes folks happy. I'd just like us to understand what we're buying and what we're paying for it.
> Here are some:
> - Fuzzing the JS Date object to prevent fingerprinting based on 'typing cadence'.
> - The window.screen object is quite revealing.
> - Enumerating fonts through calls to a flash plugin can provide quite distinctive results
> - Enumerating plugins likewise
> - Uniform user-agent headers and navigator object values.
> - Third-party cookie handling needs to be stricter if you're in tracking-resistance mode
What are your approaches for solving/mitigating these issues? Sorting fonts and plugin lists is a possibility, but that only reduces entropy, so they are still useful for fingerprinting. Even if you remove the font list, an attacker could still have the browser render strings in popular fonts and test for the size of the corresponding elements in order to test whether the font is installed.
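For reference, the metrics-based probe described above can be sketched outside a browser. The `measureWidth` function and width table are stand-ins for rendering a sample string and reading the element's `offsetWidth`; the font names and widths are invented for illustration:

```javascript
// Invented widths (in px) of a sample string under various font-family values.
// In a real page these would come from rendered elements.
const widthTable = {
  "monospace": 600,       // generic fallback width
  "Comic Sans MS": 612,   // differs from fallback => font installed
  "SomeRareFont": 600,    // identical to fallback => font not installed
};

function measureWidth(fontFamily) {
  // Stand-in for: el.style.fontFamily = `'${fontFamily}', monospace`;
  // then reading el.offsetWidth.
  return widthTable[fontFamily] ?? widthTable["monospace"];
}

// A font is deemed installed when its metrics differ from the generic fallback,
// which is why merely hiding the font *list* doesn't stop the probe.
function fontProbablyInstalled(fontFamily) {
  return measureWidth(fontFamily) !== measureWidth("monospace");
}

console.log(fontProbablyInstalled("Comic Sans MS")); // true
console.log(fontProbablyInstalled("SomeRareFont")); // false
```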
(In reply to comment #6)
> > Here are some:
> > - Fuzzing the JS Date object to prevent fingerprinting based on 'typing cadence'.
> > http://arstechnica.com/tech-policy/news/2010/02/firm-uses-typing-cadence-to-finger-unauthorized-users.ars
> > - The window.screen object is quite revealing.
> > (https://bugzilla.mozilla.org/show_bug.cgi?id=418986)
> > - Enumerating fonts through calls to a flash plugin can provide quite distinctive results
> > - Enumerating plugins likewise
> > - Uniform user-agent headers and navigator object values.
> > - Third-party cookie handling needs to be stricter if you're in tracking-resistance mode
> What are your approaches for solving/mitigating these issues?
The plan is to write up a wiki page elaborating each in detail. I hope to have the first draft up within a fortnight. Adam's link ( https://wiki.mozilla.org/Fingerprinting ) fleshes out quite a few of the implementation questions from a Firefox point of view.
My own thinking at the moment is that given the following considerations:
- Most of the problems cannot be solved without some measure of usability impact, great or small
- Some problems are outside WebKit's control and require client implementation
The first thing the wiki page will have to do is define what it can and cannot solve and decide whether the notion of a uniform fingerprint mode is still feasible or useful.
There may be a group of measures, like David Hyatt's fix to CSS history leaking, that can be implemented as default behaviour with little or no user impact.
There will be some measures, like managing the Date() object or the plugins list, that could generally be OK for day-to-day use but may adversely affect specialized use cases on some sites. This implies a usability trade-off, and the page will need to define what, if any, trade-off is acceptable.
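As one illustration of what 'managing the Date() object' might mean (an assumption for this sketch, not a settled design), timestamps could be quantized to a coarse tick so that keystroke intervals lose the resolution cadence profiling needs:

```javascript
// Hypothetical mitigation: round timestamps down to a coarse tick.
// 100 ms is an arbitrary choice for this sketch.
const TICK_MS = 100;

function quantizedNow(realNow = Date.now()) {
  return Math.floor(realNow / TICK_MS) * TICK_MS;
}

// Two keystrokes 37 ms apart can land on the same coarse tick,
// erasing the fine-grained cadence signal.
const t1 = quantizedNow(1000012);
const t2 = quantizedNow(1000049);
console.log(t1 === t2); // true: both quantize to 1000000
```

The obvious cost is that legitimate uses of high-resolution timing (animation benchmarks, games) degrade, which is exactly the usability trade-off the wiki page would need to weigh.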
There will be another group, such as managing the values returned by the navigator object, that can be partially solved by WebKit but which will rely on clients not undoing the work by customizing the user agent header for example.
Another group will contain measures that are not easily soluble and need to be fixed before the mode can make any claim to be helpful at all, such as the fact that plugins have largely unfettered access to the disk.
So even at first glance it's apparent that most of the measures required would not be suitable for 'normal' browsing. Instead they implement behaviour that will be useful to browsers that want to provide an enhanced private or anonymous browsing mode, and for which implementing it themselves would be non-trivial and even counterproductive.
Unlike Firefox, a WebKit implementation cannot solve the problem in its own right. It can only solve the parts it shields from clients. So a big part of it will be defining the sort of things clients need to do themselves and the way to do it best in order to interoperate with what WebKit implements.
So to summarize: my approach at this point consists mostly of hot air.
> Sorting fonts and plugin lists is a possibility, but that only reduces entropy, so they are still useful for fingerprinting. Even if you remove the font list, an attacker could still have the browser render strings in popular fonts and test for the size of the corresponding elements in order to test whether the font is installed.
Sounds like meat for the wiki page!
You wouldn't remove the font or plugin list but decide on a list for the mode and stick to it. This means all WebKit clients would present the same list when in the mode, which would help satisfy the requirement to make all WebKit browsers indistinguishable from each other, at least as far as fonts and plugins are concerned. There's no reason the list in either case has to be restrictive, and it does not have to conceal the fact that the client is in fingerprinting-resistance mode. It just has to be large enough to support non-exotic browsing requirements and stable for long periods of time.
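A minimal sketch of that idea, with an invented canonical list and helper name: the mode reports the fixed list in a fixed order (intersected with what's genuinely available) rather than whatever is really installed.

```javascript
// Invented canonical list; a real mode would standardise one list
// across all WebKit clients and keep it stable over time.
const CANONICAL_FONTS = ["Arial", "Courier New", "Georgia", "Times New Roman", "Verdana"];

// Report only canonical fonts that are genuinely available, in canonical
// order, so a metrics probe can't contradict the advertised list.
function reportedFonts(installedFonts) {
  const installed = new Set(installedFonts);
  return CANONICAL_FONTS.filter((f) => installed.has(f));
}

console.log(reportedFonts(["Verdana", "Wingdings", "Arial", "Papyrus"]));
// ["Arial", "Verdana"] - distinctive fonts like Wingdings are never exposed
```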
The brute force approach applies to plugins as well as fonts and is something that the mode would have to protect against by actually only supporting the fonts/plugins it says it does. This is a usability penalty.
Since plugins can retrieve the font list by actually walking the directory tree on disk, the major challenge here would seem to be intercepting these calls so that they can be safely manipulated. I don't know enough about the current method of sandboxing plugins in WebKit, if there is one, to say whether it would allow something like this. Is this something that Chromium already has the potential to do in its own right?
There are a lot of interesting links here:
Going back to an assumption I stated in the original bug report: 'Private Browsing' actually doesn't have a formal definition in WebKit - at least not in the code. It seems to me that the narrowest available definition common to all WebKit-based implementations of private browsing is to ensure that a private browsing session leaves no traces of the activity from that session on disk or in memory.
There is more to private browsing from a client implementation point of view, such as ensuring state from a private browsing session is not available from a normal session. But that's very much outside WebKit's remit.
Given that a lot of the discussion about private browsing in research papers pays attention to the fact that remote intrusions on privacy are possible in private browsing mode, I think there may be some merit in defining the scope of Settings::PrivateBrowsing explicitly, so that any measures WebKit can take against remote tracking can be addressed under a separate option.
Here's a study of private browsing modes in web browsers that defines some of the objectives:
I don't think this work falls under the umbrella of private browsing mode, at least not as it is understood traditionally.
(In reply to comment #10)
> Here's a study of private browsing modes in web browsers that defines some of the objectives:
> I don't think this work falls under the umbrella of private browsing mode, at least not as it is understood traditionally.
I agree with you, but that paper does regard a 'web attacker' (section 2.2) as one of the models private browsing attempts to defend against. It adduces as evidence of this the fact that most private browsing modes enforce a degree of state separation between private browsing and non-private browsing modes and also between private browsing sessions. It identifies fingerprinting techniques as one of the ways in which this goal can be undermined.
I think the properties it identifies are, more than anything else, by-products of the way private browsing implements the requirement to keep the state of previous browsing sessions locally undetectable. It's easier, for example, to keep cookies from a private browsing session in a container that will later be deleted. So cookies from normal browsing aren't available in private mode. This also prevents normal browsing cookies from getting updated in private browsing mode, which would inadvertently leave a local trace of the private browsing session.
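The container idea can be sketched as an ephemeral in-memory store that is simply discarded at the end of the session (the class and method names are invented for illustration):

```javascript
// Each browsing mode gets its own cookie container; the private-mode one
// never touches persistent storage and is cleared when the session ends.
class EphemeralCookieJar {
  constructor() { this.cookies = new Map(); }
  set(name, value) { this.cookies.set(name, value); }
  get(name) { return this.cookies.get(name); }
  endSession() { this.cookies.clear(); } // no trace survives the session
}

const normalJar = new EphemeralCookieJar(); // stands in for the persistent jar
normalJar.set("login", "alice");

const privateJar = new EphemeralCookieJar();
privateJar.set("session", "xyz");

// Normal cookies are invisible in private mode (and can't be updated there)...
console.log(privateJar.get("login")); // undefined
// ...and private cookies vanish when the session ends.
privateJar.endSession();
console.log(privateJar.get("session")); // undefined
```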
So there is certainly room for clients and also WebKit to be more clear about the scope of private browsing. It's hard to see it as anything but ensuring the session leaves no traces on disk or display.
I've added a wiki page: https://trac.webkit.org/wiki/Fingerprinting