Content blockers have been a great addition to WebKit-based browsers like Safari. They prevent abuse by ad networks, and many people are seeing the benefits: faster page loads and better battery life.
But there's a downside to this content blocking: it's hurting many smaller sites that rely on advertising to keep the lights on. More and more of these sites are pleading with visitors to disable content blockers; this is just one example:
In effect, these smaller sites are collateral damage in a larger battle. And that's a big problem for the long-term health of independent content on the web.
I think it's time we start looking at the problem differently. It's resource abuse that's the root cause, so why aren't there limits on those resources?
Great code happens when developers are given resource constraints: look at what folks did with the original iPhone and its 128 MB of memory and 400 MHz CPU. Or even further back with 128 KB of RAM in the original Mac or 640 KB in DOS. Lack of computing resources inspires creativity.
The situation I'm envisioning is that a site can show me any advertising it wants as long as it keeps the overall page size under a fixed amount, say one megabyte. If they work hard to make their site efficient, I'm happy to provide my eyeballs.
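As a rough illustration of what such a per-page budget could mean in practice, here is a minimal sketch of the accounting involved. All names and the 1 MB figure are hypothetical; a real implementation would live in the engine's loader, charging each subresource's transfer size against the page's budget and refusing fetches once it's exhausted.

```javascript
// Sketch of a per-page byte budget (hypothetical names; not WebKit code).
// The loader would charge each subresource's transfer size against the
// page's budget and block further fetches once the budget is exhausted.

const PAGE_BUDGET_BYTES = 1 * 1024 * 1024; // the hypothetical 1 MB cap

function makeBudget(limit = PAGE_BUDGET_BYTES) {
  let used = 0;
  return {
    // Returns true if the resource fits; false means "block this fetch".
    charge(bytes) {
      if (used + bytes > limit) return false;
      used += bytes;
      return true;
    },
    used: () => used,
  };
}
```

For example, a page that has already loaded 900 KB of resources could still fetch small assets, but a 200 KB ad script would push it over the 1 MB line and be refused.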
If these limits are deployed in WebKit on iOS, there will also be an immediate incentive for web developers to be more efficient with their content. A wonderful side-effect is that other platforms will benefit from that incentive.
I realize that due to the asynchronous nature of the web, these limits are harder than they sound. But that should not prevent this idea from becoming a goal.
A simple dialog would put the user in control of the situation:
"The site example.com uses 5 MB of scripting. Allow it?"
It's really hard to impose CPU limits unless you run JS in a separate thread or process, to which you can assign priority. Doing so is something we've thought about, but it's a large engineering effort.
It's possible we could track memory use on a per-document basis, but there are all sorts of code paths that can trigger memory use (often in code not controlled by WebKit), and again it's hard to track all of that.
WebKit currently does have some resource limits (at least on Apple platforms):
- Maximum memory limit per tab, and an overall memory limit for all tabs. The memory limit is very high, though, because the counter-measure for violating it is that your webpage gets killed, and many popular websites require a lot of memory. (Think top-10 rich web apps.) The limit is also much higher on macOS than on iOS, and higher for foreground tabs than for background tabs.
- Maximum background CPU limit per tab (on macOS, capped so a background tab drains no more than ~1% of the battery; on iOS, background tabs have practically no ability to run).
Things we currently do not have:
- CPU usage limits on the foreground tab
- Per-frame resource limits (so that ad frames could have a separate resource quota from the main content)
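To make the per-frame idea concrete, here is a small sketch of what separate quotas for the main frame and ad frames could look like. Everything here is hypothetical (the function, the frame IDs, and the specific byte limits); it just extends the same byte-budget accounting to a per-frame granularity.

```javascript
// Hypothetical per-frame resource quotas: the main frame gets a larger
// byte budget than third-party ad iframes, and each frame's usage is
// tracked independently.

function makeFrameQuotas({ mainBytes, adFrameBytes }) {
  const used = new Map(); // frameId -> bytes consumed so far
  return {
    // Returns true if the load fits the frame's quota; false blocks it.
    charge(frameId, bytes, isAdFrame) {
      const limit = isAdFrame ? adFrameBytes : mainBytes;
      const total = (used.get(frameId) || 0) + bytes;
      if (total > limit) return false; // over quota: block the load
      used.set(frameId, total);
      return true;
    },
  };
}
```

With, say, a 5 MB budget for the main frame and 256 KB per ad frame, the main content loads normally while a bloated ad iframe gets cut off without taking the rest of the page down with it.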
This bug report got a bit of attention on Twitter thanks to Eric Meyer. One of the good things to come from that is learning that Alex Russell is prototyping this idea in Chromium: https://chromium-review.googlesource.com/c/chromium/src/+/1265506
Forgot to include the Twitter thread: https://twitter.com/ppk/status/1090696014124716033
Out of the various possible resource limits, I believe Alex's branch implements a limit on the input size of resources, but not on runtime memory or CPU usage. It probably helps indirectly with bandwidth and load time, despite not directly targeting them.
As other prior art, Firefox at one point planned to block resources that are known trackers and which take more than a given amount of time to load. This is another possibility for resource limits.
My branch doesn't limit memory, but it applies a brake on main-thread CPU in the form of a long-task limit. Currently, if you enable NSM (Never-Slow Mode), the entire page gets paused (no further tasks run until the next input) if an individual main-thread script takes more than 200ms. This number is a total guess! It could be too high, or too low, and it doesn't do anything to limit recurring timers (a worry I don't have a reasonable approach for yet). It also doesn't apply any limits to script (or resources) in workers. The primary goal is to keep the main thread clean and create an opportunity to flag "this page is slow" UI of some sort (totally TBD).
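The pause-on-long-task policy described above can be sketched as a task-queue wrapper. This is a toy model, not the actual Chromium implementation: each task is timed, and if one exceeds the threshold the queue stops draining until `resume()` is called (standing in for "next input"). The injectable clock is just there to make the sketch testable.

```javascript
// Toy model of the long-task brake (hypothetical; not the Chromium code).
// Each queued task is timed; if one runs longer than the threshold, the
// queue pauses and no further tasks run until resume() — which stands in
// for "the next user input" in the real policy.

const LONG_TASK_MS = 200; // the guessed threshold from the comment above

function makeTaskQueue(now = Date.now) {
  let paused = false;
  const pending = [];
  function drain() {
    while (!paused && pending.length) {
      const task = pending.shift();
      const start = now();
      task();
      if (now() - start > LONG_TASK_MS) paused = true; // brake: stop the queue
    }
  }
  return {
    post(task) { pending.push(task); drain(); },
    resume() { paused = false; drain(); },
    isPaused: () => paused,
  };
}
```

One consequence this makes visible: a single 300ms task strands every task queued behind it, which is exactly the "entire page gets paused" behavior — and why recurring timers are the hard case, since they re-queue themselves after every resume.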
I'm also unsure how to limit memory in an effective way without totally kneecapping games and implicating the graphics stack in hard-to-reason-about ways.
One of the unstated goals of NSM is to create a uniform policy, akin to TLS lock-icon badging, that developers can easily check compliance with at development time.
Would very much like to discuss tradeoffs if folks are interested!