I ran into this limit a few times when taking heap snapshots of large heaps.
In WebKit the Web Inspector frontend page is just a WebPageProxy:

    void WebInspectorProxy::createFrontendPage()
    {
        if (m_inspectorPage)
            return;

        m_inspectorPage = platformCreateFrontendPage();
        ...
        trackInspectorPage(m_inspectorPage);
        ...
    }

However, these inspector frontend pages are put into special WebProcessPools meant only for inspector pages:

    WebProcessPool& inspectorProcessPool(unsigned inspectionLevel)
    {
        // Having our own process pool removes us from the main process pool and
        // guarantees no process sharing for our user interface.
        WebProcessPool*& pool = (inspectionLevel == 1) ? s_mainInspectorProcessPool : s_nestedInspectorProcessPool;
        if (!pool) {
            auto configuration = API::ProcessPoolConfiguration::createWithLegacyOptions();
            pool = &WebProcessPool::create(configuration.get()).leakRef();
        }
        return *pool;
    }

This makes me think we could set a bit on the WebProcessProxy / WebProcess that says "you're a debugger process". Then, when the first WebPage is created in that process pool, it could propagate the property to the pressure handler:

    MemoryPressureHandler::singleton()->setDebuggerProcess();

This could then raise or otherwise affect the limit.
Joe's investigation sounds like a good plan. Is there any reasonable way to test this without screwing up bots? Obviously we can try to take a huge heap snapshot, but considering how slow bots are already compared to dev boxes, I'm not sure a test like this would be stable or useful.
I'm also skeptical of testing that uses huge amounts of memory.
<rdar://problem/67896698>