We already discard all JIT code when coming under memory pressure. We should also stop generating new code until the pressure comes back down, to avoid sawtoothing.
Created attachment 303191 [details]
Comment on attachment 303191 [details]
Let's only do this for non-visible tabs as a first cut.
Let's try doing this for processes that are inactive (haven't been foreground-active for an hour and are not playing sound).
Created attachment 306440 [details]
Comment on attachment 306440 [details]
I'm not sure any of this is a good idea.
What if the background tab starts doing computation? The LLInt can be 10,000x slower in extreme cases. That's a lot of wasted power. The tab could even still be spinning when you switch back to it.
I don't think we want this.
What if instead, background tab meant that the thresholds for tiering up are scaled higher? Could be 3x higher or something. That would ensure that in my scenario, the hot code will still tier up and not waste as much power.
But even this feels somehow wrong. The JIT thresholds are the way they are because we believe they make code run faster and use less power. If some code causes us to instead waste power by JITing too much, then maybe our thresholds are wrong for all kinds of tabs, not just background ones.
Can you provide context for why you think this is good?