
feat(PackageCacheFile): add an in-memory expiry map #42531

Open
Thooms wants to merge 1 commit into renovatebot:main from Thooms:paps/perf-cache-cleanup

Conversation

@Thooms Thooms commented Apr 10, 2026

Changes

This PR introduces a small in-memory map that stores expiry times for keys in the PackageCacheFile implementation. We noticed in our deployment that cache cleanup was becoming a bottleneck on a non-trivial number of repositories.

This allows for faster expiry lookups, which make up the bulk of the loop in destroy(). Note that this doesn't remove the overhead of looping through all entries, but it at least avoids loading and deserializing each entry's data from disk.
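The idea can be sketched as follows. This is an illustrative model, not Renovate's actual PackageCacheFile code: the class and method names (ExpiryCache, isExpired, cleanup) are hypothetical, and the real implementation would fall back to reading and deserializing the cache file when a key is missing from the map.

```typescript
// Hypothetical sketch: keep an in-memory map of key -> expiry timestamp,
// populated on set(), so that cleanup can drop expired entries without
// deserializing each cache file from disk.
class ExpiryCache {
  // key -> expiry time in epoch milliseconds
  private readonly expiryMap = new Map<string, number>();

  // Record the expiry when a value is written (warm path).
  set(key: string, ttlMinutes: number, now = Date.now()): void {
    this.expiryMap.set(key, now + ttlMinutes * 60_000);
  }

  // Returns undefined when the expiry is not in memory; a real
  // implementation would then fall back to a disk read (cold path).
  isExpired(key: string, now = Date.now()): boolean | undefined {
    const expiry = this.expiryMap.get(key);
    if (expiry === undefined) {
      return undefined; // not cached in memory -> would need a disk read
    }
    return expiry <= now;
  }

  // Remove every key known (from memory) to be expired, and report
  // which keys were dropped.
  cleanup(now = Date.now()): string[] {
    const removed: string[] = [];
    for (const [key, expiry] of this.expiryMap) {
      if (expiry <= now) {
        this.expiryMap.delete(key);
        removed.push(key);
      }
    }
    return removed;
  }
}
```

Under this model, the cold path still has to touch disk for keys that were never set() in the current process, which matches the benchmark below: the map only speeds up entries whose expiry was recorded in memory.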

Result of the provided benchmark:

 ✓ lib/util/cache/package/impl/file.bench.ts > PackageCacheFile destroy() 8823ms
     name                                                           hz      min      max     mean      p75      p99     p995     p999      rme  samples
   · cold path — no expiryMap (disk read per entry)             2.7569   357.96   369.71   362.73   364.04   369.71   369.71   369.71   ±0.69%       10
   · warm path — expiryMap populated via set() (no disk read)  18.6057  49.1706  75.6150  53.7469  50.6335  75.6150  75.6150  75.6150  ±11.38%       10

 BENCH  Summary

  warm path — expiryMap populated via set() (no disk read) - lib/util/cache/package/impl/file.bench.ts > PackageCacheFile destroy()
    6.75x faster than cold path — no expiryMap (disk read per entry)

Context

Please select one of the following:

  • This closes an existing Issue, Closes: #
  • This doesn't close an Issue, but I accept the risk that this PR may be closed if maintainers disagree with its opening or implementation

AI assistance disclosure

Did you use AI tools to create any part of this pull request?

Please select one option and, if yes, briefly describe how AI was used (e.g., code, tests, docs) and which tool(s) you used.

  • No — I did not use AI for this contribution.
  • Yes — minimal assistance (e.g., IDE autocomplete, small code completions, grammar fixes).
  • Yes — substantive assistance (AI-generated non‑trivial portions of code, tests, or documentation).
  • Yes — other (please describe):

Documentation (please check one with an [x])

  • I have updated the documentation, or
  • No documentation update is required

How I've tested my work (please select one)

I have verified these changes via:

  • Code inspection only, or
  • Newly added/modified unit tests, or
  • No unit tests, but ran on a real repository, or
  • Both unit tests + ran on a real repository

Also tested on an internal repository, which showed a ~50% speedup on cache cleanup.

@github-actions github-actions bot requested a review from viceice April 10, 2026 10:45
@Thooms Thooms force-pushed the paps/perf-cache-cleanup branch from 7373434 to 48bf91b on April 10, 2026 10:49
@viceice viceice requested a review from zharinov April 10, 2026 17:22
@zharinov
Collaborator

Hey, I will be able to review the PR in a couple of hours.

@zharinov zharinov left a comment


I don't agree with the general direction of how this should be fixed. Stay tuned; I'm working on a PR which should help solve your problem using another approach.

