
Conversation

aryanrahar

What / Why

  • Clarifies TimesFM’s training‑time patch masking per the ICML paper: for each series in a batch,
    sample r ∈ [0, p−1] and mask the first r positions of the first input patch, i.e. the start of the
    context window.
  • Adds a tiny Torch reference utility (src/timesfm/train_utils/masking.py) plus tests
    (v1/tests/test_training_mask.py) so the paper’s description is immediately reproducible; an
    illustrative sketch of such a helper appears after this list.
  • No change to inference.
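
A minimal sketch of the training-time patch masking described above, assuming a PyTorch setup. The module path follows the one named in this PR, but the function names and signatures here are illustrative assumptions rather than the repo's actual API.

```python
import torch


def sample_first_patch_mask(
    batch_size: int,
    context_len: int,
    patch_len: int,
    generator: torch.Generator | None = None,
) -> torch.Tensor:
    """Returns a boolean mask of shape (batch_size, context_len).

    For each series, r is drawn uniformly from [0, patch_len - 1] and the
    first r positions (the start of the first input patch) are marked True
    (masked); all remaining positions are False.
    """
    # r has shape (batch_size, 1); torch.randint's upper bound is exclusive,
    # so high=patch_len yields r in [0, patch_len - 1].
    r = torch.randint(0, patch_len, (batch_size, 1), generator=generator)
    positions = torch.arange(context_len).unsqueeze(0)  # (1, context_len)
    return positions < r  # broadcasts to (batch_size, context_len)


def apply_first_patch_mask(
    x: torch.Tensor, patch_len: int
) -> tuple[torch.Tensor, torch.Tensor]:
    """Zeroes the first r positions of each series; returns (masked_x, mask)."""
    mask = sample_first_patch_mask(x.shape[0], x.shape[1], patch_len)
    return x.masked_fill(mask, 0.0), mask
```

Zeroing is used here only to make the effect visible; a training pipeline would typically also feed the boolean mask to the model so masked positions are ignored rather than treated as literal zeros.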

Docs

  • Adds a README/docs subsection “Training‑time Patch Masking (per paper)” with links to the paper and
    to this issue.

Notes

  • Repo is inference‑focused for v2.5; the helper is illustrative for users training out‑of‑tree.

Verification Checklist

  • Run unit tests with pytest -q; expect all tests to pass.
  • Optional: run a small demo to view the sampled r and confirm that the first patch is partially
    zeroed according to r (an illustrative snippet appears after this checklist).
  • Confirm that no core inference code paths changed.
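
A hypothetical snippet for the optional demo step above, reusing the sketch from the "What / Why" section; the import path and helper name are assumptions and would need to match wherever the utility actually lives.

```python
import torch

from timesfm.train_utils.masking import apply_first_patch_mask  # assumed path

x = torch.randn(4, 64)  # 4 series, context length 64
masked_x, mask = apply_first_patch_mask(x, patch_len=32)

r = mask.sum(dim=1)  # number of zeroed positions per series
print("sampled r per series:", r.tolist())
assert torch.all(masked_x[mask] == 0), "masked positions must be zeroed"
assert torch.all(r < 32), "r must lie in [0, patch_len - 1]"
```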


google-cla bot commented Oct 1, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.
