Faster rendering of text and stderr outputs #141


Draft · wants to merge 6 commits into base: main

Conversation

@krassowski (Owner) commented Jan 7, 2025

References

Fixes #16957

Code changes

Makes the auto-linking logic execute asynchronously, at a pace matching the approximate refresh rate of standard monitors. Appends new nodes to the textual streams without removing the existing ones, minimising layout computation cost.

  1. Adds internal, private logic for managing rendering in step with frame rendering using requestAnimationFrame (see the first sketch after this list).
    • To ensure that no frames are dropped (i.e. no single frame takes longer than 1s/30 frames ≈ 33ms) in notebooks with multiple large outputs, only one output is rendered at a time, with a time-based limit on how many fragments of text are auto-linked in a single frame
    • To make sure that all outputs have some useful presentation, we cycle through the rendering queue of textual outputs which need auto-linking
  2. Adds logic for discovering which parts of the previously rendered DOM can be reused when streaming text (see the second sketch after this list). A similar approach was previously implemented for the auto-linking logic. Briefly, the previously streamed text can be reused if:
    • it did not contain a partial link (e.g. chunk one is `www.` and chunk two is `example.com`)
    • there was no deletion with `\b` or `\r` in the new chunk
  3. Adds logic to retain the selection when updating text (see the third sketch after this list). This is required because when we replace `www.example.com` in the partially rendered text output `aaa www.example.com ccc`, we also implicitly destroy the selection. This is also an improvement over the status quo, where the selection was destroyed each time a stream chunk was added.
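
A minimal sketch of the frame-budgeted scheduling described in point 1, assuming a simple queue and a hypothetical `autolink` callback (none of these names come from the actual implementation):

```typescript
// Illustrative sketch of frame-budgeted auto-linking (hypothetical names).
const FRAME_BUDGET_MS = 1000 / 30; // ≈ 33 ms per frame at 30 fps

interface PendingOutput {
  /** Text fragments still waiting to be auto-linked. */
  fragments: Text[];
  /** Wraps URLs found in a fragment in anchor elements. */
  autolink(fragment: Text): void;
}

const queue: PendingOutput[] = [];
let scheduled = false;

function schedule(): void {
  if (scheduled) {
    return;
  }
  scheduled = true;
  requestAnimationFrame(processFrame);
}

function processFrame(frameStart: DOMHighResTimeStamp): void {
  scheduled = false;
  // Only one output is processed per frame; take it off the front of the queue.
  const output = queue.shift();
  if (!output) {
    return;
  }
  while (output.fragments.length > 0) {
    // Time-based limit: stop auto-linking once the frame budget is spent.
    if (performance.now() - frameStart > FRAME_BUDGET_MS) {
      break;
    }
    output.autolink(output.fragments.shift()!);
  }
  if (output.fragments.length > 0) {
    // Unfinished outputs go to the back so every output gets some presentation.
    queue.push(output);
  }
  if (queue.length > 0) {
    schedule();
  }
}
```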
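
The reuse check from point 2 could be approximated roughly as follows; the regular expressions and function names are illustrative assumptions, not the PR's actual heuristics:

```typescript
// Sketch of the "can we keep the already-rendered text?" decision (point 2).

// Characters that rewrite previously printed text (inside a character class,
// \b matches the backspace character).
const CARRIAGE_CONTROL = /[\b\r]/;

// Rough test for a URL that may continue in the next chunk,
// e.g. the previous chunk ended with "www." or "https://example.".
const PARTIAL_LINK_AT_END = /(https?:\/\/|www\.)\S*$/;

function canReusePreviousRender(previousText: string, newChunk: string): boolean {
  // A \b or \r in the new chunk deletes part of what was already rendered,
  // so the existing nodes cannot simply be appended to.
  if (CARRIAGE_CONTROL.test(newChunk)) {
    return false;
  }
  // If the previous text ends in what might be the first half of a link
  // ("www." followed later by "example.com"), the old nodes need re-linking.
  if (PARTIAL_LINK_AT_END.test(previousText)) {
    return false;
  }
  return true;
}
```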
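
And the selection retention from point 3 might look along these lines; again a sketch with hypothetical helpers, recording the selection as character offsets within the output node and re-applying it after the update:

```typescript
// Sketch of retaining the user's selection across a DOM update (point 3).
interface SavedSelection {
  start: number;
  end: number;
}

function saveSelection(container: Node): SavedSelection | null {
  const selection = document.getSelection();
  if (!selection || selection.rangeCount === 0) {
    return null;
  }
  const range = selection.getRangeAt(0);
  if (!container.contains(range.commonAncestorContainer)) {
    return null;
  }
  // Measure the selection as character offsets from the start of the container.
  const prefix = range.cloneRange();
  prefix.selectNodeContents(container);
  prefix.setEnd(range.startContainer, range.startOffset);
  const start = prefix.toString().length;
  return { start, end: start + range.toString().length };
}

function restoreSelection(container: Node, saved: SavedSelection): void {
  // Walk the (possibly new) text nodes until the saved offsets are reached.
  const walker = document.createTreeWalker(container, NodeFilter.SHOW_TEXT);
  const range = document.createRange();
  let offset = 0;
  let startSet = false;
  let node: Node | null;
  while ((node = walker.nextNode())) {
    const length = node.textContent?.length ?? 0;
    if (!startSet && offset + length >= saved.start) {
      range.setStart(node, saved.start - offset);
      startSet = true;
    }
    if (startSet && offset + length >= saved.end) {
      range.setEnd(node, saved.end - offset);
      break;
    }
    offset += length;
  }
  const selection = document.getSelection();
  selection?.removeAllRanges();
  selection?.addRange(range);
}
```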

User-facing changes

It's faster

The output content is displayed as soon as possible after it is returned from the kernel - no more choking the rendering pipeline when too much output is streamed at once.

The links show up with a delay

The links are displayed as soon as they are processed

Selection is fun again

The selection in an output that is being streamed is no longer destroyed when new output arrives

Future work

Potential improvements:

  • We could try to infer the refresh rate of the monitor using the intervals between requestAnimationFrame callbacks and use it instead of a hard-coded time limit (sketched after this list); this might improve the experience for users with high-refresh-rate monitors who also use high-frequency CPUs
  • We could avoid the layout cost by imposing a size restriction on the output when no new text was streamed (and it is only being auto-linked). This is because the links should be rendered using the same font (only in a different colour).
    • I attempted this, but did not see significant gains in initial experiments
  • Instead of repeatedly calling requestAnimationFrame (and cancelling it), we could call it once and update the arguments when a new chunk is streamed; there is an associated overhead (approx 0.6s for every 10s, so not terrible, but noticeable); state could instead be passed via a WeakMap or similar.
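
As an illustration of the first item above, the refresh interval could be estimated from requestAnimationFrame timestamps roughly like this (purely a sketch, not part of the PR):

```typescript
// Estimate the display refresh interval (in ms) from requestAnimationFrame
// timestamps. Illustrative only; sample count and median choice are assumptions.
function estimateRefreshInterval(samples = 30): Promise<number> {
  return new Promise(resolve => {
    const deltas: number[] = [];
    let previous: number | null = null;
    function tick(timestamp: DOMHighResTimeStamp): void {
      if (previous !== null) {
        deltas.push(timestamp - previous);
      }
      previous = timestamp;
      if (deltas.length < samples) {
        requestAnimationFrame(tick);
      } else {
        // Use the median to reduce the influence of occasional dropped frames.
        deltas.sort((a, b) => a - b);
        resolve(deltas[Math.floor(deltas.length / 2)]);
      }
    }
    requestAnimationFrame(tick);
  });
}

// Example usage: replace a hard-coded budget once the estimate is available.
// estimateRefreshInterval().then(ms => (frameBudgetMs = ms));
```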

Backwards-incompatible changes

None

@krassowski added the enhancement (New feature or request) label on Jan 7, 2025
// full windowing notebook mode the output nodes are never removed,
// but instead the cell gets hidden (usually with `display: none`
// or in case of the active cell - opacity tricks).
if (!wasEverVisible) {

Check warning (Code scanning / CodeQL): Useless conditional. This negation always evaluates to false.
Labels: enhancement (New feature or request), pkg:rendermime
Projects: None yet
1 participant