We propose new HTML Canvas APIs for rendering HTML content into a canvas, for both Canvas 2D and WebGL.
Authors: Stephen Chenney, Chris Harrelson, Khushal Sagar, Vladimir Levin, Fernando Serboncini
Champions: Stephen Chenney, Chris Harrelson
This proposal is a subset of a previous proposal covering APIs to allow live HTML elements.
There is no web API to easily render complex layouts of text and other content into a `<canvas>`. As a result, `<canvas>`-based content suffers in accessibility, internationalization, performance, and quality.
- Styled, Laid Out Content in Canvas. There’s a strong need for better styled text support in Canvas. Examples include chart components (legend, axes, etc.), rich content boxes in creative tools, and in-game menus.
- Accessibility Improvements. There is currently no guarantee that the fallback content used for `<canvas>` accessibility matches the rendered content, and such fallback content can be hard to generate. With this API, elements drawn into the canvas bitmap will match their corresponding canvas fallback.
- Composing HTML Elements with Shaders. A limited set of CSS shaders, such as filter effects, are already available, but there is a desire to use general WebGL shaders with HTML.
- HTML Rendering in a 3D Context. 3D aspects of sites and games need to render rich 2D content into surfaces within a 3D scene.
- The `layoutsubtree` attribute on a `<canvas>` element allows its descendant elements to have layout (*), and causes the direct children of the `<canvas>` to have a stacking context and become a containing block for all descendants. Descendant elements of the `<canvas>` still do not paint or hit-test, and are not discovered by UA algorithms like find-in-page.
- The `CanvasRenderingContext2D.drawHTML(element, x, y)` method renders `element` and its subtree into a 2D canvas at offset x and y, so long as `element` is a direct child of the `<canvas>`. It has no effect if `layoutsubtree` is not specified on the `<canvas>`.
- The `WebGLRenderingContext.texHTML2D(..., element)` method renders `element` into a WebGL texture. It has no effect if `layoutsubtree` is not specified on the `<canvas>`.
- The `CanvasRenderingContext2D.setHitTestRegions([{element: ..., rect: {x: ..., y: ..., width: ..., height: ...}}, ...])` (and `WebGLRenderingContext.setHitTestRegions(...)`) API takes a list of elements and `<canvas>`-relative rects indicating where each element paints relative to the backing buffer of the canvas. These rects are then used to redirect hit tests for mouse and touch events automatically from the `<canvas>` element to the drawn element.
(*) Without `layoutsubtree`, geometry APIs such as `getBoundingClientRect()` on these elements return an empty rect. They do have computed styles, however, and are keyboard-focusable.
`drawHTML(element, ...)` takes the CTM (current transform matrix) of the canvas into consideration. The image drawn into the canvas is sized to `element`'s `devicePixelContentBox`; content outside those bounds (including ink and layout overflow) is clipped. The `drawHTML(element, x, y, dwidth, dheight)` variant resizes the image of `element`'s subtree to `dwidth` and `dheight`.
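A short sketch of both behaviors for illustration, reusing the hypothetical `ctx` and `content` from the snippet above:

```js
// drawHTML honors the current transform matrix.
ctx.save();
ctx.rotate(Math.PI / 8);
ctx.drawHTML(content, 0, 0); // drawn at the element's devicePixelContentBox size
ctx.restore();

// The five-argument variant scales the element's image to a
// 200x100 destination rect instead of its natural size.
ctx.drawHTML(content, 10, 10, 200, 100);
```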
In addition, a `fireOnEveryPaint` option is added to `ResizeObserverOptions`, allowing script to be notified whenever any descendants of a `<canvas>` may render differently, so they can be redrawn. The callback to the resize observer will be called at resize observer timing, which is after DOM style and layout, but before paint.
The same element may be drawn multiple times.
Once drawn, the resulting canvas image is static. Subsequent changes to the element will not be reflected in the canvas, so the element must be explicitly redrawn if an author wishes to see the changes.
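Because the drawn image is static, a typical pattern is to pair `drawHTML` with the observer option described above and redraw on change. A sketch, reusing the hypothetical names from the earlier snippets:

```js
// Redraw whenever the observed subtree may paint differently.
// The callback runs after style and layout, before paint.
const observer = new ResizeObserver(() => {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawHTML(content, 10, 10);
});
observer.observe(content, { fireOnEveryPaint: true });
```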
The descendant elements of the `<canvas>` are considered fallback content used to provide accessibility information. See Issue #11 for an ongoing discussion of accessibility concerns.
Offscreen canvas contexts and detached canvases are not supported because drawing DOM content when the canvas is not in the DOM poses technical challenges. See Issue #2 for further discussion.
NOTE: When using this feature in a DevTrial, take steps to avoid leaking private information, as privacy controls to disable painting of PII are still in progress.
```webidl
interface CanvasRenderingContext2D {
    ...
    [RaisesException]
    void drawHTML(Element element, unrestricted double x, unrestricted double y);
    [RaisesException]
    void drawHTML(Element element, unrestricted double x, unrestricted double y,
                  unrestricted double dwidth, unrestricted double dheight);
};

interface WebGLRenderingContext {
    ...
    [RaisesException]
    void texHTML2D(GLenum target, GLint level, GLint internalformat,
                   GLenum format, GLenum type, Element element);
};
```
See here for an example of how to use the API. It should render like the following snapshot (the blue rectangle indicates the bounds of the `<canvas>`, and the black rectangle the element passed to `drawHTML`).
See here for an example of how to use the WebGL `texHTML2D` API to populate a GL texture with HTML content. The example should render an animated cube, like in the following snapshot. Note how the border box fills the entire face of the cube. To adjust that, modify the texture coordinates for rendering the cube and possibly adjust the texture wrap parameters, or wrap the content in a larger `<div>` and draw the `<div>`.
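For reference, a minimal sketch of the texture upload step, assuming a `<canvas layoutsubtree>` with a WebGL context and a direct child element `face` (the id is illustrative):

```js
const gl = canvas.getContext('webgl');
const face = document.getElementById('face');

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Proposed API: upload the rendered HTML subtree into the bound texture.
gl.texHTML2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, face);
// HTML content is generally not power-of-two sized, so use the standard
// WebGL 1 non-power-of-two settings: clamp the wrap modes, no mipmaps.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
```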
A demo of the same thing using an experimental extension of three.js is here. Further instructions and context are here.
See here for an example utilizing the `setHitTestRegions` and `fireOnEveryPaint` APIs to enable use of interactive elements like `<input>` within a canvas. The output after clicking on the input element and typing in "my input" looks like the following snapshot.
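A sketch of the wiring, assuming an `<input id="field">` that is a direct child of the canvas, drawn at (20, 20); the id, coordinates, and sizes are illustrative:

```js
const field = document.getElementById('field');
ctx.drawHTML(field, 20, 20);

// Redirect mouse and touch hit tests within this rect from the
// <canvas> to the drawn element, making the input interactive.
ctx.setHitTestRegions([
  { element: field, rect: { x: 20, y: 20, width: 150, height: 30 } },
]);
```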
Both painting (via canvas pixel readbacks or timing attacks) and invalidation (via `fireOnEveryPaint`) have the potential to leak sensitive information, and this is prevented by excluding sensitive information when painting. While an exhaustive list cannot be enumerated, sensitive information includes:
- cross-origin data in embedded content (e.g., `<iframe>`, `<img>`), `<url>` references (e.g., `background-image`, `clip-path`), and SVG (e.g., `<use>`).
- system colors, themes, or preferences.
- spelling and grammar markers.
- search text (find-in-page) and text-fragment (fragment url) markers.
- visited link information.
- form autofill information not otherwise available to JavaScript.
SVG's `<foreignObject>` can be combined with data URI images and canvas to access the pixel data of HTML content (example), and implementations currently have mitigations to prevent leaking sensitive content. As an example, an `<input>` with a spelling error is still painted, but any indication of spelling errors, which could expose the user's spelling dictionary, is not painted. Similar mitigations should be used for `drawHTML`, but need to be expanded to cover additional cases.
The HTML-in-Canvas features may be enabled by passing the `--enable-blink-features=CanvasDrawElement` flag to Chrome Canary versions later than 138.0.7175.0.
Notes for dev trial usage:
- The methods were recently renamed: `drawHTML` was previously `drawElement` and `texHTML2D` was formerly `texElement2D`. The rename will land shortly in Chrome Canary. The change was made at developers' request to avoid confusion with existing WebGL methods. The old names will continue to work until at least Chrome 145.
- The features are currently under active development and changes to the API may happen at any time, though we make every effort to avoid unnecessary churn.
- Not all personal information (PII) is currently prevented from being painted, so take extreme care to avoid leaking PII in any demos.
- The space of possible HTML content is enormous and only a tiny fraction has been tested with `drawHTML`.
- Interactive elements (such as links, forms, or buttons) can be drawn into the canvas, but are not automatically interactive.
Other known limitations:
- Cross-origin iframes are not rendered.
We are most interested in feedback on the following topics:
- What content works, and what fails? Which failure modes are most important to fix?
- Is necessary support missing for some flavors of Canvas rendering contexts?
- How does the feature interact with accessibility features? How can accessibility support be improved?
Please file bugs or design issues here.