web-platform-test-7.1

Context

These notes and questions summarize my analysis of Section 7.1: Browsing Contexts in the WHATWG HTML spec and its related WPT test coverage in html/browsers/windows/.

While I'm a long-time spec consumer and standards-based web dev, this is my first project contributing to the https://github.com/w3c/web-platform-tests project.

General Test Running and Authorship Questions

These are notes or questions that arose from examining the tests for 7.1 but apply more broadly to the WPT infrastructure and conventions.

  • Most of the naming conventions I understand (via docs and some perusing of tools submodule), but what is the purpose of sub in filenaming schemes?

  • Running the test at html/browsers/windows/browsing-context-first-created.xhtml gives me inconsistent results. When run from the test runner, it always has a 2/4 pass rate (that's consistent, at least). When run directly (http://web-platform.test:8000/html/browsers/windows/browsing-context-first-created.xhtml), sometimes I get a 3/4, other times a 2/4. For reference, it's the history.length test that varies. Is it not kosher to run tests directly, or is this inconsistency worth noting? I get consistent results (albeit different ones) running the tests via both methods in desktop Firefox.

  • Based on html/browsers/windows/support-open-cross-origin.sub.html, it looks like there might be some templating available (e.g. {{location[scheme]}}). I haven't yet been able to locate docs or source for this, though. (I take a guess at the usage in the first sketch after this list.)

  • When I run manual tests via the test-runner web page, I see the pass/fail dialog UI overlay, but the manual test itself is not presented to me. The only way I've found to run them is to hit them directly in another window (via URL) and then come back to the test runner and hit pass/fail. Am I missing anything here?

  • Conceptual question WRT spec algorithms: While implementation results can be observed by looking at resulting objects via exposed APIs, how much investment is put in WPT into determining that steps in an algorithm run appropriately in sequence? That is, it seems like it'd be impossible to validate that a browser executes step 6 (set the window's associated document to document) before step 7 (updating the [[Window]] slot in the context's WindowProxy); all we can do is look at the manifestations of the resulting browsing context (via global objects, e.g.) and infer what we can, right? Where is the line here of what is testable and what's not?

    I see how one could test the completion of step 14 (add current document to session history) by looking at the resulting history, which is exposed, but is there any way (or purpose) to demonstrate that this happened after, say, the document's origin was set (step 8)? Or that sandboxing was set up correctly and in the proper order? I imagine these sorts of things are part of the puzzle overall but wanted to get a foundational sense to start from. (The second sketch after this list shows the kind of manifestation-level check I mean.)
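Re: the templating question above, here's my guess at how a .sub.html support file uses substitution, based on skimming support-open-cross-origin.sub.html. This is a sketch only; the {{domains[www]}} and {{ports[http][0]}} token names are my assumption about what wptserve supports, not something I've confirmed in docs:

```html
<!-- hypothetical-support.sub.html (name made up): wptserve appears to
     replace {{...}} tokens at serve time, so a support file can build a
     cross-origin URL without hard-coding the test server's host/port. -->
<script>
  var crossOriginUrl = "{{location[scheme]}}://{{domains[www]}}:{{ports[http][0]}}" +
      "/html/browsers/windows/support-close.html";
  document.write('<a target="spam" href="' + crossOriginUrl + '">cross-origin link</a>');
</script>
```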
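And re: the algorithm-sequence question, this is the kind of manifestation-level check I mean: asserting on the observable result, not the step order. A sketch (unvetted; I'm assuming /common/blank.html as a blank same-origin target):

```html
<!doctype html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<body>
<script>
async_test(t => {
  const iframe = document.createElement("iframe");
  iframe.onload = t.step_func_done(() => {
    // The *result* of steps 6-7 is observable: the WindowProxy's
    // associated document is reachable and consistent. Whether step 6
    // ran before step 7 is not observable from here.
    assert_equals(iframe.contentWindow.document, iframe.contentDocument);
  });
  iframe.src = "/common/blank.html";
  document.body.appendChild(iframe);
}, "window/document association is observable; its step order is not");
</script>
```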

Section 7.1 Browsing Contexts Specification Notes

Discarded browsing contexts and documents

A Document's browsing context is the browsing context whose session history contains the Document, if any such browsing context exists and has not been discarded.

Is it possible for a browsing context to exist and be discarded? Section 7.3.4 Garbage collection and browsing contexts:

When a browsing context is discarded, the strong reference from the user agent itself to the browsing context must be severed, and all the Document objects for all the entries in the browsing context's session history must be discarded as well.

That is, is it possible for a Document to exist once its BrowsingContext has been discarded? Is there an edge case where it could happen?

Creator attributes

In the browsing-context-creating algorithm, several properties are referenced on the creator context (if any exists), including:

creator context security: The result of executing Is environment settings object a secure context? on creator document's relevant settings object

creator context security is never mentioned again in the spec. Does it have a side effect or purpose here?

Notes and Questions on Tests in html/browsers/windows

Summary: JS/harness test that creates a new browsing context via an iframe and examines properties against what should exist on a newly-created browsing context per spec.

Potentially pedantic: This document has XHTML validation errors. Any value in making it HTML instead?

According to the spec (non-normative note), iframes typically create new browsing contexts. Is there any concern with having this test be entirely dependent on an iframe-created browsing context? Is there any value to extending this test (or creating an additional, related one) to test some of these properties on a browsing context created in a different way?

Useful to add tests here for some of the properties set up via the creator browsing context? (This test's browsing context does have a creator; see the sketch after the quotes below.) Per the spec:

If browsingContext has a creator browsing context, then the origin of document is the creator origin.

(...)

If browsingContext has a creator browsing context, then set document's referrer to the serialization of creator URL.
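For instance, something along these lines (a sketch of what I mean, not vetted; the referrer assertion in particular would need checking against referrer-policy interactions, and I'm assuming /common/blank.html as a same-origin target):

```html
<!doctype html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<body>
<script>
async_test(t => {
  const iframe = document.createElement("iframe");
  iframe.onload = t.step_func_done(() => {
    const doc = iframe.contentDocument;
    // "the origin of document is the creator origin"
    assert_equals(doc.location.origin, location.origin);
    // "set document's referrer to the serialization of creator URL"
    assert_equals(doc.referrer, location.href);
  });
  iframe.src = "/common/blank.html";
  document.body.appendChild(iframe);
}, "iframe-created context reflects creator origin and creator URL");
</script>
```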

Summary: Manual test. HTML contains an a[rel='noreferrer'] with a named target (target=reallydoesnotmatter). Links to support-close.html, which in turn presents a button with onclick=window.close(). The manual test should pass if clicking the button closes the window.

I can see that this test verifies that a separate browsing context was created, by dint of window.close appropriately closing the opened window (and not another window, or multiple windows), but I can't immediately see a direct link to content in the spec itself under 7.1.

Summary: Manual test. HTML contains four links (a), split into two groups of two each. All four have cross-origin href values. The first two share a named target but have different hrefs (example.com vs. example.org). If the links are clicked in order, the second should replace the first in the same context (window).

The second two links share the same cross-origin href and the same named target attribute values, but are rel=noreferrer. Despite sharing name and href, these should generate two browsing contexts because of the noreferrer.

At first blush, this test seems like it might be testing behavior described in subsequent sections like 7.1.5—it's relevant testing to be sure, but is it in the right spot?

Summary: Based on my understanding of test naming conventions, I'm not yet clear on how this test would get run, but maybe it has something to do with the sub naming? It appears to be a manual test.

This test combines a cross-origin, rel=noreferrer a with a named target and tests that the name is retained/respected in the resulting context.

Again, at first glance, this seems to test behavior in later subsections, but I'm new to this...

Summary: JS/harness test. This test creates two hyperlinks (via DOM API) and appends them to the document. Both hyperlinks share the same values for target and both have rel=noreferrer. Their href values differ only in the hash (#) at the end. Both link to support-named-null-opener.html (with the appended hash). support-named-null-opener.html in turn stuffs a value into localStorage and closes itself. Because the documents (the test and support-named-null-opener.html) are same-origin, localStorage can be used as a place to stash state, which the test then checks to make sure that two windows (contexts) were created.

  • While the localStorage use in the supporting file and event listeners for it make sense, I don't understand the usage of localStorage.setItem("x", "close") in the test.

Once again, I see what this is testing but am having trouble drawing connections to content in sec 7.1 strictly speaking.
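For my own notes, the coordination pattern as I understand it (a sketch of the mechanism, not the actual test source; the key names are made up):

```js
// Test page: wait for both opened windows to report in via the shared
// same-origin localStorage, using `storage` events (which fire in other
// same-origin browsing contexts, not the one that called setItem).
const seen = new Set();
window.addEventListener("storage", evt => {
  if (evt.key && evt.key.indexOf("context-") === 0) {
    seen.add(evt.key);
    if (seen.size === 2) {
      // Two distinct keys means two distinct browsing contexts ran,
      // despite the links sharing a target name.
    }
  }
});

// Each opened support page (sketch): stash a marker keyed off its own
// URL hash, then close.
//   localStorage.setItem("context-" + location.hash, "loaded");
//   window.close();
```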

Summary: JS/harness test. This test creates a noreferrer hyperlink with target=_blank linking to support-opener-null.html, and triggers it. support-opener-null.html stashes the current value of window.opener in localStorage, and the test inspects it (it should be null).

7.1 does touch on what happens in a new browsing context in the absence of a creator browsing context (window.opener being null speaks to this: no creator browsing context here). There seems to be some crossover here with behavior outlined in sec 7.1.5?
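If I'm reading the support file right, its job reduces to something like this (a sketch; the key name is a guess):

```js
// support-opener-null.html (sketch): record what the new context sees as
// its opener, then close. With rel=noreferrer on the triggering link,
// window.opener here is expected to be null.
localStorage.setItem("opener-value", String(window.opener)); // expect "null"
window.close();
```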

Summary: JS/harness test. This test looks at the top and name values of a browsing context created by nesting an iframe.

  • Does this more accurately test stuff in sec 7.1.1 Nested Browsing Contexts or is it indeed cogent here?
  • Would this test benefit from more robust test cleanup? There is a condition in which windows wouldn't close themselves if not all messages came in. (testharness.js's add_cleanup seems like a fit; sketched below.)
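A minimal sketch of that cleanup idea, using testharness.js's add_cleanup (the support file name is illustrative):

```js
async_test(t => {
  const win = window.open("support-page.html"); // illustrative name
  // Guarantee the auxiliary window closes even if the expected messages
  // never arrive and the test times out or fails early.
  t.add_cleanup(() => {
    if (win && !win.closed) {
      win.close();
    }
  });
  // ... message-driven assertions would go here ...
}, "cleanup sketch");
```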

This test and the one that follows aren't as immediately legible/self-documenting as the other tests in this section and might benefit, when/if they're touched, from some commenting or clarity.

Summary: Manual test. This test combines cross-origin href values with overlapping named target values. There are three links in total but only two target names; you nonetheless get three windows because of the cross-origin nature of the URLs involved.

The same questions stand here as for some of the previous tests: the tests make sense, but I'm curious whether they belong in a different subdirectory?

Notes on Test Coverage and Possible Action Items

  • There seems to be a real emphasis on noreferrer tests. Curious how that arose!
  • It seems that origin values could be tested in this area by looking at the computed domain property: the origin should be the creator's origin for contexts with a creator context, and an opaque origin for those without. Or should these tests be in the 7.5 Origin section? In any case, I don't see tests for the lines in this part of the spec concerned with origin. (A rough sketch follows this list.)
  • Some of the creator-context attributes (URL, baseURL) that have relevance to created browsing contexts seem like they could be further tested.
  • Any value in testing a situation in which no browsing context is created? Or is that coming at things backward?
  • Any value in testing referrer-policy-related things by using other mechanisms for setting referrer policy (meta element, referrerpolicy attribute)?
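Re: the origin bullet above, a rough sketch of the domain-based checks I have in mind (assuming /common/blank.html; the sandboxed case observes opaqueness indirectly, since contentDocument just comes back null cross-origin):

```html
<!doctype html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<body>
<script>
async_test(t => {
  const iframe = document.createElement("iframe");
  iframe.onload = t.step_func_done(() => {
    // Context with a creator: effective domain should match the creator's.
    assert_equals(iframe.contentDocument.domain, document.domain);
  });
  iframe.src = "/common/blank.html";
  document.body.appendChild(iframe);
}, "created context's domain matches its creator's");

async_test(t => {
  const iframe = document.createElement("iframe");
  iframe.setAttribute("sandbox", ""); // no allow-same-origin: opaque origin
  iframe.onload = t.step_func_done(() => {
    // Opaque origin makes the frame cross-origin to its creator, so the
    // parent can't reach the document at all.
    assert_equals(iframe.contentDocument, null);
  });
  iframe.src = "/common/blank.html";
  document.body.appendChild(iframe);
}, "sandboxed context gets an opaque origin (cross-origin to creator)");
</script>
```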
@jugglinmike

Hi Lyza,

Some of your questions are more about prioritization than technical ambiguity.
I've mostly avoided commenting on those because I assume they are intended for
your client.

Most of the naming conventions I understand (via docs and some perusing of
tools submodule), but what is the purpose of sub in filenaming schemes?

See http://web-platform-tests.org/writing-tests/server-features.html

Based on html/browsers/windows/support-open-cross-origin.sub.html , it looks
like there might be some templating available (e.g. {{location[scheme]}} ). I
haven't yet been able to locate docs or source for this, though.

I believe this is also covered in http://web-platform-tests.org/writing-tests/server-features.html

When I run manual tests via the test-runner web page [...] Am I missing
anything here?

I can't say; I haven't spent any time with the manual tests.

how much investment is put in WPT into determining that steps in an algorithm
run appropriately in sequence? [...] all we can do is look at the
manifestations of the resulting browsing context (via global objects, e.g.)
and infer what we can, right? Where is the line here of what is testable and
what's not?

I don't think there is a hard-and-fast rule. In Test262, I slowly built up an
intuition about what was observable and what wasn't. As that improved, it
became easier to recognize the opportunities for algorithm sequence testing.

As for "investment," that's a question for your client. These subtle details
are less likely to be observed in the wild, but I don't think that makes them
any less relevant for testing purposes.

That is, is it possible for a Document to exist once its BrowsingContext has
been discarded? Is there an edge case where it could happen?

I may be misunderstanding the question, because based on the quoted text (in
particular, the use of the word "must"), the answer seems to be, unequivocally,
"no."

creator context security is never mentioned again in the spec. Does it have a
side effect or purpose here?

This is likely defined for use in other specifications.

Is there any value to extending this test (or creating an additional, related
one) to test some of these properties on a browsing context created in a
different way?

There's value in a test like that, but it should probably be organized
according to the section that specifies the creation. Here, we're interested
in what it means to be a browsing context, not in any particular way one comes
about. Think of the iframe as a "test implementation detail." It's a little
more specific than the spec requires, but there is no more "pure" alternative
available to the runtime, like, say, a BrowsingContext constructor. This is
sort of like how tests for the paragraph element might rely on
document.createElement.
