1. Identify all headings in a page, on page load
2. Generate a string from each heading's text
- Should be URL-friendly, as it will be part of a URL.
- Should not start with a number, as that would make it an invalid `id`.
- Should be suffixed with unique values (`-1`, `-2`) in case two or more headings have the same text. To keep it as clean as possible we should only do this when duplicates are detected. (We cannot use random string generators like `uuid`, as the links would not be the same when visiting the page again.)
- Should be generated by a function that's properly tested with normal and edge-case scenarios, which always gives us what we expect. The generated `id`/fragment should always be the same given the same input.
- One should be able to limit the script to headings within a certain element type (i.e. `article`).
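The requirements above can be sketched as two small pure functions. This is a minimal, illustrative sketch; the names `slugify` and `dedupe` are assumptions, not from any particular library:

```javascript
// Turn heading text into a URL-friendly, deterministic fragment.
const slugify = (text) =>
  text
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '') // strip characters that are not URL-friendly
    .replace(/\s+/g, '-')         // whitespace becomes hyphens
    .replace(/^[0-9-]+/, '');     // an id must not start with a number

// Append -1, -2, ... only when duplicates are detected,
// so unique headings keep clean slugs.
const dedupe = (slugs) => {
  const seen = {};
  return slugs.map((slug) => {
    if (seen[slug] === undefined) {
      seen[slug] = 0;
      return slug;
    }
    seen[slug] += 1;
    return `${slug}-${seen[slug]}`;
  });
};
```

Because both functions are deterministic, the same heading text always yields the same fragment across page loads, which is exactly why `uuid`-style generators won't do.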
3. Set it as the heading's `id`, *but only if it doesn't already have an `id`*
- The observation in italics is important, as we don't want to break the functionality of elements that have already had their IDs generated, like `h2`s.
- This should only happen after we have given IDs to elements through other scripts!
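A sketch of this step, written so the core logic also runs on plain element-like objects (the inline `slugify` is a stand-in for the tested function from step 2):

```javascript
// Stand-in for the slug generator from step 2.
const slugify = (text) =>
  text.toLowerCase().trim()
    .replace(/[^a-z0-9\s-]/g, '')
    .replace(/\s+/g, '-')
    .replace(/^[0-9-]+/, '');

// Accepts any list of objects with `id` and `textContent`,
// so the logic can be tested outside the browser.
const assignIds = (headings) => {
  headings.forEach((heading) => {
    // Only set an id if the element doesn't already have one,
    // so ids assigned by other scripts are preserved.
    if (!heading.id) {
      heading.id = slugify(heading.textContent);
    }
  });
  return headings;
};

// In the browser, limit the scope to headings within a given element type:
if (typeof document !== 'undefined') {
  assignIds(document.querySelectorAll('article h1, article h2, article h3'));
}
```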
Steps 1, 2 and 3 give us what we need to link directly to any element that accepts an ID on a given page.
4. Construct the full URL by getting the current page's location and combining it with what we got in step 2: `pageUrl + '#' + fragment`, which gives us something like `https://www.example.com/page#fragment`.

The `fragment` part here uses the very same function that generated the `id`, which was in turn assigned to its respective heading.
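This step is a one-liner. The sketch below also strips any existing hash, an assumption that keeps the result stable when the page is visited with a fragment already in the address bar:

```javascript
// Combine the page URL with the generated fragment.
const buildHeadingUrl = (pageUrl, fragment) =>
  `${pageUrl.split('#')[0]}#${fragment}`;

// In the browser, pageUrl would come from the current location:
if (typeof window !== 'undefined') {
  console.log(buildHeadingUrl(window.location.href, 'some-heading'));
}
```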
5. Create some way to copy the full URL the heading gets, since it now has an `id` we can refer to

We could for instance have a "link" icon that is shown when we hover over the heading. This icon would contain the full URL that leads to that element. Headings in GitHub README files are a live example of such an implementation: https://github.com/dreamyguy/moments.

More about fragments: https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Identifying_resources_on_the_Web#Fragment

- Links need to be copied manually from the page they are generated on, because we cannot guarantee that the `id` assigned to a heading is derived from the heading's text. An `id` could have already been assigned to it (point 3).
- Links are feeble. If the text in the heading changes, the generated/assigned `id` will be different and the full URL will change. That's desired, as we always generate the IDs dynamically on page load.
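A sketch of the hover-link idea. The class name `heading-link` and the CSS that reveals it on hover are assumptions; the anchor factory is injected so the logic can be exercised without a browser:

```javascript
// Combine the page URL with a heading's id (same helper as in step 4).
const buildHeadingUrl = (pageUrl, fragment) =>
  `${pageUrl.split('#')[0]}#${fragment}`;

// Append a "link" anchor to each heading that already has an id.
const addLinkIcons = (headings, pageUrl, createAnchor) => {
  headings.forEach((heading) => {
    if (!heading.id) return; // nothing to link to
    const anchor = createAnchor();
    anchor.className = 'heading-link'; // shown on hover via CSS (illustrative)
    anchor.href = buildHeadingUrl(pageUrl, heading.id);
    heading.appendChild(anchor);
  });
};

if (typeof document !== 'undefined') {
  addLinkIcons(
    document.querySelectorAll('article h2[id], article h3[id]'),
    window.location.href,
    () => document.createElement('a')
  );
}
```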
6. To correctly jump to IDs, we should delay the jumping long enough to allow all IDs to be generated and assigned to their headings
- We wait for the page to fully load.
- We set a delay through `setTimeout`, so everything gets generated and assigned. We're talking milliseconds.
- We extract the `fragment` from the URL so that we can jump to the relevant heading `id` once we know where the relevant heading is on the page.
- We identify all headings through their IDs.
- We find each heading's position relative to the top of the page.
- At this point we know which heading matches the `fragment` in the full URL.
- We do the jump, as we now know where the heading is!
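The steps above can be sketched as follows. The 100ms delay is an illustrative value, not a recommendation, and `scrollIntoView` stands in for any scroll mechanism that uses the heading's position on the page:

```javascript
// Pull the fragment out of a full URL ('' when there is none).
const getFragment = (url) => {
  const hashIndex = url.indexOf('#');
  return hashIndex === -1 ? '' : url.slice(hashIndex + 1);
};

if (typeof window !== 'undefined') {
  // Wait for the page to fully load...
  window.addEventListener('load', () => {
    // ...then give the id-generating scripts a few milliseconds to finish.
    setTimeout(() => {
      const fragment = getFragment(window.location.href);
      if (!fragment) return;
      // The heading is identified through the id it was assigned earlier.
      const heading = document.getElementById(fragment);
      if (heading) {
        heading.scrollIntoView(); // jump to the heading's position
      }
    }, 100);
  });
}
```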