[RFC] Forcing GitHub Gists into a blog-shaped hole

Well, I didn't know this was going to be as hard as it turned out to be. Sometime yesterday I decided that I didn't really want to manage a backend, and GitHub Gists were where my technical thoughts ended up anyway (and I didn't want to handle embeds myself, either), so why not just hit up Gists for my tech blog? That way, I'd get syntax-highlighted embeds, comments, version management, and more.

As with a lot of technical things, it turned out to be much more difficult than I expected, mostly because of embeds and the CORS policy on GitHub Gists.

The first step was to get my GitHub Pages site running locally, which I did with this convenient python3 one-liner: python -m http.server 8000. Loading up http://localhost:8000/ gave me a look at my static files. I set up a bare-bones HTML file with some script tags holding the majority of the logic.

I knew I could access GitHub Gists through some sort of API, but I wasn't sure what I'd get from it. Shouldn't be a problem, right? Sure enough, it was as easy as hitting the public API endpoints and getting back JSON. One hitch was that the gists endpoint returned a list of files, but not the contents of the files themselves. Those I would have to request from the URL stored in raw_url on a file-by-file basis. Still, I thought that was easy enough, but at the time I wasn't thinking about embedded gists with syntax highlighting.
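
Roughly, that two-step dance looks like this - a minimal sketch, where listRawUrls is just an illustrative name and not something from the attached file:

```javascript
// Sketch: the gists listing gives per-file metadata, but each file's content
// has to be fetched separately from its raw_url.
async function listRawUrls(username) {
  const response = await fetch(`https://api.github.com/users/${username}/gists`)
  const gists = await response.json()

  // Each gist has a `files` object keyed by filename; the content itself
  // lives behind that file's raw_url.
  return gists.flatMap(gist =>
    Object.values(gist.files).map(file => file.raw_url)
  )
}

listRawUrls('briankung').then(urls => console.log(urls))
```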

Ignoring all the pipe-fitting needed to massage the data just the way I wanted, I got the raw files back only to realize that what I actually wanted was: 1) the Markdown files parsed into HTML, and 2) the embeds handled by GitHub. Then I would simply append them to the body of the document. The first was easy - I found marked by way of the deprecated github-flavored-markdown library, found a CDN link for it, and threw it in a script tag.
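
The Markdown half boils down to something like this (a sketch that assumes the CDN build exposes a global marked() function - newer releases prefer marked.parse()):

```javascript
// Render a fetched .md file to HTML and stage it in a <section>.
// Assumes the CDN build of marked exposes a global `marked()` function;
// newer releases expose `marked.parse()` instead.
const markdownText = '# Hello\n\nThis is a *gist-powered* post.'
const section = document.createElement('section')
section.innerHTML = marked(markdownText)
document.body.append(section)
```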

As for the syntax-highlighted embeds, it turns out that there is a way to request the HTML and stylesheets for a gist, and what's more, you can request a single file from a gist, using the non-API domain (https://gist.github.com instead of https://api.github.com/gists). At first, I didn't read closely enough and simply queried the non-API domain for JSON. This returned a JSON object that had, among other things, a div member containing the embed HTML I wanted to append to my HTML body. However, when I tried that, I ran into a Cross-Origin Resource Sharing (CORS) error - for security reasons, a script can't read a response from a non-originating domain unless that server explicitly allows it, and gist.github.com doesn't.
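
This is roughly what I tried first, and where the browser shuts it down (the gist id and filename below are placeholders):

```javascript
// First attempt: fetch the embed JSON straight from gist.github.com.
// The response contains a `div` member (the embed HTML) and a `stylesheet`
// URL, but the browser blocks the read because gist.github.com doesn't send
// CORS headers. The gist id and filename here are placeholders.
fetch('https://gist.github.com/briankung/SOME_GIST_ID.json?file=example.rb')
  .then(response => response.json())
  .then(data => console.log(data.div, data.stylesheet))
  .catch(err => console.error('Blocked by CORS:', err))
```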

The answer was to use JSONP, provided the remote server supports it - which GitHub Gists does. You just tack &callback=someCallbackName onto the gist URL and load it via a script tag; the response then executes the someCallbackName callback on the client with the resulting JSON passed into the function. It's the equivalent of the server sending back window.someCallbackName(jsonData).
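
A bare-bones sketch of that dance - handleGist, the gist id, and the filename are all placeholder names, not anything from the attached file:

```javascript
// JSONP in a nutshell: <script> tags aren't subject to CORS, so ask the server
// to wrap its JSON in a call to a function we've already defined globally.
window.handleGist = function (gistData) {
  // gistData.div holds the syntax-highlighted embed HTML;
  // gistData.stylesheet holds the URL of the CSS it needs.
  document.body.insertAdjacentHTML('beforeend', gistData.div)
}

const script = document.createElement('script')
// Placeholder gist id and filename; note the callback parameter at the end.
script.src = 'https://gist.github.com/briankung/SOME_GIST_ID.json?file=example.rb&callback=handleGist'
document.body.appendChild(script)
```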

However, I hadn't used JSONP up until this point, and I didn't realize how much of a pain it could be. I was executing most of the logic in an asynchronous function cleverly called main, because I wanted to play with async / await instead of heavily nested fetch(...).then(doSomething) statements. This introduced a race condition: when I attempted to append the elements to document.body, the array of elements would still be empty, because the async work hadn't resolved yet (even though it looked like it should have - for some reason the await keyword wasn't doing what I expected it to).

Side note: since I was writing this in ES6, I obviously don't care about older browsers...or even newer browsers. I just want to see it on my own browser. This is a blog, but fuck you to anyone who isn't running exactly the same software as me 🖕

...Maybe I'll add a babel transpilation step later.

Anyway, I ended up doing something horrendous to handle it:

  1. Because JSONP needs a global callback to handle the response from the server, I had to define one on the window object. This took the form of window[callbackName] = function (gistData) {...} when handling non-Markdown gist files.

  2. Because I am a bad and lazy person, I decided to redefine this global callback on a per-file basis in the same lexical block of code that handled the array I was mutating, so that it would have access to bodyContents and section. There is certainly a better way to do this, but I will delegate that to Future Brian. I honestly don't have too much faith in him, either.

    PS, bodyContents was the array I was storing created elements in, and I created a <section> element per gist file (Markdown or otherwise) to store the HTML. The idea with the sections was that the HTML element hierarchy for a gist would go article -> sections, where article represented a gist and a section represented a file in that gist - and yes, gists support multiple files.

  3. Finally, that pesky race condition was still giving me problems. My solution was to steal this simple debounce function (hey, Gist buddies!) and throw the mutations to the document body into a debounced function. Debouncing a function just means that it holds off on executing until some specified time after its last invocation - 250ms by default here. So if I say "hey, do thingA...do thingA...do thingA..." the function will only execute once, some time after the last invocation (there's a readable sketch of the idea right after this list). This is important because, to be frank, shit was going to be going down everywhere, and I wasn't sure in what order. I had async functions up the wazoo, HTTP requests for every gist and file, and who knows when what was going to resolve.

    Through testing, I was reasonably sure that the window callbacks were invoked only after the async main function resolved, so I tacked the debounced fillBody call onto the end of the window callbacks. That, plus the debounce, should ensure it was the last thing called, after all my DOM nodes were constructed, so I could safely use it to fill in the body of the document. Another benefit of the debounce is that I could afford to call the function as many times as I wanted while it only executed once. If I called a naive fillBody function four times, I would have 4 copies of all my blog posts pasted to the page.
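
For anyone squinting at the minified one-liner in the attached file, here's a readable expansion of the same idea (my untangling, not code from the original debounce gist):

```javascript
// A readable equivalent of the minified debounce: every call resets the timer,
// so `fn` only runs once things have been quiet for `wait` milliseconds.
function debounce(fn, wait = 250) {
  let timeoutId
  return (...args) => {
    clearTimeout(timeoutId)
    timeoutId = setTimeout(() => fn(...args), wait)
  }
}

// Usage: call it as often as you like; only the last call in a quiet period
// actually appends the staged elements to the page.
const fillBody = debounce(contents => contents.forEach(c => document.body.append(c)))
```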

There were a few other fun/frustrating things about this, like filtering out random gists from blog posts, but the above was what I spent the majority of the time on. I'll attach the file below for your viewing pleasure - it probably won't work too well, since I'm not including CSS files on a per-file basis (and I suspect GitHub optimizes by tree-shaking the CSS to remove rules that aren't used in a given embed). But at least it works (kind of) on my local server at the moment.

I should probably just use a framework like Vue to manage data and reactivity better, but...ah well. Ship it! 🚢

EDIT - and of course things are still out of order on first load. Sounds like a job for...Future Brian! 😂

EDIT - got it. It turns out that map, forEach, and other higher-order functions fire their async callbacks without waiting for them, so with urls.forEach(async (url) => { ... }) the awaited work inside can finish in any order. I just replaced them with plain for (x in y) loops, which actually pause at each await before moving on.
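
A tiny illustration of the difference (not part of the attached file - the URLs are placeholders, and I'm iterating with for...of here):

```javascript
// Hypothetical list of raw gist file URLs, just to show the ordering issue.
const urls = ['https://example.com/a.md', 'https://example.com/b.md']

// forEach fires every async callback immediately and doesn't wait for any of
// them, so the responses land in whatever order the network decides:
urls.forEach(async (url) => {
  const text = await (await fetch(url)).text()
  console.log('forEach finished:', url, text.length)
})

// A plain loop pauses at each await, so the files are handled strictly in order:
async function processInOrder() {
  for (const url of urls) {
    const text = await (await fetch(url)).text()
    console.log('loop finished:', url, text.length)
  }
}
processInOrder()
```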

```html
<!DOCTYPE html>
<html lang="en" dir="ltr">
  <head>
    <meta charset="utf-8">
    <title>briankung.xyz - technical musings</title>
    <!-- GitHub's gist embed stylesheet, so the injected embeds are styled -->
    <link rel="stylesheet" href="https://assets-cdn.github.com/assets/gist-embed-8c6ade0d3779da026346afb9bf324f67.css">
  </head>
  <body>
    <!-- Intended structure per gist:
    <article>
      <header></header>
      <section></section>
      <footer></footer>
    </article> -->
  </body>
</html>
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
<script type="text/javascript">
  const GISTS_URL = 'https://api.github.com/users/briankung/gists'
  let gists, gistFileUrls, bodyContents
  bodyContents = []

  // Minified debounce: returns a wrapper that only calls `a` once things have
  // been quiet for `b` (default 250) milliseconds.
  const debounce = (a, b = 250, c) => (...d) => clearTimeout(c, c = setTimeout(() => a(...d), b))

  // Debounced so it only appends once, after the last burst of calls.
  const fillBody = debounce((contents) => {
    contents.forEach(c => document.body.append(c))
  })

  async function main () {
    // Grab all of my public gists and keep only the ones tagged "RFC".
    const response = await fetch(GISTS_URL)
    gists = await response.json()
    gists = gists.filter(gist => /\brfc\b/i.test(gist.description))

    // One array of raw file URLs per gist.
    gistFileUrls = gists.map((gist) => {
      return Object.entries(gist.files).map(([_, obj]) => obj.raw_url)
    })

    // NB: forEach returns undefined, so this `await` doesn't actually wait for
    // the per-file work to finish - hence the race condition described above.
    await gistFileUrls.forEach(async (gistUrls) => {
      await gistUrls.map(async (url) => {
        const response = await fetch(url)
        const text = await response.text()
        const gistId = url.match(/briankung\/(.+)\/raw/)[1]
        const fileName = url.split('/').pop()
        const section = document.createElement('section')

        if (url.endsWith('.md')) {
          // Markdown files: render to HTML locally with marked.
          section.innerHTML = marked(text)
          bodyContents.push(section)
        } else {
          // Everything else: ask gist.github.com for the embed HTML via JSONP.
          const callbackName = `cb${gistId}${fileName.replace('.', '')}`
          const gistJsonpUrl = `https://gist.github.com/briankung/${gistId}.json?file=${fileName}&callback=${callbackName}`
          const script = document.createElement('script')
          const div = document.createElement('div')

          script.src = gistJsonpUrl
          div.id = `${gistId}${fileName}`
          bodyContents.push(div) // placeholder, swapped out once the JSONP callback fires

          window[callbackName] = function (gistData) {
            const i = bodyContents.indexOf(div)
            section.innerHTML = gistData.div
            bodyContents[i] = section
            fillBody(bodyContents)
          }

          document.body.appendChild(script)
        }
      })
    })
  }

  main()
</script>
```