Case study and Work product - Redesigning Apache APISIX's website


The Apache Software Foundation

Hi, I am Shivam Singh, a student at the Indian Institute of Information Technology, Una (India), pursuing a Bachelor of Technology in Computer Science and Engineering.

UPDATE: Some changes have been made to the original website since then, so to see my specific redesign go to the URL mentioned below.
Updated - https://apisix-re.netlify.app/

UPDATE: As of October 5, 2021, my redesign has finally been deployed and is now live 🎉
Before my redesign - https://web.archive.org/web/20210810140337/https://apisix.apache.org/

This work product compiles all the work I did this summer with Apache APISIX, covering the technical as well as the design details in a concise yet detailed manner. I hope it helps you learn something new ~

Summary

This summer I worked alongside Apache APISIX on multiple tasks centrally focused on "Redesigning Apache APISIX's website".

Initially, the project was solely focused on redesigning and coding APISIX's landing page. But as the GSoC timeline progressed, the scope gradually grew. So much so that, alongside the assigned GSoC work, I ended up designing new pages for the website, reviewing documentation content additions, and helping out with design decisions across the website.

All of this has, in the end, resulted in a better website than the one I started contributing to at Apache APISIX, which feels satisfying and like a target achieved (though there is still more to come). Even though the work expected of me for GSoC has been completed, I still plan to do much more with Apache APISIX in the future.

To see all the code that I wrote in this period (alongside all the long weeks spent on design thinking and prototyping), check out the links in the Deliverables section.

Work details

At a high level, the work I did during this GSoC period can be categorised into two parts:

  • Design thinking (UX)
  • UX Engineering

The first (design thinking) covers the process I followed to come up with the designs for this project. My process focused on seeking out inspiration from the best work out there on the internet and mixing my own flavour into it, to arrive at something that is aesthetically striking, meaningful and equally original.

I got my designs reviewed by my mentors and moved forward with their feedback on what to improve, following an agile workflow and focusing on creating an awe-inspiring website for APISIX that reflects the quality of work I am able to produce.

The second and equally challenging part, actually engineering the website, was also an interesting journey. In terms of the tech stack, I worked with the following:

The layout and styling were done within a Docusaurus documentation-website wrapper using React.js, combined with CSS, JS and some HTML here and there.
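
To give a sense of how that fits together, below is a minimal, hypothetical sketch of a custom landing page living inside a Docusaurus site; the file path, markup and prop values are illustrative, not the actual apisix-website code.

    // src/pages/index.js — a hypothetical, minimal custom page inside a Docusaurus site.
    // The real apisix-website landing page is split into many more components than this.
    import React from "react";
    import Layout from "@theme/Layout"; // Docusaurus theme wrapper (navbar, footer, metadata)

    export default function Home() {
      return (
        <Layout title="Apache APISIX" description="Cloud-native API gateway">
          {/* Hero section: the WebGL canvas for the animated visual mounts here */}
          <section className="hero">
            <canvas id="hero-canvas" />
            <h1>Apache APISIX</h1>
          </section>
          {/* ...features, infographics, GitHub invite and other sections... */}
        </Layout>
      );
    }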

I coded WebGL shaders from scratch to create the enticing visual piece in the hero section of our new website. I also used Three.js to create a rotating spherical frame for the GitHub invite section of the website.
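
For a rough idea of what the rotating spherical frame involves, here is a minimal Three.js sketch; the geometry, material and camera values are illustrative and not the exact ones used on the site.

    // Minimal sketch: a slowly rotating spherical wireframe rendered with Three.js.
    // All sizes, colours and camera settings below are illustrative only.
    import * as THREE from "three";

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
    camera.position.z = 5;

    const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);

    // A sphere drawn as a wireframe "frame" rather than a solid mesh
    const sphere = new THREE.Mesh(
      new THREE.SphereGeometry(2, 24, 24),
      new THREE.MeshBasicMaterial({ color: 0xffffff, wireframe: true })
    );
    scene.add(sphere);

    function animate() {
      requestAnimationFrame(animate);
      sphere.rotation.y += 0.002; // slow, continuous spin
      renderer.render(scene, camera);
    }
    animate();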

As for GSAP, I mainly used it for the animated interactions and looping infographics on the website, and for tweening JS variable values to feed into the WebGL shaders. GSAP doesn't provide anything special in itself beyond the ability to write animation timelines in JS; the style or design of GSAP animations is not much different from CSS timelines, in fact it's almost the same, except that you can't write CSS timelines in JS explicitly.
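
As an illustration of the value-tweening part, a GSAP tween can animate a plain JS object and push the eased value into a shader uniform on every tick. The sketch below is a simplified stand-in; the uStrength uniform name is an assumption, not the site's actual code.

    // Sketch: tween a plain JS value with GSAP and feed it to a WebGL shader uniform.
    // `uniforms.uStrength` is a stand-in for the real hero-shader uniform.
    import gsap from "gsap";

    const uniforms = { uStrength: { value: 0.0 } };
    const params = { strength: 0.0 };

    gsap.to(params, {
      strength: 1.0,       // target value for the shader input
      duration: 1.5,
      ease: "power2.out",
      onUpdate: () => {
        // Copy the eased value into the uniform on every animation tick
        uniforms.uStrength.value = params.strength;
      },
    });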

Alongside all this, I also used the browser-native Intersection Observer API for animation triggering and performance improvements.

Multiple sections of the redesign have some heavy problem solving happening under the hood, firstly to achieve the desired interactions correctly and secondly to keep everything performant overall. I discuss this further in the next section, Challenges and learnings.

Challenges and learnings

Throughout the process of creating the redesign and coding it, I faced some challenges that were really interesting to solve and taught me a thing or two I didn't know earlier. I'll mention a couple of them here (mentioning them all would need a separate blog :P).

Hero section design

This was the biggest challenge for me. As a UX designer, I understand the significance of a website's hero section. It's the first thing a user sees when they visit your website, so it deserves the utmost attention in order to present a positive view of the product to the user.

I made a list of keywords like performance, open-source and traffic-management that relate to APISIX in some way. After days of brainstorming and endless browsing of the internet for inspiration, I came up with the idea of using flow fields as a metaphor for the smooth, frictionless and synchronised performance of APISIX. It was almost like one of those instances when an idea suddenly hits you and gives you that aha! moment.

Now, with some ideas in my head, I wrote a shader to procedurally generate a flow field from a 2-dimensional array of particles. The visual piece is essentially a particle system that flows according to the output of a simplex noise function and the position of the hovering mouse pointer.

I also implemented a whole system to map mouse positions in page units (pageX and pageY) over the WebGL canvas to the corresponding normalised device coordinates on the canvas, no matter the size or position of the canvas on the page. Discussing the full implementation would need another blog, but at a high level this system uses the getBoundingClientRect() function alongside some mathematical transformation to create the mapping, with GSAP used to smooth the values (tweened values) so there are no jerky movements.
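
To give a flavour of it, here is a rough sketch of that mapping; the function and variable names are mine, and it uses clientX/clientY together with getBoundingClientRect rather than the raw pageX/pageY values.

    // Rough sketch (not the site's exact code): map a mouse event to normalised
    // device coordinates, i.e. [-1, 1] on both axes, independent of the canvas's
    // size or position on the page.
    import gsap from "gsap";

    const canvas = document.querySelector("canvas"); // the hero canvas (assumed selector)

    function toNDC(event, el) {
      const rect = el.getBoundingClientRect();                         // canvas box in viewport units
      const x = ((event.clientX - rect.left) / rect.width) * 2 - 1;    // 0..1 -> -1..1
      const y = -(((event.clientY - rect.top) / rect.height) * 2 - 1); // flip Y: NDC +y points up
      return { x, y };
    }

    // Tween towards the latest pointer position so the shader input never jumps
    const pointer = { x: 0, y: 0 };
    window.addEventListener("mousemove", (e) => {
      const ndc = toNDC(e, canvas);
      gsap.to(pointer, { x: ndc.x, y: ndc.y, duration: 0.6, ease: "power2.out" });
    });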

Optimizing performance

Now everything was set up: the animations were working, the interactions were responsive and rendering was happening correctly. But I noticed that scrolling was jerky, and every now and then the scroll would simply jump, eventually leading to a bad experience for the user.

All of this was happening because a ton of things were running at the same time on the single thread used by the webpage. The site is honestly pretty heavy now, given that there are almost 6-7 looping animations running at once (whether they are visible or not) with 2 WebGL contexts rendering at all times. So one can understand that this needed some serious optimization, and I went ahead and did exactly that.

Firstly, I went about shedding the cost of rendering WebGL graphics when they are not visible on screen. For this I utilised the native Intersection Observer API to check whether the canvas is inside the viewport or not and render accordingly. The code snippet below is one example: it renders only when the canvas is at least 1% (0.01 * 100) inside the viewport; otherwise it discards the frame and stops animating.

  // Observe the hero canvas; the callback fires whenever at least 1% of it
  // enters or leaves the viewport.
  let canvasObserver = new IntersectionObserver(onCanvasIntersection, {
    root: null,        // observe relative to the browser viewport
    threshold: 0.01,   // trigger once 1% of the canvas is visible
  });

  function onCanvasIntersection(entries, opts) {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        if (isLoaded && !isRendering) {
          // Canvas is on screen and assets are loaded: start the render loop
          animate();
        } else {
          console.log("Loading");
        }
      } else if (animationFrame) {
        // Canvas has left the viewport: discard the pending frame and stop rendering
        cancelAnimationFrame(animationFrame);
        isRendering = false;
      }
    });
  }

I did the same for the other canvas too.

Secondly, I made the images on the website lazy-load. The resulting performance boost is small, but still not one to underestimate here.
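
The browser-native loading attribute is one way to get this behaviour (shown below as a generic example, not necessarily the exact markup used on the site):

    {/* Generic example: the browser defers fetching this image until the user scrolls near it */}
    <img src="/img/feature-diagram.png" alt="APISIX feature diagram" loading="lazy" />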

Finally, and most importantly, to control the animation of the looping infographics I again utilised the power of the Intersection Observer, this time to stop the animations once the infographics are off-screen. The infographic animations are basically SVG animations implemented using GSAP timelines, so we can control them from JS. Here's a code snippet that does that:

    // One observed element and one GSAP timeline (tweenTls[i]) per infographic
    const elems = [performance.current, security.current, scale.current, dynamic.current, multiplatform.current];

    for (let i = 0; i < elems.length; i++) {
        observers.push(new IntersectionObserver((entries) => {
            entries.forEach(entry => {
                // Play the matching timeline only while its SVG is visible
                tweenTls[i].paused(!entry.isIntersecting);
            });
        }, {
            root: null,      // observe relative to the viewport
            threshold: 0.2,  // at least 20% of the SVG must be visible
        }));
    }

    // Attach each observer to its corresponding SVG infographic
    observers.forEach((it, index) => {
        it.observe(elems[index]);
    });

    observers.forEach((it, index)=>{
        it.observe(elems[index]);
    });

Here, the intersection observers for all the SVGs are pushed into an observers array just to keep the code DRY. As can be seen, each timeline plays only when at least 20% (0.2 * 100) of its SVG infographic is in the viewport.

Also, I should mention that destroying all the resources we have created is a really good practice to keep memory usage in check. This cleanup function does that for the above example:

    return () => {
        // Stop observing all of the SVG infographics
        observers.forEach((it) => {
            it.disconnect();
        });
        // Rewind each GSAP timeline to the start and kill it
        tweenTls.forEach((it) => {
            it.pause(0).kill(true);
        });
    }

All of this led to an awesome 40% improvement in the website's performance benchmarks (Lighthouse). This challenge really taught me something I hadn't experienced before and added a new trick to my arsenal for future endeavours.

Deliverables

An improved website for Apache APISIX with a clarified and detailed product description, presented in as concise and fluff-free a way as possible.

As I was working on redesigning the APISIX website, all my changes to the original website needed to be merged at once to avoid breaking changes and a bad experience for site visitors. So I was working on a completely separate copy of the original apisix-website repo. For this reason my redesign PRs might still be in review as of the time of writing this work product. I will update this document to link to the commits once the changes are properly merged. For now, here are the PR links -

Also, to find the mockups on which the redesigned website is based, you can check out these Figma boards -

Until the PRs are merged into the project, you can experience the redesigned website at http://apisix-re.netlify.app/. Everything you see on the landing page (from the copywriting to the layout to the animations) was designed and engineered by me during this GSoC period.

Acknowledgements

This GSoC journey was a really interesting one for me. Thanks to my mentors Zhiyuan Ju, Shuyang Wu and Ming Wen, who gave me exactly the autonomy and freedom I needed to experiment on my own and deliver results that surpassed their expectations. It was quite the experience, and I hope to keep it going with Apache APISIX, helping them maintain the apisix-website as well as their other projects.
