
Ready for Network-based 3D

This is an anniversary piece to commemorate two years since the conception of Construct. It is also a call to action for everyone interested in participating in the next wave of real-time 3D.

WebGL is now officially supported on every platform that matters, and there has been a lot of movement towards creating concept 3D applications in the browser. These past years we have all experimented, created libraries, given talks and written articles. But all of this is still not enough.

The current state is problematic

For the general public, browser games still carry the stigma of their "guilty" past of being lo-fi and only mildly engaging; they have yet to prove that they can deliver an experience as captivating as regular AAA games.

The problem starts with tech aficionado "supporters" trying to own the innovation. Various products have surfaced that lock developers into their proprietary ways. This goes against the basic principles of how the Web is built, and similar attempts to fight open standards have failed many times in the past. There is no question that these products will meet the same fate.

After talking with developers on different occasions, there seems to be a problem with telling a groundbreaking technology apart from a technology fad. WebGL gets put in the same basket as Flash and Chrome's Native Client, when it should be placed on the same level as DirectX and OpenGL. Like its counterparts in native programming, it provides the foundation for building 3D engines, using a limited yet specific set of instructions. And on the Web, the 3D engine interfaces with JavaScript.
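
As a rough, heavily simplified sketch of that relationship, this is how JavaScript reaches the low-level API today; the canvas element and the clear call are only illustrative:

    // Minimal sketch: JavaScript talking to the low-level WebGL API,
    // much like native C/C++ code talks to OpenGL or DirectX.
    var canvas = document.createElement('canvas');
    document.body.appendChild(canvas);
    var gl = canvas.getContext('webgl'); // the raw rendering context
    if (gl) {
      gl.clearColor(0.1, 0.1, 0.1, 1.0); // a "limited yet specific" instruction...
      gl.clear(gl.COLOR_BUFFER_BIT);     // ...issued straight to the GPU pipeline
    }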

Vested interests in existing platforms create a negative environment that still questions whether this is even necessary, pointing out the immature tooling and the relatively small market share. If you pay close attention, you can hear the old-timers shouting...

Why, why, why...

We already have consoles, and 3D on the desktop. Why do we need 3D on the browser?

Putting other arguments aside, a network-first 3D environment opens up possibilities that are required if we are to upgrade 3D worlds from static, on-rails experiences to dynamic, unpredictable environments. As in real life, virtual reality needs to feel out of our control if it is ever to be believable.

Closed ecosystems favor a small group of people, and any innovation is driven by private funding. The Web wouldn't be where it is today if it were running on proprietary software. Everyone understands the benefits of the Internet; we can only imagine how much applying the same openness will broaden the horizon for 3D...

Granted; but markup, seriously?... We need markup so that developing in 3D doesn't feel like a separate body of work. The reasons markup exists for flat 2D publishing are all just as valid for 3D publishing.

Bold predictions for a better future

A standardized 3D API that uses markup sounds like wishful thinking to most developers. Any online game created with WebGL today downloads a whole lot of custom libraries. And although this "technique" can be approximated with polyfill libraries, and the concept has been brought to the forefront by various frameworks, it is still far from a standard.
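
To make that concrete, here is a rough sketch of what "today" looks like, assuming Three.js has already been downloaded via a script tag; the scene, camera and geometry values are arbitrary:

    // Today: the 3D engine is a remotely hosted library, e.g. <script src="three.min.js">
    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = 3;
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
    // a simple cube, built imperatively in JavaScript rather than declared in markup
    var cube = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshBasicMaterial({ color: 0x336699 })
    );
    scene.add(cube);
    renderer.render(scene, camera);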

If we can agree on the prospect of the idea, let's imagine what needs to happen to make 3D in the browser an open standard. Basically, two things:

  1. Vanilla.js adopts the Three.js API. That should in turn allow the browser vendors to implement the API natively, gaining execution speed and eliminating the overhead of downloading remotely hosted libraries (a rough sketch follows this list).
  2. CSS3D uses WebGL in the background as its rendering engine, while keeping its syntax intact. That way we get the syntax of CSS with the performance of WebGL. Although CSS3D is performant for simple transformations, having two rendering engines is simply confusing.
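
To illustrate the first point, here is a purely hypothetical sketch. None of these globals exist in any browser today; the names simply mirror the Three.js API. If it were adopted natively, the scene above could be built with no library download at all:

    // Hypothetical: Three.js-style constructors exposed by the browser itself.
    var scene = new Scene();
    var camera = new PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = 3;
    var renderer = new WebGLRenderer();
    document.body.appendChild(renderer.domElement);
    scene.add(new Mesh(new BoxGeometry(1, 1, 1), new MeshBasicMaterial({ color: 0x336699 })));
    renderer.render(scene, camera);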

An official spec that makes WebGL part of the next HTML standard is merely common sense. Extending that to support basic primitives in markup and 3D attributes in CSS would only enrich the specification.

What's worth observing is that none of these events is necessary to guarantee the existence of WebGL markup. Even today, we can mix markup & CSS with WebGL using shims. It would just be mutually beneficial, for both the public and the developers, if we didn't have to use shims.
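
A minimal sketch of that shim approach, using the web components API that is now standard; the <gl-scene> tag name and its background attribute are made up for illustration:

    // A custom element that renders with WebGL behind a declarative tag.
    class GLScene extends HTMLElement {
      connectedCallback() {
        var canvas = document.createElement('canvas');
        canvas.width = this.clientWidth || 300;
        canvas.height = this.clientHeight || 150;
        this.appendChild(canvas);
        var gl = canvas.getContext('webgl');
        if (!gl) return;
        // read a declarative attribute and turn it into GL instructions
        var hex = this.getAttribute('background') || '#336699';
        gl.clearColor(
          parseInt(hex.slice(1, 3), 16) / 255,
          parseInt(hex.slice(3, 5), 16) / 255,
          parseInt(hex.slice(5, 7), 16) / 255,
          1.0
        );
        gl.clear(gl.COLOR_BUFFER_BIT);
      }
    }
    customElements.define('gl-scene', GLScene);
    // Usage in markup stays declarative: <gl-scene background="#336699"></gl-scene>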

This evolution agrees with other advancements of the Web platform:

  • Custom tags using web components have become a standard
  • Binary JSON can be used as a 3D delivery format
  • HTTP2 favors the parallel downloading of many files
  • IndexedDB allows local caching of blob data (sketched below)
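
As an example of the last point, here is a sketch of caching a downloaded binary asset in IndexedDB; the database name, store name and asset path are made up for illustration:

    // Cache a binary 3D asset locally so it is only downloaded once.
    function cacheModel(url) {
      var open = indexedDB.open('assets-cache', 1);
      open.onupgradeneeded = function () {
        open.result.createObjectStore('models'); // simple key/value store
      };
      open.onsuccess = function () {
        var db = open.result;
        fetch(url)
          .then(function (res) { return res.blob(); })
          .then(function (blob) {
            db.transaction('models', 'readwrite')
              .objectStore('models')
              .put(blob, url); // keyed by its URL
          });
      };
    }
    cacheModel('/assets/spaceship.glb'); // hypothetical asset path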

All signs show that 3D in the browser will become as common as regular 2D visuals. We should stop treating the browser as a digital newspaper and start treating it as a universal viewport where anything can be represented in a declarative way.

The browser is the engine

Moving forward, 3D should be owned by the browser vendors. They need to agree on a uniform API that extends markup tags and CSS to utilize WebGL, and thus establishes performant I/O throughput with the GPU.

"It is amazing what you can accomplish if you do not care who gets the credit." ― Harry S. Truman

This is how we should be thinking right now. A 3D API for the Web is bigger than any one individual or interested party. Just go ahead and make it happen, for the sake of standardizing declarative programming in 3D and integrating it with the rest of the Web APIs. Stop thinking about capitalizing on innovation and start making a difference.
