
Magnum ECS Example - Part II

Continuing along the path of Part I, trying to accommodate most requirements of a CAD-like application using Magnum and data-orientation with EnTT.


Result

Here's what the code below currently looks like.

(screenshot: ecs3)


Build

No changes here.


Changes

  • No more Mesh::draw. Meshes are now plain-old-data, the actual logic being implemented by a system (see RenderSystem)
  • Global registry. I think it can make sense to have many, but in this case there was only one, and a global saves us from passing the registry to each system (see gRegistry)
  • Two-stage initialisation via "Template" and "Instance" components. The result is unhindered creation of instances and components, with interactions with the GPU - such as compiling shaders and uploading vertices - deferred until later, to happen in bulk (see SpawnSystem)

Problems

  • Mesh::draw This was the main one, but the fix isn't perfect. There's still a lot of data in Mesh that isn't being used, like state tracking to avoid duplicate calls to glUseProgram; given we only ever render ourselves, we should be free from this state. If we aren't using it, it's a waste, in particular because the memory layout remains intermixed with this state, the draw() function itself and lots of other methods. Preferably, Mesh should contain just the buffers and layout. I expect I'll need to implement a new Mesh class (struct, rather, without methods and state) rather than try and contort this class to do my bidding (a rough sketch below)
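A rough sketch of what that could look like - a hypothetical struct of my own, not actual Magnum API:

// Hypothetical plain-old-data mesh: just GPU object names and layout
// information; no draw() logic, no state tracking
struct PodMesh {
    GLuint vao;          // vertex array object, encoding the vertex layout
    GLuint vertexBuffer; // uploaded vertex data
    GLuint indexBuffer;  // uploaded index data
    GLenum primitive;    // e.g. GL_TRIANGLES
    GLsizei count;       // number of indices to draw
    GLenum indexType;    // e.g. GL_UNSIGNED_SHORT
};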

Lingering Questions

This works rather well, but there are things left I haven't been able to wrap my head around just yet.

  • How does a camera fit into this?
    • That is, do we need a custom Orientation component, e.g. CameraOrientation, or would it suffice to have a Camera "tag" on the entity representing it? And how do we make rendering relative to a camera? We need some sense of which camera is "current" or related to any particular view, in case there are multiple, such as top, bottom, front and perspective
  • How do shadows fit into this?
    • In the case of shadow maps, we need a (set of) draw call(s) to happen prior to drawing what is currently drawn. Is it simply a matter of another system, and ordering the calls to each system accordingly? Do we need to establish some form of relationship between systems - which depends on which, and for what aspect?
  • How does picking fit into this?
    • Like shadows, an ID pass needs to be rendered in parallel with colors/shadows, with its own shader (taking only the ID color into account) and independently from the rest; i.e. the output isn't relevant outside of calling glReadPixels during a picking session.
  • How does selection fit into this?
    • That is, once something has been picked, I'd like for a highlight of sorts to get drawn around the object. Do I add a transient Selected component, and filter entities by that, in a separate render system? (A sketch follows below.)
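Here's the shape I imagine that last one taking - a minimal sketch, with Selected and SelectionRenderSystem being hypothetical names, using the same EnTT idioms as the code below:

// Hypothetical: a transient "tag" component attached on pick
struct Selected {};

// A separate render system, filtering entities by the tag
static void SelectionRenderSystem() {
    for (auto entity : gRegistry.view<Selected>()) {
        auto& mesh = gRegistry.get<MeshInstance>(entity);
        // ..draw `mesh` again here, with a flat highlight/outline shader
    }
}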
/**
* @brief An ECS example with Magnum, part 2
*
* # What's changed?
*
* - **Global registry** I think it can make sense to have many, but in this case
* there was only one, and this saves us from passing the registry to each system
* - **No more Mesh::draw** Meshes are now plain-old-data, actual logic being
* implemented by a system.
*
* # What's novel?
*
* - Components are pure types, e.g. Vector3, as opposed to one wrapped in a struct
* Except when otherwise necessary, such as the `ShaderTemplate` component
* - Two-stage initialisation via "Template" and "Instance", to avoid making calls to GPU
* during instantiation of lots of data. Effectively deferring uploads to a bulk operation
* happening during "spawn" (see SpawnSystem)
* - Shaders contain references to things to draw, as opposed to the (default) other way around
* See `ShaderAssignment` component
*
* # Troubles
*
* - I wanted to `using MeshTemplate = MeshTools::MeshData3D` for a component,
* but wasn't able to. Something about `position` being empty once
* read from within a system.
* - `MeshTools::transformPointsInPlace` isn't working, unsure why.
* Compilation couldn't find its declaration.
*
* # Questions
*
* - How does a camera fit into this?
* That is, do we need a custom Orientation component, e.g. CameraOrientation
* or would it suffice to have a `Camera` "tag" on the entity representing it?
* And how do we make rendering relative to a camera? We need some sense of what
* camera is "current" or related to any particular view; in case there are multiple
* such as top, bottom, front and perspective
* - How do shadows fit into this?
* In the case of shadow maps, we need a (set of) draw call(s) to happen prior
* to drawing what is currently drawn. Is it simply a matter of another system,
* and ordering the calls to each system accordingly? Do we need to establish
* some form of relationship between systems, which is dependent on which (and
* for what aspect of it)?
* - How does picking fit into this?
* Like shadows, an ID pass needs to be rendered in parallel with colors/shadows,
* with its own shader (taking only the ID color into account) and independently
* from the rest; i.e. the output isn't relevant outside of calling glReadPixels
* during a picking session.
* - How does *selection* fit into this?
* That is, once something has been picked, I'd like for a highlight of sorts
* to get drawn around the object. Do I add a transient `Selected` component,
* and filter entities by that, in a separate render system?
*
*/
#include <Magnum/GL/DefaultFramebuffer.h>
#include <Magnum/GL/Mesh.h>
#include <Magnum/GL/Renderer.h>
#include <Magnum/MeshTools/Compile.h>
#include <Magnum/Platform/Sdl2Application.h>
#include <Magnum/Primitives/Cube.h>
#include <Magnum/Primitives/Icosphere.h>
#include <Magnum/Shaders/Phong.h>
#include <Magnum/Trade/MeshData3D.h>
#include <Magnum/Math/Quaternion.h>
#include "externals/entt.hpp"
namespace Magnum { namespace Examples {
// "There can be only one" - Highlander, 1986
static entt::registry gRegistry;
// --------------------------------------------------------------
//
// Components
//
// --------------------------------------------------------------
using Orientation = Quaternion;
using Color = Color3;
// NOTE: "using" isn't enough here, EnTT requires unique types per component
struct Position : public Vector3 { using Vector3::Vector3; };
struct Scale : public Vector3 { using Vector3::Vector3; };
/** @brief Name, useful for debugging. E.g. to print an entity's name
*/
using Identity = std::string;
/** @brief Template for the creation of a mesh
*/
struct MeshTemplate {
enum { Cube, Sphere, Capsule, Plane } type;
Vector3 extents;
};
/** @brief Compiled and uploaded mesh
*
* Including vertex and index buffer, and vertex layout information.
*
*/
using MeshInstance = GL::Mesh;
/** @brief Template for the compile and linking of a new shader program
*/
struct ShaderTemplate {
std::string type;
};
/** @brief Compiled and linked shader program
*/
using ShaderInstance = Shaders::Phong;
// Connection between drawable entities and a shader entity
using ShaderAssignment = std::vector<entt::registry::entity_type>;
// ---------------------------------------------------------
//
// Systems
//
// ---------------------------------------------------------
/** @brief Affect *all* orientations with the mouse
*
* NOTE: This should probably be more targeted; e.g. affecting only a "camera"
*
*/
static void MouseMoveSystem(Vector2 distance) {
gRegistry.view<Orientation>().each([distance](auto& ori) {
ori = (
Quaternion::rotation(Rad{ distance.y() }, Vector3(1.0f, 0, 0)) *
ori *
Quaternion::rotation(Rad{ distance.x() }, Vector3(0, 1.0f, 0))
).normalized();
});
}
/**
* @brief Shift all colors on mouse release
*
* NOTE: Like the above, this should probably be more targeted; using ECS "tags"?
*
*/
static void MouseReleaseSystem() {
gRegistry.view<Color>().each([](auto& color) {
color = Color3::fromHsv({ color.hue() + 50.0_degf, 1.0f, 1.0f });
});
}
static void AnimationSystem() {
Debug() << "Animating..";
}
static void PhysicsSystem() {
Debug() << "Simulating..";
}
/**
* @brief Upload new data to the GPU
*
* Whenever a new item spawns, it'll carry data pending an upload
* to the GPU. A spawned component may be replaced by assigning
* a new template to an entity.
*
*/
static void SpawnSystem() {
gRegistry.view<Identity, ShaderTemplate>().each([](auto entity, auto& id, auto& tmpl) {
Debug() << "Instantiating shader from template for:" << id;
gRegistry.remove<ShaderTemplate>(entity);
gRegistry.assign_or_replace<ShaderInstance>(
entity,
// Only one option, for now
tmpl.type == "phong" ? Shaders::Phong{}
: Shaders::Phong{}
);
});
gRegistry.view<Identity, MeshTemplate>().each([](auto entity, auto& id, auto& tmpl) {
Debug() << "Instantiating mesh from template for:" << id;
auto data = tmpl.type == MeshTemplate::Cube   ? Primitives::cubeSolid() :
            tmpl.type == MeshTemplate::Sphere ? Primitives::icosphereSolid(3) :
                                                Primitives::icosphereSolid(3);
// NOTE: The below isn't working
// NOTE: Cannot find `transformPointsInPlace`
// Matrix4 transformation = Matrix4::scaling(tmpl.extents);
//MeshTools::transformPointsInPlace(transformation, data.positions(0));
//MeshTools::transformVectorsInPlace(transformation, data.normals(0));
gRegistry.remove<MeshTemplate>(entity);
gRegistry.assign_or_replace<MeshInstance>(entity, MeshTools::compile(data));
});
}
/**
* @brief Facilitate new templates being added for either shaders or meshes
*
*/
static void ChangeSystem() {}
/**
* @brief Destroy entities with a `Destroyed` component
*
*/
static void CleanupSystem() {}
/**
* @brief Produce pixels by calling on the uploaded shader
*
* Meshes are drawn per-shader. That is, a shader is associated to multiple renderables
*
*/
static void RenderSystem(Vector2i viewport, Matrix4 projection) {
GL::defaultFramebuffer.clear(GL::FramebufferClear::Color | GL::FramebufferClear::Depth);
GL::defaultFramebuffer.setViewport({{}, viewport});
GL::Renderer::enable(GL::Renderer::Feature::DepthTest);
Debug() << "Drawing..";
gRegistry.view<Identity, ShaderAssignment, ShaderInstance>().each([projection](auto& id, auto& assignment, auto& shader) {
Debug() << " ..using shader:" << id;
// NOTE: Doing double-duty; calls to `shader.set*` also call on `glUseProgram`
// ..except it shouldn't have to.
glUseProgram(shader.id());
// Uniforms applicable to *all* assigned meshes
shader
.setLightPosition({7.0f, 7.0f, 2.5f})
.setLightColor(Color3{1.0f})
.setProjectionMatrix(projection);
for (auto& entity : assignment) {
// NOTE: Looping through entities from within a loop of components
// Is this a good idea? What is the alternative?
const auto& [id, pos, ori, scale, color, mesh] = gRegistry.get<
Identity, Position, Orientation, Scale, Color, MeshInstance
>(entity);
Debug() << " - " << id;
auto transform = (
Matrix4::scaling(scale) *
Matrix4::rotation(ori.angle(), ori.axis().normalized()) *
Matrix4::translation(pos)
);
shader
.setDiffuseColor(color)
.setAmbientColor(Color3::fromHsv({color.hue(), 1.0f, 0.3f}))
.setTransformationMatrix(transform)
.setNormalMatrix(transform.rotationScaling());
// NOTE: Assumes indexed draw, which is fine for this example
glBindVertexArray(mesh.id());
glDrawElements(GLenum(mesh.primitive()),
mesh.count(),
GLenum(mesh.indexType()),
reinterpret_cast<GLvoid*>(nullptr));
glBindVertexArray(0);
}
});
}
// ---------------------------------------------------------
//
// Application
//
// ---------------------------------------------------------
using namespace Magnum::Math::Literals;
class ECSExample : public Platform::Application {
public:
explicit ECSExample(const Arguments& arguments);
~ECSExample();
private:
void drawEvent() override;
void mousePressEvent(MouseEvent& event) override;
void mouseReleaseEvent(MouseEvent& event) override;
void mouseMoveEvent(MouseMoveEvent& event) override;
Matrix4 _projection;
Vector2i _previousMousePosition;
};
ECSExample::~ECSExample() {
// Let go of all references to components
//
// If we don't do this, gRegistry is destroyed *after* the application,
// which means after the OpenGL context has been freed, resulting in shaders
// being unable to clean themselves up.
gRegistry.reset();
}
ECSExample::ECSExample(const Arguments& arguments) :
Platform::Application{ arguments, Configuration{}
.setTitle("Magnum ECS Example") }
{
GL::Renderer::enable(GL::Renderer::Feature::DepthTest);
GL::Renderer::enable(GL::Renderer::Feature::FaceCulling);
_projection =
Matrix4::perspectiveProjection(
35.0_degf, Vector2{ windowSize() }.aspectRatio(), 0.01f, 100.0f) *
Matrix4::translation(Vector3::zAxis(-10.0f));
auto box = gRegistry.create();
auto sphere = gRegistry.create();
auto phong = gRegistry.create();
// Box
gRegistry.assign<Identity>(box, "box");
gRegistry.assign<Position>(box, Vector3{});
gRegistry.assign<Orientation>(box, Quaternion::rotation(5.0_degf, Vector3(0, 1.0f, 0)));
gRegistry.assign<Scale>(box, Vector3{1.0f, 1.0f, 1.0f});
gRegistry.assign<Color>(box, Color3{ 0.1f, 0.6f, 0.8f });
gRegistry.assign<MeshTemplate>(box, MeshTemplate::Cube, Vector3(2.0f, 0.5f, 2.0f));
// Sphere
gRegistry.assign<Identity>(sphere, "sphere");
gRegistry.assign<Position>(sphere, Vector3{});
gRegistry.assign<Orientation>(sphere, Quaternion::rotation(5.0_degf, Vector3(0, 1.0f, 0)));
gRegistry.assign<Scale>(sphere, Vector3{1.2f, 1.2f, 1.2f});
gRegistry.assign<Color>(sphere, Color3{ 0.9f, 0.6f, 0.2f });
gRegistry.assign<MeshTemplate>(sphere, MeshTemplate::Sphere, Vector3(1.0f, 1.0f, 1.0f));
// Phong
gRegistry.assign<Identity>(phong, "phong");
gRegistry.assign<ShaderTemplate>(phong);
// Connect vertex buffers to shader program
// Called on changes to assignment, e.g. a new torus is assigned this shader
gRegistry.assign<ShaderAssignment>(phong, std::vector<entt::registry::entity_type>{box, sphere});
}
void ECSExample::drawEvent() {
GL::defaultFramebuffer.clear(
GL::FramebufferClear::Color | GL::FramebufferClear::Depth
);
auto viewport = GL::defaultFramebuffer.viewport().size();
SpawnSystem();
AnimationSystem();
PhysicsSystem();
RenderSystem(viewport, _projection);
CleanupSystem();
swapBuffers();
}
void ECSExample::mousePressEvent(MouseEvent& event) {
if (event.button() != MouseEvent::Button::Left) return;
_previousMousePosition = event.position();
event.setAccepted();
}
void ECSExample::mouseReleaseEvent(MouseEvent& event) {
if (event.button() != MouseEvent::Button::Left) return;
// Should the system handle all mouse events, instead of individual ones?
MouseReleaseSystem();
event.setAccepted();
redraw();
}
void ECSExample::mouseMoveEvent(MouseMoveEvent& event) {
if (!(event.buttons() & MouseMoveEvent::Button::Left)) return;
const float sensitivity = 3.0f;
const Vector2 distance = (
Vector2{ event.position() - _previousMousePosition } /
Vector2{ GL::defaultFramebuffer.viewport().size() }
) * sensitivity;
// Should the system compute delta?
// If so, where does state go, i.e. _previousMousePosition?
MouseMoveSystem(distance);
_previousMousePosition = event.position();
event.setAccepted();
redraw();
}
}}
MAGNUM_APPLICATION_MAIN(Magnum::Examples::ECSExample)

This next section is notes I've made during my research. I try to make assertions about things I think I understand, and expect some of them to be incorrect; if you spot any, poke me.


Template Components

Yesterday I discovered "template" components: something a loop can pick up on as a hint to instantiate another component on the entity carrying the template.

I think there's more to gain here, such as an "edit" or "modify" component; something picked up by an EditSystem to perform some modification to an existing component.

For example.

  1. User creates a sphere; it is spawned and rendered each frame
  2. User edits the subdivision attribute of the icosphereSolid(), which attaches an AttributeEdit component to the entity.
  3. On the next run, the EditSystem applies the edit to the entity; likely by removing the MeshInstance and creating it anew.

I suppose in this case, it would make sense to keep the original MeshTemplate component around, as the one providing these attributes to begin with, and to re-generate the resulting MeshInstance.
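A minimal sketch of that flow, following the shape of the SpawnSystem above; AttributeEdit and EditSystem are hypothetical names:

// Hypothetical edit component, carrying the changed attribute
struct AttributeEdit {
    int subdivisions;
};

// Applies pending edits; the MeshTemplate stays around, providing the
// remaining attributes, while the MeshInstance is re-generated
static void EditSystem() {
    gRegistry.view<MeshTemplate, AttributeEdit>().each(
        [](auto entity, auto&, auto& edit) {
        gRegistry.remove<AttributeEdit>(entity);
        gRegistry.assign_or_replace<MeshInstance>(
            entity,
            MeshTools::compile(Primitives::icosphereSolid(edit.subdivisions)));
    });
}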

Also "template" isn't the best name, given C++ occupies that term for its language feature.

  • Motif
  • Blueprint
  • Sample
  • Cookiecutter

Render Pipeline

1h later (07:00)

I had a closer look at how bs::framework does its rendering and found something interesting.

In a nutshell, they've defined a "non-programmable" pipeline of three stages.

  • Depth
  • Rasterisation
  • Blending

Presumably this is what you perform each time, with one or more stages enabled at a time. I presume it's "non-programmable" because it's effectively implemented as a single method/function; which is OK. The interesting bit here is that Rasterisation is considered a pipeline step. I would have thought "depth" was also rasterisation, and "blending" a property of each draw call. But here it's generalised into having rasterisation involve any mesh.
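In Magnum terms I'd expect such a fixed pass to be little more than toggling renderer state around the draw calls; a sketch of my understanding, not bs::framework's actual API:

// Sketch: a fixed depth/rasterisation/blending pass
GL::Renderer::enable(GL::Renderer::Feature::DepthTest);  // 1. depth
GL::Renderer::enable(GL::Renderer::Feature::Blending);   // 3. blending
GL::Renderer::setBlendFunction(
    GL::Renderer::BlendFunction::SourceAlpha,
    GL::Renderer::BlendFunction::OneMinusSourceAlpha);

// 2. rasterisation; the draw calls themselves, for any kind of mesh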

That immediately opened up the question of "ok, but what about curves?".

I cracked open Banshee, the editor from which this framework was made, to get some idea of what the answer to this question might be.

Exhibit A

Here you can get some idea of how they've handled curves. See those minor gaps in between the pixels of the wireframe? That may of course just be aliasing.

(screenshot: banshee1)

Exhibit B

Except aha, see: even bigger gaps. Clearly these aren't curves; this is curve-like geometry. Quads.

(screenshot: banshee2)

So this makes sense; if your pipeline doesn't allow meshes to be drawn as curves, that likely also means there's no room for an alternate index buffer so as to draw actual quads. That is evident here, given that the wireframe of the box - which should really be quads - is drawn as triangles. Likely achieved via glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); rather than by actually drawing a VAO with a differently bound index buffer.

With this in mind, I wouldn't dismiss their idea entirely, but it likely isn't suited for my (or even their) purposes as-is. But I'm getting a clearer idea of how this can work. There appears to be a need for an entirely separate render pipeline, just for curves.


Curve Pipeline

If this pipeline is separate, that opens the question: what about objects with both curves and meshes?

(image: an object with both curve and mesh geometry)

My guess is, this is immediate mode. In which case, it's as easy as..

setColor(...);
drawLine(...);
drawTriangleStrip(...);
// Done

With shaders, this becomes significantly more complex.

loadLineShader();
compileLineShader();
checkErrors();
useShader();
createVertexBuffer();
createIndexBuffer();
createVertexBufferObject();
bindVertexBuffer();
setAttributes();
bindIndexBuffer();
bindVertexArrayObject();
releaseVertexBuffer();
releaseIndexBuffer();
releaseVertexBufferObject();
drawLines();

// Repeat for solid surface drawing

Not to mention the actual shaders, which are files to load off of disk, followed by the actual vertices and their indexes. Madness.

So, ok. Would it then make sense to have the ECS handle these separately?

void LineRenderSystem();
void SolidRenderSystem();

In which case, it doesn't matter whether an "object" has got both a line and a solid; they would be two separate components anyway, handled by separate systems. Shaders would still be an SoA, containing references to the entities to render; both the line and solid shader referencing the same entity - in this case the translate manipulator - each handling a different component.

// Sketch: `Lines` is a hypothetical component of raw line vertices
auto translateManip = registry.create();
registry.assign<Lines>(translateManip, {

    // Cross
    0.0f, -1.0f, 0.0f,
    0.0f,  1.0f, 0.0f,

    -1.0f, 0.0f, 0.0f,
     1.0f, 0.0f, 0.0f
});

That kind of breaks the encapsulation of geometry though.

The way the Game Engine Architecture book describes this pipeline is interesting. They effectively recommend an immediate-mode API for drawing "debug" data.

class DebugDrawManager 
{
public:

    // Adds a line segment to the debug drawing queue.
    void AddLine(const Point& fromPosition,
                 const Point& toPosition,
                 Color color,
                 float lineWidth = 1.0f,
                 float duration = 0.0f,
                 bool depthEnabled = true);

    // Adds an axis-aligned cross (3 lines converging at
    // a point) to the debug drawing queue.
    void AddCross(const Point& position,
                  Color color,
                  float size,
                  float duration = 0.0f,
                  bool depthEnabled = true);

    ...

    // Adds a text string to the debug drawing queue.
    void AddString(const Point& pos,
                   const char* text,
                   Color color,
                   float duration = 0.0f,
                   bool depthEnabled = true);
};

Game Engine Architecture, p.423

Points of interest:

  1. The API suggests an immediate-mode style of invocation
  2. The API is limited, complex shapes like double-crosses or bespoke line strips would need to be made by combining many simpler calls
  3. There's a global "manager" for what they refer to as "debug" drawing; this implies the only drawing made with lines and text is for debugging (i.e. no element visible to the player consists of curves)?
  4. There doesn't appear to be any need for custom shaders; a limited set of "uniforms" are handed to the method itself, like color and lineWidth.
  5. There's mention of a "drawing queue", which suggests these aren't immediate-mode under the hood, but only appear as such (see the sketch after this list)
  6. Text is drawn like this as well.
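Regarding point 5, here's a minimal sketch of how an immediate-mode-looking API can be backed by a queue; the names are mine, not the book's:

#include <algorithm>
#include <vector>

// Hypothetical queue entry for a single debug line
struct DebugLine {
    Vector3 from, to;
    Color3 color;
    float duration;
};

class DebugDrawQueue {
public:
    // Looks immediate-mode to the caller, but only records the request
    void AddLine(const Vector3& from, const Vector3& to,
                 Color3 color, float duration = 0.0f) {
        _lines.push_back({from, to, color, duration});
    }

    // Called once per frame by the renderer; draws, then expires entries
    void Flush(float dt) {
        for (auto& line : _lines) {
            // ..submit `line` to a line shader here
            line.duration -= dt;
        }
        _lines.erase(std::remove_if(_lines.begin(), _lines.end(),
                         [](const DebugLine& l) { return l.duration <= 0.0f; }),
                     _lines.end());
    }

private:
    std::vector<DebugLine> _lines;
};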

Many Pipelines

So far, I've learnt that there isn't just one pipeline to pass data through, but many. Each one requiring dedicated setup and teardown, some forming a kind of hierarchy; like how a pass references multiple GPU programs, each GPU program referencing a unique set of surfaces and so forth.

Pipeline of Pipelines

From the creation of new data to final pixel on screen.

 _____    _____________    __________    ________    __________          |\\
|     |  |             |  |          |  |        |  |          |         | ||
| ACP |--> Framebuffer |--> Viewport |--> Passes |--> Programs |-------->| ||
|_____|  |_____________|  |__________|  |________|  |__________|         | ||
        _____|__  __|_____                       _____|__  __|_____       \||
       |        ||        |                     |        ||        |
       | Buffer || Buffer |                     | Shader || Shader |    screen
       |________||________|                     |________||________|

Pipeline 1 - Asset Conditioning Pipeline

At the highest level, the origin of data; human labour. This is where vertices and shaders are authored, including any attributes like normals and colors, to be later loaded by an application.

When to use: Starting point for anything to get drawn on screen
Optional: No. Data has to come from somewhere, even if that data is defined in code alongside the application itself


Game Engine Architecture, p.491


Pipeline 2 - Framebuffer

A collection of render buffers, or render "targets" (see below). A target is anywhere pixels are written by a fragment shader (?), most prominently the color target.

When to use: On drawing anything to screen; on performing offscreen rendering
Optional: Yes. Provided for you via the "Default Framebuffer"

OpenGL has 2 types of framebuffers.

  • Default Framebuffer
  • User-defined Framebuffer, a.k.a. Framebuffer Object

The "Default Framebuffer" is implicitly created alongside the context, which implies a call to set the "current" framebuffer is made like this.

Also sometimes referred to as a "render target"


Pipeline 3 - Render Buffer

A Framebuffer contains one or more Render Buffers.

It's an individual array of pixels, such as color and depth, also referred to as an "attachment".

When to use: On choosing whether to draw color, depth or stencil
Optional: Yes, provided as GL_COLOR_ATTACHMENT0 via the "Default Framebuffer"

Clearing a render buffer is done via a combination of..

  • Stencil
  • Depth
  • Color

Buffers in FBOs are also called "attachment points"; they're the locations [in a framebuffer] where images can be attached. - OpenGL Wiki

glDrawBuffer(GL_COLOR_ATTACHMENT0);

// Or
GLenum bufs[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, bufs);
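For a user-defined framebuffer, attaching e.g. a color buffer looks roughly like this in raw GL, assuming colorTexture is an existing 2D texture:

// Sketch: a Framebuffer Object with a single color attachment
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTexture, 0);

// The framebuffer is only usable once "complete"
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);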

Pipeline 4 - Buffering

Rendering is typically "double buffered", meaning actual drawing happens to an offscreen array of pixels, which is copied to an onscreen array on completion. These are called the back and front buffers respectively.

When to use: For VSync
Optional: Yes, drawing directly to the front (visible) buffer is possible, but uncommon

Pipeline 5 - Viewports

Viewports represent subsections of a render target. For example, if the render target is a visible surface in your window, a viewport is a subdivision of that surface.

When to use: When rendering into a subset of a visible image, or into a subset of a Framebuffer Object
Optional: Yes, provided by the "Default Framebuffer"

(0, 0)
      __________________________
     |  (200, 0) __________     |
     |          |          |    |
     |          | Viewport |    |
     |          |__________|    |
     |                          |
     |                  (300, 100)
     | Surface                  |
     |__________________________|
                                 (400, 300)

Viewports are mostly (only?) used as a means of limiting a draw call to a subset of the full render target, and can be used to e.g. parallelise rendering of a full image by rendering multiple viewports in parallel, the target subdivided into a grid.
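E.g. the viewport from the diagram above, keeping in mind that GL measures (x, y) from the bottom-left corner rather than the top-left:

// The 100x100 viewport from the diagram; on a 300-pixel-tall target,
// the top-left (200, 0) becomes the bottom-left (200, 200)
glViewport(200, 200, 100, 100);  // x, y, width, height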


Pipeline 6 - Passes

Drawing is typically additive.

When to use: When drawing more than just a triangle
Optional: Yes, except for anything interesting

That is, at the start of a new image, the current image (in whatever state it was in, such as uninitialised memory or a previous image) is cleared. From there, each draw call (typically) adds to the resulting image, until the image is complete. The simplest way to think about this is the Painter's Algorithm, which is akin to how traditional painters approach a painting: from background to foreground, each stroke overlapping the previous.

There are a few reasons for this.

  1. Conceptually, it makes sense to think of drawing the background separately from the foreground; maybe two separate artists are working on these independently
  2. A draw call is limited in how vertices are interpreted by the rasterization process; for example, do two vertices represent GL_LINES, or do three vertices represent GL_TRIANGLES?
  3. Some draw calls depend on data produced by a prior draw call, such as a shadow map; generated from an entirely different point of view, using different shaders and a subset of geometry.
  4. The program actually drawing pixels is limited in what it runs and what data is bound to the global state at that time. For example, there may only ever be 8 (minimum) attachments to any given render buffer, so if you need more, you need a draw call per 8 attachments.
  5. Because of the aforementioned limitation, a rendering technique known as "Deferred Rendering" was developed, in which color is drawn separately from light; light being dependent on the color (along with position, normal and depth).
  6. Some draw calls do not affect the visual output at all, but are used for other purposes, such as picking.
                              __________
                             |          |
                          .--> Depth    |----.               
                          |  |__________|    |               
                          |   __________     |               
                          |  |          |    |                        o picking
                          |--> Diffuse  |-.  |     _______                .
                          |  |__________|  | |    |       |               .
 ____________             |   __________   | `---->       |   _______    ____
|            |  deferred  |  |          |  `------>       |  |       |  |    |
| Shadow Map |------------|--> Position |---------> Light |--> Debug |--> ID |
|____________|            |  |__________|  .------>       |  |_______|  |____|
                          |   __________   |      |_______|      .    
                          |  |          |  |                     .
                          `--> Normal   |--`                     .
                             |__________|                 o visualisation
                                                          o manipulators
                                                          o wireframe/selection
                       
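In ECS terms, I imagine pass ordering reduces to plain call ordering between systems; a sketch, with ShadowMapSystem being a hypothetical addition alongside the RenderSystem above:

// Sketch: each pass is a system; ordering is the order of the calls
static void ShadowMapSystem() {
    // ..bind a depth-only framebuffer, draw the scene from the light's
    // point of view, unbind; RenderSystem later samples the result
}

// ..inside drawEvent():
//     ShadowMapSystem();                   // pass 1: depth, from the light
//     RenderSystem(viewport, _projection); // pass 2: color, using pass 1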

Pipeline 7 - GPU Programs

Finally, the lowest of low-level graphics programming: the program running on the GPU itself. These are invoked once per draw call, e.g. glDrawElements(), and can be thought of as a function call, whereby the arguments are its inputs and the return value is its output.

Pixels render(Buffer vertices, Buffer indexes, Buffer uniforms);
When to use: Whenever you want to invoke the GPU to do anything
Optional: No, mostly

There are 5 stages involved, not including compute, most of which are optional and unused.

 ________    ______    ________    __________    __________         |\\
|        |  |      |  |        |  |          |  |          |        | ||
| Vertex |--> Hull |--> Domain |--> Geometry |--> Fragment |------->| ||
|________|  |______|  |________|  |__________|  |__________|        | ||
                                                                     \||
            |           optional             |                           screen
            |<------------------------------>|                              
alanjfs commented Oct 7, 2019

Thanks guys!

Did you #include <Magnum/MeshTools/Transform.h>? :)

Yes. :( I may be using an older version, 2019.01 from VcPkg.

This data structure is scheduled for a rewrite.

Cool, good to know.

The external SDL dependency is the main pain point, the rest is fiddling with CMake subprojects. I'm working on improving that (and incorporating all your feedback) right now.

I get that distributing C++ libraries is tricky in general; coming from a Python background where any library can be made accessible as pip install my_library, anything involving compilers and cloning git repos is a little overwhelming. I mentioned this somewhere before, but so far the simplest mechanism I've encountered (in my ~2 months of C++, so grains of salt and all that) has been bs::framework, and their downloadable zip file per platform. Even VcPkg didn't cut it, because it doesn't distribute binaries; it builds them "live", which still leaves a lot of requirements on the part of the user. And I really didn't like its "magic" hooks into things like MSVC, where it somehow finds includes and libraries without me specifying them. That makes it tricky to understand what's going on.

It seems to me that whatever platform you are on, a new user is looking for:

  1. What to include
  2. What to link

In an ideal world, I could download a folder with an include/ and lib/ in it and call it a day. And worry about optimising that later, once I'm hooked and committed.

In Part I and here as well I see you're practically on the way to ditch Magnum::GL and reimplementing it fully yourself. Well, nothing wrong with that I'd say, but there's a lot of work to be done :)

I'd rather not though.

proved to be very slow on particular platforms ... and sooner or later you'll run into "my app is drawing just black on WebGL but not on desktop" and other things exactly because of this state tracker misuse.

Ah.. yes, that does make sense.. I hadn't considered other platforms.

The above is not true, it'll do it just once when first needed ... unless you side-step the state tracker by manually calling glUseProgram() (which is what you're doing).

Ah, yes, sorry - I meant that in this particular loop the call was superfluous, not in Magnum in general.

But -- for a proper DOD-like drawing, have a look at a MeshView, which is really just "a struct" -- ideally there should be just a small number of Mesh instances that define the vertex layout and buffer assignment for more than one object and then multiple views that render portions of these

That does sound promising, will investigate this one.

Ideally, all shaders should be compiled upfront ... and mesh data uploaded to the GPU and discarded CPU-side (because otherwise you'll be using twice as much memory).

Good point! But that's what's happening here as well, I think, with the MeshTemplate (and ShaderTemplate) being discarded once converted to their corresponding instances. I haven't double-checked that they actually get cleaned up, or whether there's some reference counting going on to do that automatically once they no longer have an owner.

mosra commented Oct 11, 2019

(the usual apology for taking centuries to reply -- sorry)

I may be using an older version, 2019.01 from VcPkg.

transformPointsInPlace() has been there for quite a few years already, you shouldn't have any issues with that. Ping me on Gitter if you're still struggling with this part.

I get that distributing C++ libraries is tricky in general

In an ideal world, I could download a folder with an include/ and lib/ in it and call it a day. And worry about optimising that later, once I'm hooked and committed.

That's planned for Windows (macOS and Linux fortunately have easy-to-use packaging systems already), but so far I haven't found time to set up a CI to produce nightly and release builds. Might try the new GitHub Actions for that. I believed strongly in vcpkg at first, but damn, it just does way too much and is so brittle.

These days I'm working on making the CMake subproject setup more convenient to use; while it ties you to CMake, this could be the single reliable cross-platform way to do things (even though it basically requires you to build everything). The last blocker is SDL; I'll see how easy or hard it is to use as a CMake subproject, maybe switching to GlfwApplication instead if that makes the setup simpler.

alanjfs commented Oct 11, 2019

Hey, no worries. Thanks for getting back. :)

Ping me on Gitter if you're still struggling with this part.

It's still happening, I'll see if I can narrow it down next time.

Might try the new GitHub Actions for that. I believed strongly in vcpkg at first, but damn, it just does way too much and is so brittle.

I also really liked VcPkg initially, from the user point of view. But it's been too magical so far, so I typically copy the results into my project and set the paths explicitly, like with Magnum. That's worked fine so far, with the only exception being that one extra include path we chatted about, the khrplatform.h.

Have you considered any of the other hip distribution mechanisms, like Conan? I haven't tried those yet, but I think there are some that do both compile-on-use and binary distribution, like PyPI does.

mosra commented Oct 11, 2019

Conan -- see mosra/magnum#304; not sure why exactly, but somehow the restrictions made by Conan and the requirements from Magnum don't play well together. There's been some progress recently, but not much.

The khrplatform thing is getting fixed right now (well, over the last 8 hours of trial-and-error on the CIs), thanks for pushing me to do that.
