Tammo 'kb' Hinrichs (kebby)
kebby / ZLibDecoder.cs
Created October 28, 2013 10:31
Inflate (ZLib decoder) implementation in C#, taken from Sean Barrett's stb_image: http://nothings.org/stb_image.c. You might wonder why I did this when there has always been DeflateStream, which works well enough: I had to learn that DeflateStream isn't always available. One example is the Unity3D engine, where you can't use it in web player projects…
using System;
using System.Collections.Concurrent; // for ConcurrentDictionary (used below)
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
namespace Utility
{
    /// <summary>
    /// public domain zlib decode
    /// original: v0.2 Sean Barrett 2006-11-18
    /// </summary>
    // stuff you should know:

    // Bookkeeping for outgoing asynchronous calls, keyed by a per-call id:
    abstract class OutgoingCallBase
    {
        // ...
    }

    private readonly ConcurrentDictionary<int, OutgoingCallBase> OutgoingCalls =
        new ConcurrentDictionary<int, OutgoingCallBase>();
    private int CallId; // incremented to hand out a unique id per outgoing call
// Media Foundation h.264 encode test, by Tammo "kb" Hinrichs, 2016-8-31
// This source code file is in the public domain.
// This really is a minimal test, and it only does the raw encoding, no
// color space conversion or muxing whatsoever. So to make it usable you'll
// probably want to put the main loop into its own thread and make it poll
// the rendering thread for new image data instead of the RenderImage() call,
// and you'll probably also want to construct an MF graph for color space
// conversion/muxing/audio, or call the respective MFTs manually. And get rid
// of the memory buffers and give the encoder your backbuffer surfaces instead.
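
As a rough sketch of the threading advice in the comment above (this is my illustration, not part of the gist): a frame queue that the render thread pushes into and the encode loop polls. Frame, FrameQueue and EncodeLoop are placeholder names, and the actual Media Foundation calls are left out.

#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <utility>
#include <vector>

// Placeholder frame type; in practice this would be your backbuffer surface.
struct Frame
{
    std::vector<uint8_t> pixels;
    uint64_t timestamp = 0;
};

// Producer/consumer queue: the render thread pushes, the encode thread polls.
class FrameQueue
{
public:
    void Push(Frame f)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push_back(std::move(f));
        }
        cv_.notify_one();
    }

    // Blocks until a frame is available or Stop() was called;
    // returns false when shutting down and the queue is drained.
    bool Pop(Frame& out)
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return stopped_ || !q_.empty(); });
        if (q_.empty())
            return false;
        out = std::move(q_.front());
        q_.pop_front();
        return true;
    }

    void Stop()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            stopped_ = true;
        }
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<Frame> q_;
    bool stopped_ = false;
};

// The encode loop, running on its own thread as the comment suggests.
void EncodeLoop(FrameQueue& queue)
{
    Frame frame;
    while (queue.Pop(frame))
    {
        // Here you would wrap frame.pixels in an IMFMediaBuffer/IMFSample
        // and feed it to the encoder instead of calling RenderImage().
    }
}

The render thread then calls Push() once per frame, and Stop() on shutdown lets the encode thread drain the queue and exit cleanly.
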
#version 410 core
uniform float fGlobalTime; // in seconds
uniform vec2 v2Resolution; // viewport resolution (in pixels)
uniform sampler1D texFFT; // towards 0.0 is bass / lower freq, towards 1.0 is higher / treble freq
uniform sampler1D texFFTSmoothed; // this one has longer falloff and less harsh transients
uniform sampler1D texFFTIntegrated; // this is continually increasing
float fMidiKnob;
float time = fGlobalTime;
layout(location = 0) out vec4 out_color;
void main(void) // minimal main() so the snippet compiles: smoothed FFT as bars
{
	vec2 uv = gl_FragCoord.xy / v2Resolution;
	out_color = vec4(vec3(step(uv.y, texture(texFFTSmoothed, uv.x).r)), 1.0);
}
diff --git components/viz/host/host_display_client.cc components/viz/host/host_display_client.cc
index 3b00759e513d..90fe332d59f5 100644
--- components/viz/host/host_display_client.cc
+++ components/viz/host/host_display_client.cc
@@ -45,9 +45,14 @@ void HostDisplayClient::OnDisplayReceivedCALayerParams(
 }
 #endif
+void HostDisplayClient::UseProxyOutputDevice(
+    UseProxyOutputDeviceCallback callback) {
kebby / properties.hpp
Last active February 19, 2022 21:20
simple macro based system for adding property metadata to struct fields
// We want metadata for struct fields so we can have things like automatic
// serialization, validation and UI creation.
#include <climits> // INT_MAX / INT_MIN
#include <cfloat>  // FLT_MAX / FLT_MIN

// String and Vec2 are the project's own types.
struct PropertyDef { const char* name; };
struct Property_int : PropertyDef { int& value; const int deflt = 0, min = INT_MAX, max = INT_MIN; };
struct Property_float : PropertyDef { float& value; const float deflt = 0, min = FLT_MAX, max = FLT_MIN, step = 0; };
struct Property_bool : PropertyDef { bool& value; const bool deflt = false; };
struct Property_String : PropertyDef { String& value; String deflt = ""; };
struct Property_Vec2 : PropertyDef { Vec2& value; const Vec2 deflt = Vec2(0), min = Vec2(FLT_MAX), max = Vec2(FLT_MIN); };
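
The preview cuts off before the actual macro layer, so purely as a sketch of how these definitions might be consumed (my assumption, not the gist's code): a hypothetical PROPERTIES()-style expansion into a visitor function, using only the int/float properties above and C++17 aggregate initialization.

#include <cstdio>

struct Settings
{
    int width = 1280;
    float gamma = 2.2f;
};

// What a hypothetical PROPERTIES() macro could expand to: a function that
// hands each field's metadata to a visitor, so serialization, validation
// and UI code can all be written once, generically.
template <typename Visitor>
void VisitProperties(Settings& s, Visitor&& visit)
{
    visit(Property_int{{"width"}, s.width, 1280, 1, 8192});
    visit(Property_float{{"gamma"}, s.gamma, 2.2f, 0.1f, 4.0f, 0.01f});
}

// Example consumer: dump every property with its default and range.
struct Printer
{
    void operator()(const Property_int& p) const
    {
        std::printf("%s = %d (default %d, range %d..%d)\n",
                    p.name, p.value, p.deflt, p.min, p.max);
    }
    void operator()(const Property_float& p) const
    {
        std::printf("%s = %g (default %g, range %g..%g)\n",
                    p.name, p.value, p.deflt, p.min, p.max);
    }
};

int main()
{
    Settings s;
    VisitProperties(s, Printer{});
}

A serializer or UI builder would plug in where Printer does; the struct itself stays a plain aggregate.
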

How to properly Deadline Three-Dee compo

The anaglyph 3D compo has been a staple of Deadline for years now, and yet often enough there are entries where the 3D effect just doesn't work at the party: what looked good on the creator's screen turns into confusing, doubled images on the big screen, or worse, people in the audience get headaches and have to take off their 3D glasses while the entry is running.

Most of this comes down to pretty simple physics: watching a stereoscopic image on a small screen is a very different thing from watching the same image on a big projection screen.
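
A quick back-of-the-envelope example (the numbers are illustrative): your eyes are roughly 6.5 cm apart, and that is the hard upper limit for how far apart the left and right images of a distant object may be on the screen, because your eyes can converge inwards but not diverge outwards. A parallax of 1.5 cm on a 50 cm wide monitor is a harmless 3% of the image width, but on an 8 m wide projection screen that same 3% becomes 24 cm, almost four times the eye distance.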

How do we even see 3D?

Basically, the way the human visual system focuses at different distances is pretty simple. First, the lenses in your eyes set their focus to the distance of the object you're looking at (accommodation), and second, both of your eyes turn towards that object to put it into the center of both visual fields (vergence).