@velipso
Last active September 25, 2023 21:17
SDL2 game loop forcing a certain frame rate
//////////////////////////////////////////////////
//
// EDIT: Warning, this doesn't seem to work well.
//
//////////////////////////////////////////////////
// public domain
// Let's talk about game loops - it looks a bit hairy, but there is a method to this madness.
//
// Our general strategy is to try to enforce 60 thinks per second and as many frames per second
// as possible (up to 60).
//
// We do this by using a sliding window of 60 timestamps. These timestamps record when the
// previous 60 thinks happened.
//
// This means that when we record a new think, we can see how long the previous 60
// thinks took. Note that this duration isn't stored in seconds -- we are looking at raw
// performance-counter ticks.
//
// Therefore, we want our 60 thinks to take `sec` ticks in total (the value returned by
// SDL_GetPerformanceFrequency, i.e., one second's worth), and we can calculate how far
// off we are.
//
// If we are too slow, then we skip some frames.
//
// If we are too fast, then we use SDL_Delay to slow down.
//
// The next complication is: how much do we SDL_Delay by? We are at the mercy of the operating
// system, and SDL_Delay makes no guarantees about accuracy.
//
// Therefore, we sample SDL_Delay's *actual* delay. We time how long SDL_Delay takes for the
// requested delay of 0ms to 7ms.
//
// Once we have a baseline measurement, we can calculate which delay to use to burn a
// certain number of counter ticks. Each time we delay, we measure it again so that we can
// gradually update our estimate for that particular delay amount.
//
// As a last resort, we busy wait until the right amount has elapsed.
#define TGT_FPS 60
uint64_t sec = SDL_GetPerformanceFrequency();
double secinv = 1.0 / sec;
uint64_t now, delta, now2, delta2;
int d;
// time the actual resolution of SDL_Delay(0..7)
uint64_t delay_guess[8] = {0};
for (d = 0; d < 8; d++){
    now = SDL_GetPerformanceCounter();
    SDL_Delay(d);
    delay_guess[d] = SDL_GetPerformanceCounter() - now;
}
uint64_t timestamp[TGT_FPS] = {0}; // sliding timestamp window
int ts = 0;
int skip_frame = 0;
int think_per_sec = 0;
int frame_per_sec = 0;
uint64_t one_frame = (uint64_t)(sec / (double)TGT_FPS);
uint64_t next_sec = now + sec; // note: `now` still holds the counter from the timing loop above
while (true){
    // think
    event_pump(); // pump all the SDL events
    game_think(); // process input and update game state
    think_per_sec++;
    // render
    if (skip_frame > 0)
        skip_frame--;
    if (skip_frame == 0){
        game_frame(); // render a frame
        SDL_GL_SwapWindow(win);
        frame_per_sec++;
    }
    // enforce TGT_FPS
    now = SDL_GetPerformanceCounter();
    delta = now - timestamp[ts];
    if (delta > sec){
        // too slow, so try to skip some frames if we can
        // if this isn't our first loop, and we aren't skipping frames, then we should skip some
        if (timestamp[ts] > 0 && skip_frame == 0){
            skip_frame = (int)((delta - sec) / one_frame) + 1;
            if (skip_frame > 4)
                skip_frame = 4;
        }
    }
    else{
        // we were too fast, so delay a little bit to slow down
        delta = sec - delta; // delta now stores how much time we have to burn
        delay_more:
        if (delta >= delay_guess[0]){
            // unrolled binary search for the best delay length
            d = delta >= delay_guess[4] ?
                (delta >= delay_guess[6] ?
                    (delta >= delay_guess[7] ? 7 : 6) :
                    (delta >= delay_guess[5] ? 5 : 4)) :
                (delta >= delay_guess[2] ?
                    (delta >= delay_guess[3] ? 3 : 2) :
                    (delta >= delay_guess[1] ? 1 : 0));
            // perform the delay
            SDL_Delay(d);
            // measure the actual change
            now2 = SDL_GetPerformanceCounter();
            delta2 = now2 - now;
            now = now2;
            // update our guess by inching our way towards the new measurement:
            // new_guess = (old_guess * 15 + delta2) / 16
            delay_guess[d] = ((delay_guess[d] << 4) - delay_guess[d] + delta2) >> 4;
            // subtract the actual change from delta
            if (delta2 >= delta)
                delta = 0;
            else{
                delta -= delta2;
                goto delay_more;
            }
        }
        if (delta > 0){
            // busy wait the rest
            now2 = now + delta;
            while (now2 > now)
                now = SDL_GetPerformanceCounter();
        }
    }
    // record this moment in the sliding window
    timestamp[ts] = now;
    ts = (ts + 1) % TGT_FPS;
    if (now >= next_sec){
        printf("Thinks per sec: %2d  Frames per sec: %2d\n", think_per_sec, frame_per_sec);
        think_per_sec = frame_per_sec = 0;
        next_sec = now + sec;
    }
}
@hunterloftis

This might be totally off base / not what you're trying to do; I apologize if it's irrelevant.
But this JavaScript-y pseudocode is how I make deterministic games/sims that render smoothly:

const TICK = 4        // fixed interval between "thinks"
const FRAME = 16      // minimum time between renders

let time = 0
let last = performance.now()
let frame = 0

while (true) {
  const now = performance.now()
  time += now - last
  last = now

  // Ensure that the simulation renders in perfect sync with real time
  // and is perfectly deterministic
  while (time >= TICK) {
    event_pump()
    game_think()
    time -= TICK
  }

  // Render a new frame at 60 fps
  // Without worrying about exactness since the thinking is separated
  if (now >= frame) {
    game_frame()
    frame = now + FRAME
  }
}

@velipso (Author) commented Jun 24, 2017

Cool, I think I see what you're doing. It's not exactly what I'm looking for, but it does give me some ideas, so thanks for sharing.

For what I'm doing, the value of TICK and FRAME would be the same -- I want the loop to think at 60 times per second, and (ideally) render at 60 times per second. But if there isn't enough time to do both, I want to start dropping frames to hopefully catch up.

The only other issue I see is that the loop will busy wait until enough time has accumulated, but the same SDL_Delay strategy in the original code could be grafted into it.

One thing I really like is that your code instantly adjusts to fluctuations in performance. I'm not sure which is better -- in the original code, the timestamp window means that if there is one terribly slow frame, it will remember that bad frame for an entire second and try to catch up to where it should be. Is that better? Or should it just quickly forget the terrible frame and move on? I'm not sure. But instant adjustment is a lot easier to understand and cleaner.

Using some of your ideas and getting rid of the timestamp window gave me this basic idea:

var RATE = performance.freq() / 60;
var next_think = performance.get();
while (true){
    var now = performance.get();
    if (now >= next_think){
        var i = 0;
        while (true){
            event_pump();
            game_think();
            next_think += RATE;
            now = performance.get(); // EDIT: added this line
            if (next_think > now)
                break;
            i++;
            if (i >= 4){ // at most 4 thinks before forcing frame
                // we are dropping time in order to catch up, so simulation will run slower than clock time
                next_think = now + RATE; // EDIT: used `now` since it is set to performance.get()
                break;
            }
        }
        game_frame();
    }
    else{
        // we have time to kill, so delay using whatever clever method
        delay(next_think - now);
    }
}

I will likely code a bunch of different game loops and try them out when I have a simulation that I can play with. This has really provoked thought, thanks again.
