@dwilliamson
Created December 22, 2021 21:35
Prince of Persia style Character Controller with no Level Markup

Documenting a tiny fraction of the Character Physics/Locomotion/Animation of Advent Shadow for PSP (2004).

The first thing I will say before I start: if you can get hold of someone who can both code and animate, put them in the guts of the implementation and you'll get something that surpasses anything a paired programmer/animator can achieve. Make them responsible for implementing rough first-pass animations and for writing the code that drives the animation engine. They might be responsible for final animations here and there; it really depends on your setup. As far as the code goes, though, keep them purely on the gameplay side and away from the core technology - you don't want to over-burden them. If you have to use a separate programmer and animator, both have to be good. This is not easy stuff and both need to be artistic - a programmer who treats a bunch of moves as a task list to be ticked off whenever something is in the game is not good enough, and the same goes for an animator. You will never get any of this right the first time and you will have to throw away a lot of work to get a final result that plays well. Fragile egos will not help here.

My last game was exactly what you described - a PoP wannabe that never was. I decided to go the more technical route of dynamically evaluating the collision mesh around the character to figure out what could be classed as interactive parts of a level. This was because we started level construction very simply: by creating low-poly collision meshes to outline where the player would go and how the game would feel. Artists could knock up functional levels very quickly for prototyping. This base mesh would then be refined into the final collision mesh and used as the base to build the visual mesh around.

I'd probably do this a bit differently next time around: use the prototyping phase to design a set of collision primitives that can be assembled in a purpose-built level editor - making it easy for level designers to build a consistent experience for all levels in the game.

All locomotion was driven by a simple Euler-integrated physics engine. The time-step was constant while the framerate was variable - any other way just doesn't make sense (and I'm continuously surprised by the number of people who like to write variable time-step engines). When the character jumped she was given an impulse and helped along with a continuous force. We went over and over various values looking for the ideal jump length and height and fixed it with a simple test level (the trajectory of the character was somewhat controllable mid-air, similar to Sonic games). From that point on, all levels used those distances to present a level that could be played well given the player's experience. (15 levels in, near the end of the game, I have horror stories of "senior management" trying to change the jumping logic altogether - but that's for another day).
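
The fixed-step/variable-framerate split described above is usually implemented with an accumulator loop. This is a minimal sketch of that pattern, not the game's actual code - the names, the 60 Hz step and the Euler integration details are illustrative assumptions.

```python
# Fixed physics time-step with variable render framerate, via the
# classic accumulator pattern. All names/values are illustrative.
DT = 1.0 / 60.0  # constant physics step, independent of framerate

class Character:
    def __init__(self):
        self.pos = 0.0
        self.vel = 0.0

def physics_step(ch, gravity=-9.8):
    # Simple explicit Euler integration at a constant dt
    ch.vel += gravity * DT
    ch.pos += ch.vel * DT

def advance(ch, frame_time, accumulator):
    # frame_time varies per rendered frame; physics always steps by DT
    accumulator += frame_time
    while accumulator >= DT:
        physics_step(ch)
        accumulator -= DT
    return accumulator  # leftover time carried into the next frame
```

Because the step is constant, jump heights and distances are reproducible regardless of framerate, which is what lets the levels be tuned against fixed jump metrics.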

When falling through the air and looking for ledges to cling on to, I used a parallelogram derived from sweeping a single line segment that represented the "hook point" of the player (above the head and half a metre in front) - you need something continuous like this or your physics engine will miss collisions.
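
Concretely, the swept shape can be built from the hook point's position on the previous and current physics steps. This sketch is an assumption about the construction - the offsets are taken from the text, but the vector helpers, the exact segment length and the corner ordering are illustrative.

```python
# Illustrative: sweep the "hook point" segment across one physics step to
# form a parallelogram, so a fast fall can't tunnel past a ledge.

def hook_point(pos, forward, up, ahead=0.5, above=1.8):
    # pos, forward, up are 3-tuples; half a metre in front, above the head
    # (the 1.8 m head height is an assumption)
    return tuple(p + ahead * f + above * u for p, f, u in zip(pos, forward, up))

def swept_parallelogram(prev_pos, curr_pos, forward, up):
    # Corners of the quad swept by the hook segment between two frames.
    # The segment here runs from the hook point to a point half a metre
    # further forward (an assumption for illustration).
    a0 = hook_point(prev_pos, forward, up)
    a1 = hook_point(prev_pos, forward, up, ahead=1.0)
    b0 = hook_point(curr_pos, forward, up)
    b1 = hook_point(curr_pos, forward, up, ahead=1.0)
    return (a0, a1, b1, b0)  # test this quad against candidate ledge edges
```

The parallelogram is then intersected with ledge geometry instead of a point or ray, giving a continuous test over the whole step.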

"Ledge-walking," as we called it, was where the character was hanging onto something like a cliff edge and using her hands to move along it. It took a few days to figure out a water-tight algorithm for this but I managed to get something that worked without any extra data in a level other than the mesh connectivity (e.g. edge A shares vertex B with edges C, D, E. It is also connected via faces F and G). With that the character could walk on any ledge in the game.

Wall running was physics-based. The character would hit a wall and instantly the physics engine was given forces that would propel her up the wall or along it, together with a continuous wall-perpendicular force. This worked really well, allowing the artists to design curved sections of wall that the character could run along.
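
The force setup can be sketched as two components: one pressing the character into the wall (so contact is maintained around curves) and one carrying the run. The magnitudes and helper names here are assumptions for illustration.

```python
# Rough sketch of wall-run forces: a continuous force against the wall
# normal keeps contact on curved walls; a force along `up` carries the
# run. Strengths are illustrative, not tuned values from the game.

def wall_run_force(wall_normal, up=(0.0, 1.0, 0.0),
                   stick_strength=30.0, climb_strength=50.0):
    # wall_normal points out of the wall; push against it to stay stuck
    stick = tuple(-stick_strength * n for n in wall_normal)
    climb = tuple(climb_strength * u for u in up)
    return tuple(s + c for s, c in zip(stick, climb))
```

Because the "stick" force is always perpendicular to the current contact, curved walls work for free - the physics simply follows the surface.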

Given that there are lots and lots of numbers in all of this, it's very easy to lose track. There are some that need to be exposed and some that just don't make sense. My job as a character programmer at the time was 20% implementation, 80% running around all the levels figuring out the numbers. Unfortunately you will be playing some parts so much that the differences will disappear, and you really need a third party to evaluate what you're doing. You'll find that tools like curve-generators are useful in all this (give it a few points and it fills in the blanks for you). I would have given anything to be programming in something like GOAL at that point in my life.

There were lots of tricks that you don't think of ahead of time - you need to be on-site during development to get them into the design. A simple example was jumping. I wanted instantaneous jumping for the character: as soon as she landed, it needed to be possible to jump again instantly without dropping a frame. Practically, players cannot achieve that level of accuracy and will more often than not miss the jump, leading to confusion if you have any platformy elements in the game. We solved this by listening for jump input about 0.5s before the player landed on the ground. Once landed, if a jump request had been buffered, the character would jump instantly with no delay.
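
This "jump buffering" trick is small enough to sketch in full. The 0.5 s window comes from the text; the class shape is an illustrative assumption.

```python
# Jump input buffering: remember a jump press for a short window and
# consume it the frame the character lands, so late presses still jump
# instantly with no dropped frame.
JUMP_BUFFER = 0.5  # seconds, per the text

class JumpBuffer:
    def __init__(self):
        self.time_since_press = None

    def press(self):
        self.time_since_press = 0.0

    def update(self, dt):
        if self.time_since_press is not None:
            self.time_since_press += dt
            if self.time_since_press > JUMP_BUFFER:
                self.time_since_press = None  # press expired, forget it

    def consume_on_landing(self):
        # Called on the frame the character touches the ground
        if self.time_since_press is not None:
            self.time_since_press = None
            return True  # jump instantly
        return False
```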

Rays were being cast out into the collision mesh with abundance for the main character. For example when ledge-walking you need to keep a close eye on any floors beneath you to prevent ledge-walking into the floor (e.g. ledge-walking a ramp). Even though we had a very efficient scene representation, traversing the kd-tree for each ray became quite inefficient. I found it simpler and much faster to just put a 2.5m sphere query through the tree to return between 5 and 15 collision triangles that would be used for all ray queries for the character each frame.
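
The caching scheme amounts to: one broad-phase sphere query per frame, then every ray tests only the handful of returned triangles. A sketch, with a brute-force sphere test standing in for the kd-tree traversal (an assumption - the real query went through the tree):

```python
# Per-frame triangle cache: one sphere query around the character, then
# all of that frame's rays test only the cached triangles.
import math

def sphere_query(triangles, centre, radius=2.5):
    # Keep any triangle with a vertex inside the sphere - a cheap,
    # conservative stand-in for a proper sphere/triangle overlap test
    return [t for t in triangles
            if any(math.dist(v, centre) <= radius for v in t)]

class TriangleCache:
    def __init__(self, triangles):
        self.triangles = triangles
        self.cached = []

    def refresh(self, character_pos):
        # Once per frame; the text says roughly 5-15 triangles come back
        self.cached = sphere_query(self.triangles, character_pos)

    def raycast_set(self):
        # Every ray this frame intersects against only these triangles
        return self.cached
```

The win is amortisation: the tree is traversed once per frame instead of once per ray, and the character's rays are all short and local anyway.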

The logic was driven by a very complex state machine. We went through several designs and eventually landed on one that could perform multiple state-transitions per frame. This simplified the design of the states quite significantly. Transitions were simple regular expressions that hooked into function calls in the engine (e.g. button A pressed, last state time > X). The state machine was also designed visually, with the states implemented in code. The implementation had a lot of rough edges but I can't imagine doing anything larger than 10 states without being able to visualise it - and I don't think generating a visual representation from your existing code is ideal.
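
The key property - multiple transitions in one frame - is what lets states stay simple (a "land" state doesn't need to also handle "jump was buffered"; it just transitions again immediately). A sketch, with plain callables standing in for the regular-expression hooks and illustrative state names:

```python
# State machine that keeps transitioning within a single frame until no
# condition fires, with a step guard against transition loops.

class StateMachine:
    def __init__(self, initial, transitions, max_steps=8):
        # transitions: {state: [(condition_fn, next_state), ...]}
        self.state = initial
        self.transitions = transitions
        self.max_steps = max_steps  # guard against infinite ping-pong

    def update(self, ctx):
        for _ in range(self.max_steps):
            for condition, next_state in self.transitions.get(self.state, []):
                if condition(ctx):
                    self.state = next_state
                    break  # re-evaluate from the new state, same frame
            else:
                return  # no transition fired; settle in this state
```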

Animation blending was pretty complex and the main character would be playing up to 10 animations at any one time. However this was mainly because she could use a weapon while doing all her level-interaction moves. Having an animation system that can mirror animations at runtime will cut down on a lot of work (it's really cheap). Useful animation controllers were: transition, partial blend, sequence controller and multiple-axis blend. Open up the guts of your animation engine to the gameplay guy because you'll always be thinking of new ways to blend and make the playback smoother.
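
Of the items above, runtime mirroring is the easiest to show compactly. This is a deliberately simplified sketch - real mirroring reflects joint rotations across the character's sagittal plane as well; here a pose is just per-joint lateral offsets, and the joint names are illustrative.

```python
# Runtime animation mirroring: swap left/right joints and flip the pose
# across the character's centre plane. One authored clip then covers both
# left- and right-handed variants of a move.

MIRROR_PAIRS = {"l_arm": "r_arm", "r_arm": "l_arm",
                "l_leg": "r_leg", "r_leg": "l_leg"}

def mirror_pose(pose):
    # pose: {joint_name: lateral_offset}; paired joints swap names,
    # the lateral component negates, centre joints just flip
    return {MIRROR_PAIRS.get(j, j): -x for j, x in pose.items()}
```

It's cheap because it's a fixed remap-and-negate per joint - no extra animation data is stored or authored.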

Characters were animated on the spot in the animator's tool using lots and lots of small clips. I was pretty adamant that none of the movement in an animation would make it into the locomotion of the game, as I wanted 100% control over the gameplay. This made it a little difficult to prevent the character sliding across the floor, but with some tailor-made blend timings you can pretty much nail it. Sometimes the animators would use editor techniques to animate on the spot, but other times, when that just wasn't practical (e.g. the jumps), we stripped the animation of the root node during playback.
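
Stripping the root at playback can be sketched as zeroing the root joint's horizontal translation in every sampled frame, so the clip plays on the spot and the physics engine keeps full control of world movement. The frame layout and the choice to keep vertical motion are assumptions for illustration.

```python
# Root-motion stripping at playback: zero the root's horizontal
# translation per frame; child joints, being relative to the root,
# are unaffected. Keeping Y is an illustrative choice so crouches
# and jump arcs in the clip still read.

def strip_root_motion(frames, root="root"):
    # frames: list of {joint: (x, y, z)} local translations per sample
    stripped = []
    for frame in frames:
        out = dict(frame)
        x, y, z = out[root]
        out[root] = (0.0, y, 0.0)
        stripped.append(out)
    return stripped
```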

As we found later on, however, this method simply does not work for anything that involves close interaction with the surrounding level or other characters. For example, you can easily animate a character that doesn't slide when walking and can also perform more complex manoeuvres while walking. Or, in a beat-em-up, making characters connect with each other is a complete pain.

The problem with that is that you effectively have two different systems fighting with one another: the physics engine and the animation engine are both trying to control where the character is in the world, so you have to have physics that derives its bounds from the character's current position in the animation (it may even get more complex than that). We kept it simple and went all the way with the physics engine. For example, our character would ledge-walk on the spot in the animation and use a simple constraint in the physics engine to move her through the world (the collision would push the character out of the wall). It worked well; it took a bit of effort to remove any sliding, which might have come for free if the animation had not been on the spot. However, our character could ledge-walk pretty complex geometries around inside/outside corners of varying angles, which would be a bit of a pain with the other technique.

There's no ideal approach to solving these problems, and this is what current state-of-the-art animation research is trying to address. Unfortunately, all I see in current games is exactly that: a willingness to solve these problems, but at the expense of all those cool moves that make traversing a level fun.

We prototyped every aspect of the gameplay very early on and had a rudimentary game you could play within 2 weeks of starting the project - it was a sphere moving through a chequer-board world that demonstrated a few of the key moves in the game. I would suggest doing the same: even though we prototyped pretty heavily, it still wasn't enough, and lots of small issues came up that we never saw coming. But that heavy prototyping also caught a lot of problems early on, and we were able to work around them.

Finally, the game wasn't going to be fun. We did a lot of things right but tried to compete with a team that was 10 times our size and had a release date that continuously shifted. A few more people and a dose of realism as to what we were capable of would have helped a lot. Keep your design simple, demand a large prototyping phase (ideally longer than your production) and once in production, treat any attempt to modify the controls with great skepticism. If you don't have the technology to achieve a specific move/interaction in the game by the time you're out of the prototyping phase, don't plan on including it. If you're designing levels that need to be changed later on "once technology X comes online," you're not implementing a game where you can predict whether the final outcome is fun or not. If you're not prototyping the controls on the hardware and at the framerate of the final game then you're not prototyping it at all.

It's been a couple of years now so I've forgotten an awful lot of the details (e.g. controller history, velocity measurements) but the above should be a start.
