Render buffers

17 Jan 2016

After adding JSON-based planet definitions and support for multiple planet components, the renderer needed to be adapted to render multiple component polygons. I initially implemented the communication of game state between the game engine and the renderer by means of the snapshot classes K14GameSnapshot, K14PlanetSnapshot and K14EntitySnapshot, which were also used for freezing/unfreezing game state and to represent the start state of action replays. At the time, this seemed like a good idea: the goal was to decouple the renderer from the game engine, to allow moving rendering to a separate thread at some point in the future, and maybe run the renderer and the game logic at different frequencies. Achieving this goal required some form of double-buffered game state, and since we already had to make carbon copies of the game state for action replays and freezing/unfreezing, why not use the same set of snapshot classes for all these things?

So I set out to modify the K14PlanetSnapshot class, replacing the previous representation of the planet topology (a single edge-list defining the planet surface) with a list of planet components, represented as an array of K14PlanetComponentSnapshot instances: carbon copies of each K14PlanetComponent. The planet component data included the component name, vertices, edges, attachment points, visual attributes such as color, and so on: basically anything you would need to render and/or serialize/deserialize the planet component.

In the process of making the necessary changes, a feeling was creeping up on me that I was repeating the same kind of code a little too often. Planet components, for example, were now represented by the K14PlanetDefinition class (for initial planet state), the K14PlanetComponentSnapshot class (for run-time, read-only snapshots used for rendering and freezing/unfreezing the game), and by the K14PlanetComponent class (the modifiable in-memory representation of the component). To render the planet component efficiently, the renderer would take the planet component snapshot, triangulate it, and create vertex and index buffers that could be reused on subsequent rendering passes, adding even more code that depended on the data representation of planet components. Because game state (e.g. player stats for the score board) and entities (sprites, visual effects) were also passed to the renderer by means of the snapshot classes, the same observations applied to almost every other part of the rendering code: many parts of the code were sharing the same lowest-common-denominator data structures (the K14...Snapshot classes), effectively creating a tight coupling between unrelated components of the game. Any time a property used by either the core engine or the renderer was added, removed or changed, code in all these components needed to be touched.

All this was spoiling the fun, draining gumption, and slowing progress. It was becoming obvious that abusing the snapshot classes for multiple purposes (serializing the game, and communicating game state to the renderer) was a bad example of code reuse. The disadvantages far outweighed the advantage of reusing the snapshot classes for multiple purposes:

  1. Any piece of game state that needed to be serialized and/or rendered had to be represented (at least) twice: once in a run-time, in-memory class, and once in a snapshot class.
  2. On every frame, all game state included in the snapshot classes had to be copied before sending the snapshot to the renderer, including game state irrelevant to rendering.
  3. The generic nature of the snapshot classes (carbon copies of the game state) made them unsuitable for incremental updates. If only 1% of the game state changed, a carbon copy of the complete game state still had to be made for every rendered frame.
  4. Sharing data structures between (ideally unrelated) components promotes tight coupling, and actually decreases reusability of the more complex components (e.g. the renderer) in favor of reusing the simple components (the snapshot classes).
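To make disadvantages 1–3 concrete, here is a minimal sketch (in Python for brevity, with hypothetical class names; the real code is Objective-C) of the duplication and per-frame copying the snapshot approach forces:

```python
# Hypothetical sketch: every field lives in BOTH the run-time class and
# the snapshot class, and a full carbon copy is made for every frame.

class PlanetComponent:                    # mutable run-time state
    def __init__(self, name, vertices, color):
        self.name = name
        self.vertices = vertices          # changes rarely
        self.color = color

class PlanetComponentSnapshot:            # read-only carbon copy
    def __init__(self, component):
        self.name = component.name
        self.vertices = list(component.vertices)   # copied every frame...
        self.color = component.color               # ...even when unchanged

def snapshot_for_renderer(components):
    # 100% of the state is copied even if only 1% of it changed
    return [PlanetComponentSnapshot(c) for c in components]
```

Every property added to PlanetComponent has to be mirrored in the snapshot class (and in its serialization code), which is exactly the coupling described above.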

Render buffers

To untangle this knot before it got entangled even further, I decided to remove the concept of ‘snapshot classes’ altogether, and to create a strict barrier between the renderer and the game state. To pass state over this barrier, I implemented a new class K14RenderBuffer, to which ‘render commands’ can be added that represent low-level rendering operations. Examples of render commands are K14RenderRectangleCommand, K14RenderSpriteCommand and K14RenderTextCommand.

Render commands for specific rendering operations only carry state required for rendering, and are otherwise completely decoupled from game engine state. For example, rendering a sprite using a K14RenderSpriteCommand involves setting up only the absolute minimum of state required by the renderer: the sprite position, size, rotation, its texture, and in the future possibly properties and flags to indicate visual effects and such. Likewise, a rectangle render command contains just the rectangle position, size and color, and nothing more. This means that, for example, the renderer does not know whether it is rendering the rectangle for a touch area, or a rectangle representing a projectile.
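A rough sketch of what such minimal-state commands look like (Python here for brevity; the field names are assumptions modeled on the description above, not the actual K14 classes):

```python
from dataclasses import dataclass

# Hypothetical minimal-state render commands, mirroring the idea behind
# K14RenderSpriteCommand / K14RenderRectangleCommand.

@dataclass(frozen=True)        # commands are immutable once created
class RenderSpriteCommand:
    position: tuple            # world coordinates
    size: tuple
    rotation: float            # radians
    texture: str               # texture name; nothing engine-specific

@dataclass(frozen=True)
class RenderRectangleCommand:
    position: tuple
    size: tuple
    color: tuple               # RGBA; the renderer has no idea whether this
                               # is a touch area or a projectile
```

Keeping the commands immutable is what makes the cheap buffer copying described below safe.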

Obviously, the renderer needs some additional facilities to allow more complex rendering, for example to enforce a certain drawing order for render commands. For this purpose, the base class for all render commands (unsurprisingly called K14RenderCommand) adds a layer property that indicates drawing order, a type property that indicates the command type, and a renderBits bitfield property that allows grouping and querying render commands in various ways. Examples of render bits that can currently be set are K14RenderCommandBitsEntity and K14RenderCommandBitsOSD, which are used to filter sprite and rectangle render commands for entities and projectiles, and to draw all OSD elements on top of everything else using normalized device coordinates instead of world coordinates. Similarly, the renderer needs to be able to uniquely identify and relate render commands between frames, to be able to set up and reuse internal structures that only need to be updated if the underlying render command changes. An example of this is the planet surface polygon, which is currently rendered using an OpenGL vertex buffer object (VBO), but rarely changes. For this purpose render commands have a string-type id (name) property that can be used to relate render commands between different frames, and a timestamp property that indicates when the command was created.
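A sketch of this base-command idea, assuming hypothetical names for the properties and bit constants (the real K14RenderCommand may differ):

```python
from enum import IntFlag

class RenderCommandBits(IntFlag):
    ENTITY = 1 << 0     # sprites/rectangles belonging to entities/projectiles
    OSD = 1 << 1        # on-screen display, drawn last in device coordinates

class RenderCommand:
    def __init__(self, name, layer, bits, timestamp):
        self.name = name            # relates commands between frames
        self.layer = layer          # drawing order
        self.bits = bits            # grouping/query bitfield
        self.timestamp = timestamp  # invalidates cached renderer state

def commands_with_bits(commands, bits):
    # query helper: all commands with any of the given bits set
    return [c for c in commands if c.bits & bits]
```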

The game loop and renderer use render command name and timestamp to implement a simple copy-on-write approach: before each frame, the view controller that drives the game loop takes the render buffer for the previous frame, and only updates it for render commands that should be changed or removed. This is done in a way that ensures the renderer does not see the changes until the old buffer can be swapped for the updated buffer, by copying the K14RenderBuffer instance, which effectively only copies the underlying container holding pointers to the (immutable) render commands. Looking at it from the other side, the renderer can use the command name and timestamp to relate commands to internal state that may be reused or needs to be released. For example a polygon render command named planet_component_1 may be related to a polygon VBO. As long as the command stays in the render buffer with the same timestamp, the VBO can be rendered directly. If the same command shows up with a new timestamp the VBO can be recreated or updated, and if the command disappears from the render buffer, the VBO can be released.
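The copy-on-write swap and the name/timestamp cache invalidation described above can be sketched as follows (a Python sketch with assumed names and a dict-based container; not the actual K14RenderBuffer API):

```python
from collections import namedtuple

# Hypothetical command: only name and timestamp matter for this sketch.
Command = namedtuple("Command", "name timestamp payload")

class RenderBuffer:
    def __init__(self, commands=None):
        # copying the buffer only copies this container; the (immutable)
        # commands themselves are shared between old and new buffer
        self.commands = dict(commands or {})

    def updated(self, changed=(), removed=()):
        new = RenderBuffer(self.commands)
        for cmd in changed:
            new.commands[cmd.name] = cmd
        for name in removed:
            new.commands.pop(name, None)
        return new          # swap this in; the old buffer stays untouched

def sync_cache(buffer, cache, build):
    # Renderer side: release state for vanished commands, rebuild on a new
    # timestamp, and reuse everything else (e.g. a planet-surface VBO).
    for name in list(cache):
        if name not in buffer.commands:
            del cache[name]
    for name, cmd in buffer.commands.items():
        stamp, _ = cache.get(name, (None, None))
        if stamp != cmd.timestamp:
            cache[name] = (cmd.timestamp, build(cmd))
```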

In the future, the K14RenderBuffer class may get some more advanced ways to query and sort render commands based on a combination of the command type, command bits and the render layer, which would hypothetically allow different rendering backends (OpenGL ES, Metal, software rendering, …) to process the render commands in a way that best matches the target device. I’m pretty sure I will never actually need this for performance or portability, but it was easy to add and may in the future simplify (or actually, further ‘dumb down’) the renderer code. Right now the renderer is very ad hoc, and encodes knowledge about what you will always find in the render buffer for this game: it first gets all polygon commands (which will always be planet components) and renders them, then gets all sprite render commands (which will always be entities), followed by all rectangle commands with the ‘entity’ bit set (always projectiles), and last but not least all remaining commands (rectangle and text) for the OSD elements. This could be simplified a lot using smart queries and sorting, to draw all render commands from a simple switch statement with correct drawing order and minimal OpenGL state changes.
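The ‘smart queries and sorting’ idea could look roughly like this (hypothetical sketch; the command shape and type names are assumptions):

```python
# One sorted pass over the buffer plus one dispatch per command type,
# instead of four hard-coded passes over the render buffer.

def draw_all(commands, draw_fns):
    # Sorting by (layer, type) gives correct drawing order, and keeps
    # commands of the same type adjacent, minimizing GL state changes.
    for cmd in sorted(commands, key=lambda c: (c["layer"], c["type"])):
        draw_fns[cmd["type"]](cmd)   # the 'simple switch statement'
```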


Serialization and deserialization of game state is now implemented directly on the K14Game, K14Planet and K14Entity classes by means of NSCoding. Where previously a K14...Snapshot instance implementing NSCoding was created from the persistent game state, the full object graph including parent-child pointer relations (e.g. game ↔ planet and planet ↔ entity) is now serialized to an archive from the K14Game encodeWithCoder: method, letting NSCoding figure out how to deal with weak pointers (using encodeConditionalObject:) etc. This also obviated the need for initWithSnapshot: initializers for the game, planet and entity classes, which had to ‘parse’ the snapshot and recreate the game state object graph from it. Instead, deserialization is now implemented using the bog-standard initWithCoder: initializers you’ll find in any iOS/OS X program using NSCoding.

Entity-Component model for scripting

Having untangled rendering and serialization, I felt like extending my ‘refactoring run’ a little longer, and changed the way scriptability of games, planets and entities was implemented. I had been reading up on entity-component systems (ECS), an architectural pattern often used in game engines (e.g. Unity) and more complex, high-performance games, and borrowed some ECS concepts to simplify some things in the 2k14: The Game code.

Entity-component systems are typically employed to enforce decoupling of entity data and behavior, to allow adding (possibly at run time) pluggable data or behavior, or to improve performance through better data locality, being a better fit for data-oriented design. A full discussion of entity-component systems is outside the scope of this post, so I’ll refer to the linked articles for a more in-depth treatment. The executive summary is as follows: instead of using object-oriented techniques like inheritance and polymorphism to extend and/or specialize the properties or behavior of entities in a game engine, an ECS architecture prefers composition, defining entities by a list of their components. In its purest form, an ECS treats entities as ‘empty shells’ without any data or behavior, represented for example by just a numerical id. Any data or behavior is added to the entity by associating its id with a set of components that implement said data or behavior. For example, a ‘position component’ could add a coordinate pair to indicate the location of the entity, a ‘physics component’ could add its physical properties and behavior, an ‘AI component’ could add pathfinding, a ‘graphics component’ sprites and animation, and so on.

Obviously this is a bit of an oversimplification that skips over the intricacies of implementing an entity-component system: for example, whether and how components should be able to share and advertise data, how to add components to entities, how to query the components (capabilities) of entities, etc. The articles linked above provide much more detail on the implementation of entity-component systems. It should also be noted that it’s perfectly possible (and often desirable) to take a hybrid approach: add the most basic properties and behaviors of entities (e.g. position, size, etc.) directly to the entity data structure (which could be a struct, or a traditional OOP class), and only use components for more complex, optional or pluggable data/behavior.
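The ‘purest form’ described above, where an entity is nothing but a numerical id, can be sketched in a few lines (Python; all names hypothetical):

```python
# Entities are just ids; all data and behavior lives in per-component stores.

positions = {}        # entity id -> (x, y)      the 'position component'
velocities = {}       # entity id -> (dx, dy)    part of a 'physics component'
_next_id = 0

def create_entity():
    global _next_id
    _next_id += 1
    return _next_id   # an entity is nothing but this number

def step_physics(dt):
    # A 'system' operates on all entities that have the relevant components.
    for eid, (dx, dy) in velocities.items():
        if eid in positions:
            x, y = positions[eid]
            positions[eid] = (x + dx * dt, y + dy * dt)
```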

While I wasn’t (and still am not) convinced a full-blown/pure ECS architecture makes sense for the kind of game I’m making, the idea resonates with me, and I did recognize some of the problems entity-component architectures try to solve in the way I implemented the scripting layer of 2k14: The Game. Previously, scripting abilities for games, planets and entities were implemented by subclassing K14Game, K14Planet and K14Entity, overriding any parts of these base classes that should be handled by scripting code. The forced dependency of the K14Scriptable... classes on their base classes introduced annoying initialization-order dependencies, control flow ping-ponging between the scriptable classes and the base classes when updating game state, the requirement to track both the native class type (K14Planet or K14ScriptablePlanet, for example) and the ‘subclass’ type (the Lua class name for scriptable entities, none for non-scriptable entities), and so on. Another deficiency of this architecture was that it would surely get me in even more trouble later, if I ever wanted to mix and match scriptable and non-scriptable entities with different (possibly optional) capabilities, requiring some form of multiple inheritance (or rather a facsimile of it, since Objective-C does not natively support multiple inheritance). Not a nice prospect.

Anyway, I won’t bore anyone with more details, because none of this is concerned with problems related to the game itself: all these problems were self-imposed by limitations of the implementation language and an architecture that did not match the problem domain very well. Let’s focus on the code changes I made to fix or avoid the kludges that resulted from insisting on classic OOP inheritance.

I took a bit of an ‘ECS-lite’ approach to get rid of the derived K14Scriptable... classes, replacing them with ‘scripting component classes’ as you would find them in a ‘real’ ECS architecture. I stopped there though, and didn’t create components for other aspects of entities. Entities are still represented by the relatively ‘rich’ OOP class K14Entity, which directly implements all entity data and behavior except its scripting capabilities. The scripting component is currently always present, but in theory could be ‘attached’ to an entity instance after creating it, if needed even in the middle of a game. Appropriate hooks were added, e.g. in the K14Entity step method, to call the script component’s step method if the entity has one. When the entity is serialized, advertised properties of the Lua class that implements the entity are retrieved through the scripting component and sent to the NSCoder as key-value pairs. When deserializing the instance, the scripting component can simply be re-created, creating a fresh Lua instance for the entity, which is then restored to its original state using the serialized Lua properties.

I applied the same concept to the K14Game and K14Planet classes, even though they would have no part in a ‘real’ entity-component system, as they are neither entities nor components. I also didn’t reduce references from the scripting components to the scripted K14Entity to numerical ids as you would usually do in a pure ECS, using some kind of ‘component manager’ to retrieve entities by id. Instead, I simply used weak pointers from the scripting component to its K14Entity. Summarizing: basically the only thing I took from ECS architecture is composition instead of inheritance ;-)
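The resulting ‘ECS-lite’ shape, including the weak pointer from the scripting component back to its entity, can be sketched like this (Python; class and method names are illustrative, not the real K14 API):

```python
import weakref

class ScriptComponent:
    def __init__(self, entity, script_class):
        self._entity = weakref.ref(entity)   # weak, like the Objective-C code
        self.script_class = script_class     # e.g. the Lua class name
        self.properties = {}                 # advertised (serializable) state

    def step(self, dt):
        entity = self._entity()              # resolve the weak reference
        if entity is not None:
            # scripted per-step behavior would run here; as a stand-in we
            # just count the steps in an advertised property
            self.properties["steps"] = self.properties.get("steps", 0) + 1

class Entity:
    def __init__(self, name, script_class=None):
        self.name = name
        # composition instead of inheritance: scripting is a component,
        # not a K14ScriptableEntity subclass
        self.script = ScriptComponent(self, script_class) if script_class else None

    def step(self, dt):
        # ...core (native) entity behavior would run here...
        if self.script:                      # hook into the optional component
            self.script.step(dt)
```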

It may seem like I spent a lot of time describing such a simple change from inheritance to composition for entities, but it cannot be overstated in how many places this simplified the code. OOP is a useful programming paradigm and inheritance can be a powerful tool, but like any other powerful tool, when used blindly or carelessly it will get you into trouble. I may have used only one aspect of ECS architectures (favoring composition over inheritance), and I still don’t benefit from its other advantages because I ignored the rest (data-oriented design, getting rid of a class to represent entities completely, replacing it by just a numerical id), but I already feel like the quality of one of the most important parts of the core engine has improved a lot. I will definitely apply the lessons learned when adding additional core-engine capabilities to entities in the future.

Adding it all up

I’ve put together two simplified architecture diagrams to illustrate the ‘before’ and ‘after’ situations, which may be useful to put all of the above in perspective. First the ‘before’ architecture:

And the architecture after introducing render buffers, getting rid of the snapshot classes, and replacing the scripting classes K14Scriptable... by scripting component classes K14Lua...:

These should speak for themselves if you’ve read all of the above, so I’ll leave most of their interpretation as an exercise for the reader ;-). One thing that deserves mention is the color-coding in the diagrams. Using the red (core engine), green (rendering) and blue (scripting) boxes, I’ve tried to indicate the areas where these three components of the game overlap (in functionality or representation), or have some kind of internal (practical, implementation-related) dependency on each other, for the ‘before’ and ‘after’ situations. The way these boxes are drawn is not very scientific, maybe not even completely correct, but they paint a relatively accurate picture of the entanglement between the main components in the ‘before’ situation, and the clean separation of concerns in the ‘after’ situation.

Next steps

Now that I feel the core parts of the game are implemented in a clean and sane way, I finally feel like I can safely start adding additional complexity, in the form of new features. As mentioned before, the first thing will be modifiable planet surfaces controlled by switches and servos. Another minor thing that has been on my list for quite a while is adding support for serializing and re-creating joints, when freezing the game, or when creating/replaying action replays. Right now, joints such as the ship-orb joint are not serialized, which means any replay that starts from a state where the orb is lifted will be useless, just like looping a replay that ends while the orb is lifted. I will try to fix this ‘on the side’.

Development scoreboard

Compared to the total development time so far, I feel like I spent an enormous amount of time pulling the old code apart and re-assembling it without the snapshot classes and the K14Scriptable... game, planet and entity subclasses. I estimate about 20 hours, but I could be an hour or two off. I prefer to err on the safe side, so I’ll round up, and quote total development time at ~275 hours. Native code SLOC count went up by 769 lines compared to before I implemented planet definitions, for a total of 7172. Most of the extra code can be attributed to reading, parsing and serializing planet definitions. I was a little disappointed with this number, as I had hoped that removing the snapshot classes would reduce the SLOC count enough to offset the line count added by new functionality, but as it turns out the boilerplate required to implement the render buffer and the render commands is about the same amount of code as was removed by deprecating the snapshot classes. Lua SLOC count is at 628, down from 649, which can be attributed to removing some debug/test code; nothing new was added.