Input handling

24 Mar 2014

It’s been a while since the last update: a business trip and various other distractions grabbed enough of my time and attention to leave too little to finish this post. I did make some progress on the code front though, mostly related to input handling.

With planet setup and minimal rendering in place, the next thing I wanted to get sorted out was input handling. This may sound strange: with so many other things left to implement, why would I need input handling already? There is hardly anything in the way of gameplay, no game logic or graphics, almost nothing to control. My reasoning was as follows: many of the things that still need to be implemented and tested depend on specific in-game conditions. Turrets should fire when they can see the player, for instance. Collisions have to be triggered by actual entities hitting each other, or the planet surface. The camera should track the player ship while it moves. And so on… Hard-coding all these conditions, either temporarily during development or permanently in a test case, would be messy, cumbersome and inconvenient. It would be much easier to ‘replay’ certain in-game scenarios, starting from some initial condition, as if someone were actually playing. What better way than to simulate input events using the same input handling interface we will use later on, when real touch and accelerometer processing is implemented? (I’m planning to use the accelerometer to control player ship rotation.)

Input handling basics

The first design decision to take was how to communicate controller inputs to the game loop. A typical MVC-like pattern in iOS games is to have a view controller (UIViewController) that implements the relevant touch event callbacks it inherits from UIResponder. Inside the callbacks the view controller can do some filtering or interpretation of the touch events, for example to recognize gestures or to translate low-level touch events (‘touch began/ended at coordinate x,y’) into higher-level semantic actions (‘user swiped left’ or ‘user touched button B’), and eventually it triggers some kind of state change by calling a selector on the class representing the game state (the ‘model’ of the MVC pattern). For accelerometer events the idea is very similar, with the following additions: the view controller has to register for accelerometer events using a Core Motion CMMotionManager, and it will almost always have to perform some non-trivial filtering and interpretation of the raw accelerometer data. I will probably go into accelerometer handling details in a later post; for now, let’s just assume we have a view controller that can reliably convert accelerometer readings into a simple, single-axis tilt angle.
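Just to sketch what that registration looks like in practice, here is a rough fragment. The view controller class name is a placeholder, and the single-axis tilt computation is deliberately naive, without the filtering a real implementation would need:

#import <UIKit/UIKit.h>
#import <CoreMotion/CoreMotion.h>

@interface K14ViewController : UIViewController
@property(nonatomic, strong) CMMotionManager *motionManager;
@end

@implementation K14ViewController

-(void) viewDidLoad
{
    [super viewDidLoad];

    // Register for raw accelerometer updates on the main queue
    self.motionManager = [[CMMotionManager alloc] init];
    self.motionManager.accelerometerUpdateInterval = 1.0 / 60.0;

    [self.motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                             withHandler:^(CMAccelerometerData *data, NSError *error) {
        if (error != nil)
            return;

        // Naive single-axis tilt: angle of the gravity vector in the screen
        // plane, in degrees; a real implementation would low-pass filter this
        float tilt = atan2f(data.acceleration.x, -data.acceleration.y) * 180.0f / M_PI;

        // ...interpret the tilt angle and forward it to the game state
        NSLog(@"tilt: %f degrees", tilt);
    }];
}

@end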

One side note before we continue: the pattern described above, where input handling is implemented in the view controller, is not the only way to do it, and also not always the best way. For applications with complex view hierarchies composed of many custom views that can be re-used or composited in various screens, it may be beneficial to implement input handling inside custom UIView classes, or, in the case of a view controlled by multiple layered view controllers, inside controllers below the root view controller. This makes perfect sense for certain applications, but for games, where you usually have a single UIView representing the ‘canvas’ to draw on and a single (root) view controller, I believe the approach described in the previous paragraph makes the most sense.

For this game, I wanted to turn up the level of abstraction in the input handling by one notch, by creating an intermediate object representing the higher-level input state and input events that can occur in the game. So instead of directly actualizing input events by calling selectors on the class holding game state, the view controller will register the event in a class named K14Inputs, which will be inspected in the next game update (i.e. the next step call). Translating the input state into game state changes is up to the model classes. This approach has a few advantages:

  • Easy coalescing of inputs

    Multiple conflicting or canceling inputs between two time steps don’t result in jittery movement, or in unnecessary calculations to update game state that will be reverted or overridden by the very next input event.

  • Separation of detection and processing of input events

    This allows performing game state updates that may be computationally expensive (such as updating certain aspects of the physics simulation) on a background thread, instead of doing everything on the application main thread that drives input events on iOS.

  • Run-time control scheme extensibility

    In the future, it could be interesting to be able to add buttons or control zones dynamically using scripting, for example two control zones for left and right thrust instead of a single one. The added abstraction layer could set up such touch zones, instead of having to hard-code them in the view controller.

  • Easier to simulate and replay

    When all input events are passed through an intermediate ‘input event abstraction class’, simulating user inputs or replaying a set of pre-cooked input events does not depend on the view controller class, and does not require emitting low-level input events such as accelerometer readings or x,y touch coordinates. Instead, simulating or replaying an input event boils down to ‘set tilt to -10 degrees’ or ‘set thrust on’, by calling a selector or setting a property of the input handling class.

At this stage, the last of the advantages listed above is the most relevant one. Simulated input events and replay functionality simplify testing of new game features, and an intermediate input handling class makes them very easy to implement. The other advantages will prove their usefulness later on in the development process.

Implementation details

Implementation-wise the input handling class is pretty straightforward, so considering the length this post has grown to already, I won’t elaborate on it too much. The class provides functionality to get and set the active state of inputs labeled with arbitrary integer identifiers, which is another way of saying it wraps a dictionary mapping integers to booleans. Additionally, it declares a floating-point scalar property tilt, which can be set from the filtered, processed accelerometer data. The class is called K14Inputs and its interface is declared like this:

@interface K14Inputs : NSObject

/** Device accelerometer tilt */
@property(atomic) float tilt;

// Getting and setting input active state
-(void) setInputActive: (int) input active: (bool) active;
-(bool) isInputActive: (int) input;

@end
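For completeness, the implementation behind this interface can be little more than a mutable dictionary; something along these lines (a sketch, with a @synchronized block thrown in because the inputs may be written on the main thread and read from whatever thread ends up running the game step):

@implementation K14Inputs
{
    NSMutableDictionary *_active;   // maps input id (NSNumber) to active state (NSNumber/BOOL)
}

// The atomic 'tilt' property declared in the interface is auto-synthesized

-(id) init
{
    if ((self = [super init]))
        _active = [NSMutableDictionary dictionary];
    return self;
}

-(void) setInputActive: (int) input active: (bool) active
{
    // Only the latest state per input is stored, so multiple events
    // arriving between two game steps are coalesced automatically
    @synchronized(self)
    {
        _active[@(input)] = @(active);
    }
}

-(bool) isInputActive: (int) input
{
    @synchronized(self)
    {
        return [_active[@(input)] boolValue];
    }
}

@end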

As explained in the previous section, the basic idea is to have the root view controller for the main view handle the UIKit events emitted by iOS, and modify a singleton instance of the K14Inputs class associated with the current game accordingly. Input state is then queried in the step functions of K14Planet and the K14Entity subclasses, which react to it by modifying game and actor state, such as rotating the player ship on tilt, or applying force to it when the thrust input is active.
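In code this boils down to something like the fragment below. The K14_INPUT_THRUST identifier is the one used in the replay example further down; the inputs property on the view controller and the applyThrust: and rotation names on the player ship are made up for illustration, and treating any touch as ‘thrust on’ is of course a stand-in for real touch zone handling:

// Root view controller: translate raw UIKit touch events into input state
-(void) touchesBegan: (NSSet *) touches withEvent: (UIEvent *) event
{
    [self.inputs setInputActive:K14_INPUT_THRUST active:YES];
}

-(void) touchesEnded: (NSSet *) touches withEvent: (UIEvent *) event
{
    [self.inputs setInputActive:K14_INPUT_THRUST active:NO];
}

// Player ship step function: query the coalesced input state and react to it
-(void) step: (float) dt
{
    K14Inputs *inputs = self.inputs;

    if ([inputs isInputActive:K14_INPUT_THRUST])
        [self applyThrust:dt];      // apply engine force while thrust is active

    self.rotation = inputs.tilt;    // rotate the ship according to device tilt
}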

Putting all of this together revealed another oversight in the data model for game and entity state. If the view controller needs to write input state to a singleton instance of K14Inputs, where should it get this instance from? The topmost data model class for game state has been K14Planet up to now, but associating the input handling class directly with the active K14Planet instance didn’t seem right. There could be input events that result in modification of game state unrelated to the planet itself, for example. Speaking of which, where would game state like that be defined anyway? It doesn’t make sense to define stuff like fuel level, number of ships left or time elapsed on K14Planet. The most obvious place for all of this is a parent class that creates and manages K14Planet, in addition to holding the game state unrelated to the planet itself. For this purpose I introduced a class K14Game, which at this point in time is not much more than an attachment point for the K14Inputs singleton and the active K14Planet instance. In due time, it will be extended with global game state properties and functionality to control the game loop (pausing, resuming, etc.), handle game termination conditions, and so on. Here’s a quick class diagram for the new situation:
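In interface form, K14Game is currently little more than this. The property names and the step parameter below are my shorthand, not necessarily the final ones; initWithInputEvents: is the initializer used in the replay snippet further down:

@interface K14Game : NSObject

/** Input state shared between the view controller and the game loop */
@property(nonatomic, readonly) K14Inputs *inputs;

/** The currently active planet */
@property(nonatomic, readonly) K14Planet *planet;

// Optionally pass an action replay (an array of replayable events)
-(id) initWithInputEvents: (NSArray *) events;

// Advance the game state by one time step
-(void) step: (float) dt;

@end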

Replay functionality

Using the K14Inputs class as an actuator, I hacked together some really simple action replay classes to assist in simulating and reproducing various game scenarios during development, without having to write actual touch/accelerometer event handling code, and without having to ‘play’ the game on actual hardware to test it.

The K14ActionReplay class can be described as a simple list of input events, sorted by the timestamps at which they should be activated, with a single method replayEvents that ‘actuates’ every event with a timestamp up to the game time elapsed at the time of the call, and then removes it from the list. To eliminate frame time variance causing run-to-run replay differences, a fixed timestep is passed on construction of the K14ActionReplay instance, which is then used for the K14Game step updates instead of the frame time coming from the simulator.

Actuating an input event simply modifies the K14Inputs instance of the K14Game passed on construction of the action replay class to reflect the event. To separate advancing the replay from actuating the events on the K14Inputs instance, I defined a protocol K14Replayable with a single selector actuate that takes the K14Inputs instance to modify. Any object that implements this protocol can be added to an action replay; for now, I implemented two: K14ActivateEvent and K14TiltEvent. The action replay can be associated with a game by passing it as an optional argument to the K14Game constructor, and is advanced using its replayEvents selector on every invocation of K14Game step.
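To make the mechanics more concrete, here is a rough sketch of what the protocol, one of the event classes and the replay method could look like. The timestamp property on the protocol and the replayEvents: parameters are my own shorthand, not necessarily the actual code:

#import <Foundation/Foundation.h>

// A replayable event: something with a timestamp that knows how to modify
// a K14Inputs instance when actuated
@protocol K14Replayable <NSObject>
@property(nonatomic, readonly) float timestamp;
-(void) actuate: (K14Inputs *) inputs;
@end

// One concrete event type: sets the tilt input at a given point in game time
@interface K14TiltEvent : NSObject <K14Replayable>
-(id) initWithTimestamp: (float) timestamp angle: (float) angle;
@end

@implementation K14TiltEvent
{
    float _angle;
}

@synthesize timestamp = _timestamp;

-(id) initWithTimestamp: (float) timestamp angle: (float) angle
{
    if ((self = [super init]))
    {
        _timestamp = timestamp;
        _angle = angle;
    }
    return self;
}

-(void) actuate: (K14Inputs *) inputs
{
    inputs.tilt = _angle;
}

@end

// The action replay: a queue of replayable events sorted by timestamp
@interface K14ActionReplay : NSObject
-(void) replayEvents: (float) gameTime inputs: (K14Inputs *) inputs;
@end

@implementation K14ActionReplay
{
    NSMutableArray *_events;   // id<K14Replayable>, kept sorted by timestamp
}

-(void) replayEvents: (float) gameTime inputs: (K14Inputs *) inputs
{
    // Actuate and discard every event scheduled at or before the current
    // game time; combined with the fixed timestep driving the K14Game step
    // updates, this makes a replay deterministic from run to run
    while (_events.count > 0)
    {
        id<K14Replayable> event = _events[0];
        if (event.timestamp > gameTime)
            break;

        [event actuate:inputs];
        [_events removeObjectAtIndex:0];
    }
}

@end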

To round up this rather long post, here’s a code snippet that illustrates how to set up an action replay, with a screen recording of the replay running inside the current ‘game engine’:

NSArray *action_replay = @[
    [[K14ActivateEvent alloc] initWithTimestamp:2.75f input:K14_INPUT_THRUST active:YES],
    [[K14ActivateEvent alloc] initWithTimestamp:4.0f input:K14_INPUT_THRUST active:NO],
    [[K14TiltEvent alloc] initWithTimestamp:3.5f angle:-11.0f],
    [[K14TiltEvent alloc] initWithTimestamp:4.0f angle:0.0f],
    [[K14TiltEvent alloc] initWithTimestamp:5.0f angle:11.0f],
    [[K14TiltEvent alloc] initWithTimestamp:6.0f angle:0.0f],
    [[K14ActivateEvent alloc] initWithTimestamp:5.25f input:K14_INPUT_THRUST active:YES],
    [[K14ActivateEvent alloc] initWithTimestamp:6.0f input:K14_INPUT_THRUST active:NO],
    [[K14TiltEvent alloc] initWithTimestamp:6.25f angle:-11.0f],
    [[K14TiltEvent alloc] initWithTimestamp:7.5f angle:0.0f],
    [[K14ActivateEvent alloc] initWithTimestamp:6.5f input:K14_INPUT_THRUST active:YES],
    [[K14ActivateEvent alloc] initWithTimestamp:8.25f input:K14_INPUT_THRUST active:NO],
    [[K14TiltEvent alloc] initWithTimestamp:9.5f angle:11.0f],
    [[K14TiltEvent alloc] initWithTimestamp:10.25f angle:0.0f],
    [[K14ActivateEvent alloc] initWithTimestamp:9.5f input:K14_INPUT_THRUST active:YES],
    [[K14ActivateEvent alloc] initWithTimestamp:11.0f input:K14_INPUT_THRUST active:NO],
];

// Create game instance
K14Game *game = [[K14Game alloc] initWithInputEvents:action_replay];

Obviously, a single-shot, non-repeating list of events of only two different types is a little limited, but it will do for now. I can foresee building out the action replay class though, maybe not just for debugging, but also to make gameplay recordings, for example for funky replay effects like the ones in Super Meat Boy :-)

Development scoreboard

Refactoring the data model classes to add the K14Game class and writing the action replay classes took a little more time than I had expected: about 4 hours of my flight to San Francisco, and another 3 afterwards. This brings the total development time to about 19 hours. The SLOC count is now 539, an increase of 200 since the last post, most of which is in the replay handling.