I have a new design dilemma I would like some feedback on. Right now I'm working on the BattleTimer class, which is an advanced timer that derives from SystemTimer and adds an extra set of features. One of these features is the ability to apply a floating-point multiplier that speeds up or slows down the pace at which the timer updates. So for example, if we are about to update the timer by 20ms but a 1.4f timer multiplier is active, we'd update the timer by 28ms instead. This feature will primarily be used by status effects which change an actor's agility, though I can't predict what future uses it may have.
Now here's the problem. Let's say I'm supposed to apply a 1.5f multiplier to every update, but the system the game is running on is fast enough that every update is something like 5ms or 7ms. Well, our timers only have an accuracy of 1ms, so we can't add the 7.5ms or 10.5ms the multiplier calls for. And you may say "so what, it's only 0.5ms?". Well, those missing half-milliseconds accumulate. Over the course of a typical 12 second idle wait time for an actor, that could theoretically add up to around 800ms of "lost time" (at 1.5x, reaching 12 seconds of timer time should take about 1,600 of those 5ms updates, each dropping 0.5ms). Almost a full second off from where we are supposed to be.
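To put numbers on that, here's a throwaway simulation of the naive per-update approach (the names here are made up for illustration, not the actual SystemTimer/BattleTimer code):

```cpp
#include <cstdio>

int main()
{
    float multiplier = 1.5f;
    unsigned int timerMs = 0;   // our timers only store whole milliseconds
    float idealMs = 0.0f;       // what the timer "should" read

    // Simulate a fast system updating every 5ms until the ideal timer
    // reaches a 12 second wait (1600 * 5ms * 1.5 = 12000ms).
    for (int i = 0; i < 1600; ++i)
    {
        unsigned int delta = 5;
        timerMs += static_cast<unsigned int>(delta * multiplier); // 7.5 truncates to 7
        idealMs += delta * multiplier;
    }

    std::printf("timer: %ums  ideal: %.1fms  lost: %.1fms\n",
                timerMs, idealMs, idealMs - timerMs);
    // prints: timer: 11200ms  ideal: 12000.0ms  lost: 800.0ms
}
```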
So the solution I'm thinking of adding is one that waits for a certain amount of raw time to accumulate before applying the update. For example, it waits until 10ms has elapsed (across one or more loops) and applies the multiplier to each full 10ms that has passed, instead of at every single update. Those extra few milliseconds that were lost/gained then get applied to the timer accordingly.
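Roughly what I have in mind, as a sketch only (member names like m_pendingMs are placeholders, not the real BattleTimer interface):

```cpp
class BattleTimerSketch
{
public:
    explicit BattleTimerSketch(float multiplier) : m_multiplier(multiplier) {}

    void Update(unsigned int rawDeltaMs)
    {
        // Bank the raw (unscaled) milliseconds until a full chunk is available.
        m_pendingMs += rawDeltaMs;

        // Apply the multiplier once per whole 10ms of raw time, possibly
        // several times if a single update spanned more than one chunk.
        while (m_pendingMs >= kIntervalMs)
        {
            m_pendingMs -= kIntervalMs;
            // Round to the nearest ms so float representation (e.g. 1.4f)
            // doesn't accidentally truncate away a whole millisecond.
            m_timerMs += static_cast<unsigned int>(kIntervalMs * m_multiplier + 0.5f);
        }
    }

    unsigned int ElapsedMs() const { return m_timerMs; }

private:
    static constexpr unsigned int kIntervalMs = 10;

    float        m_multiplier;
    unsigned int m_pendingMs = 0;   // raw ms waiting for the next chunk
    unsigned int m_timerMs   = 0;   // scaled time, 1ms accuracy
};
```

The key point is that the multiplier only ever sees whole 10ms chunks, which is exactly where the granularity trade-off below comes from.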
The problem with this approach is that it limits the granularity of the multiplier. If we specify a multiplier of "1.55", it's only going to have an effect of "1.5", since we still can't apply that extra 0.5ms (10ms * 1.55 = 15.5). We could solve this by making the update interval for the timer larger, but the larger we make the update interval, the longer it takes to apply the multiplier, and our timer values would start to look "jumpy". For example, an interval of 100ms would give us a multiplier accuracy of 0.01 (any change of that size would be processed correctly), but for a multiplier of 1.5, we'd see a sudden 50ms jump every 100ms as the multiplier gets applied at that interval.
Right now I'm thinking we should stick with an update interval of 10ms, so multiplier effects are applied at a small enough interval that they should not be noticeable to the player, and give up on applying multipliers to an accuracy finer than 0.1. What do you think? Is 10ms a good multiplier update interval to use? How about 20ms instead (giving us a multiplier accuracy of 0.05)? Or is there a completely different solution that is better and that I haven't thought of yet?