Why is FPlatformTime::Seconds already past 6 months?

From the comments of FPlatformTime::Seconds()

// add big number to make bugs apparent where return value is being passed to float

Why does this expose bugs?

Within the first minute or so of play, the smallest increment a float (32-bit floating point) can represent is about 3*10^-6 seconds (3 microseconds).
However, the farther you get from 0.0, the coarser this spacing becomes. After a day of continuous play the counter can only advance in steps of roughly 8/1000ths of a second, and after a week the step grows to 1/16th of a second! Races can be lost by less than that!

At the arbitrary point Epic chose to add to Seconds() (2^24 = 16,777,216 seconds, a bit over six months), the minimal increment is two whole seconds.

What’s the impact in your code? If your game (or, more realistically, your multiplayer server) ran for 6 months straight and you had stored LastFiredTime as a float, up to two whole seconds could pass before your Fire routine registered that any time had passed at all. That would look very sloppy to the player, who finds they’re unable to fire for up to two seconds after they should have already reloaded!

It’s a bit of tough love that will expose bugs that you shouldn’t have in the first place. You can pre-emptively thank them for saving you from tearing your hair out 6 months post-launch of your amazing multiplayer game.

See a more thorough explanation at this site:


Here’s the code I used to see minimal increment values if you want to play around with it:

#include <cstdint>   // int32_t
#include <iostream>

// Type-punning union: incrementing the integer view steps the float
// to the next representable value. (Strictly undefined behavior in
// standard C++, but it works on mainstream compilers.)
union Float_t
{
    int32_t i;
    float f;
};

// Gap between val and the next representable float above it
float delta(float val)
{
    Float_t incremented;
    incremented.f = val;
    incremented.i++;

    float d = incremented.f - val;
    return d;
}


int main()
{
    auto hour = delta(3600.0f);
    auto day = delta(86400.0f);
    auto week = delta(7*86400.0f);
    auto year = delta(365*86400.0f);
    auto epic = delta(16777216.0f);   // 2^24: the point Epic adds to Seconds()

    std::cout << "hour: " << hour << std::endl;
    std::cout << "day: " << day << std::endl;
    std::cout << "week: " << week << std::endl;
    std::cout << "year: " << year << std::endl;
    std::cout << "epic: " << epic << std::endl;

    return 0;
}

And just in case you’re asking “Isn’t this going to be an issue with double as well?”:

After 1 hour: 4.5e-13s (.45 picosecond)
After 1 day: 1.5e-11s (15 picoseconds)
After 1 week: 1.2e-10s (.12 nanoseconds)
After 1 year: 3.7e-9s (3.7 nanoseconds) (Epic’s chosen time has the same minimum increment)
After 1 century: 4.7e-7s (.47 microseconds)
After 1 millennium: 3.8e-6s (3.8 microseconds)
After 3.5 billion years we run into an issue… 16 seconds!

When programming large systems intended to run for a long time, a team I was on tended to use integers to count time. A 64-bit integer ticking every X microseconds (around one tick per 200 microseconds in our case, as I recall) can run for millions of years without rolling over. Floats and doubles are for approximate work only (such as physics computations; as I read somewhere, our knowledge of physics has only been experimentally tested to around 15 significant figures). Note that 32 bits isn’t enough: 2^32 microseconds is only a bit over an hour.

I imagine floats were used so the time could be fed straight into physics computations in seconds, trading long-run precision for performance. Of course, now that graphics cards support doubles natively, doubles could be used; when I first looked at GPU programming, doubles on graphics cards were a new thing, so most code stuck with floats.

Using fixed-point values also makes operations more predictable, as long as you never need to be more precise than the minimum step. Without a strong type system that can encode the scaling factor, though, it is easy to pass a milliseconds value where the parameter expects seconds. With floating point, on the other hand, you need to be careful of many things. The first that come to mind are 1) not exceeding the number of significant digits (approximately 15 for double) and 2) not accumulating error in continuously applied math operations. But floats are still incredibly useful if you stay well inside those bounds! I agree that a lot of the use of 32-bit floats in game development comes from their efficiency in physics engines and GPU calculations (keeping the earlier tradeoffs in mind), giving you more time per frame for the rest of the game.

