Is using DeltaTime from LastMove correct?

We keep reusing the DeltaTime from the last move, but that only works if our own DeltaTime more or less matches that of the pawn’s owner, doesn’t it? If our framerate is twice that of the owner, we’ll be re-applying the move twice as fast as we should, and the car will move twice as fast.

I think a more accurate solution would be:

void AGoKart::Tick(float DeltaTime)
{
	if (GetLocalRole() == ENetRole::ROLE_SimulatedProxy)
	{
		FGoKartMove Move = ServerState.LastMove;
		if (Move.Time != 0) // first tick after receiving the update
		{
			Move.DeltaTime = GetWorld()->GetGameState()->GetServerWorldTimeSeconds() - Move.Time;
			ServerState.LastMove.Time = 0;
		}
		else // subsequent ticks until the next update arrives
		{
			Move.DeltaTime = DeltaTime;
		}
		// ...then simulate using Move as before
	}
}

where, on the first tick after receiving an update (Move.Time != 0), I use the time elapsed since the move was created (per the server clock), and the simulated proxy’s own DeltaTime on every tick after that until the next update arrives.

The main reason for using DeltaTime is the differing framerates between the various clients. It does mean that there may be slight lags on some clients, but it also means that different players with different framerates will approximately line up. Using the server time won’t correct this issue and may even complicate it (though it may not).

Using the server time just means they are all using the same clock, but the times of the moves still won’t match or fall in the same intervals, because tick times differ between clients. The only way around that would be to enforce framerates and reject players who cannot hit and maintain a consistent frame rate, say 30fps. You really shouldn’t do that.

Also, when it comes to simulation, you may later need to know the time difference between two moves. That means working out the difference between the two server times so you can better extrapolate until the next confirmed move is available. While not covered in the course, this could then be used to estimate acceleration until the next move comes in, for example.
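For instance, here’s a rough sketch of that idea in plain C++ (no Unreal types; the struct and function names are hypothetical, and positions are 1D for brevity): estimate velocity from the server-time difference between two confirmed states, then extrapolate forward.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical confirmed state: a position plus the server time it was created.
struct ConfirmedState
{
    double Position;   // 1D for brevity
    double ServerTime; // seconds, as reported by the server clock
};

// Estimate velocity from two confirmed states, then extrapolate forward.
// Only the *difference* between the two server times matters here,
// not their absolute values.
double Extrapolate(const ConfirmedState& Prev, const ConfirmedState& Curr,
                   double SecondsAhead)
{
    double Delta = Curr.ServerTime - Prev.ServerTime;
    if (Delta <= 0.0)
        return Curr.Position; // no usable interval; hold position
    double Velocity = (Curr.Position - Prev.Position) / Delta;
    return Curr.Position + Velocity * SecondsAhead;
}
```

With a third state you could difference two velocities the same way to get an acceleration estimate, at the cost of amplifying noise.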

Really, it comes down to the tick frequency, which in an ideal world would be 1/60th of a second (the most common), but it can easily vary from 1/15th to 1/144th of a second or less.

Then you have the server update frequency. This is not the same as the framerate. Usually 1/10th of a second is the sweet spot; having updates more frequent than that could cause issues for those with lower framerates.

Why would they line up? If the DeltaTime of the move is 1/60th of a second, but one simulator renders at 30 FPS and another renders at 120 FPS, the former will apply a 1/60 delta every 1/30th of a second while the latter will apply the same 1/60 delta every 1/120th of a second. So the simulated car will run at half speed on the former and at double speed on the latter.
That’s why in my proposal I apply the simulator’s DeltaTime rather than using the Move’s original value.
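To put numbers on that, here’s a standalone sketch (plain C++, no Unreal types; the function name and the constant speed of 1 unit/second are made up for illustration) that replays a fixed 1/60 move delta at different tick rates:

```cpp
#include <cassert>
#include <cmath>

// Distance covered after `seconds` of wall time when every tick applies a
// move authored with `moveDelta`, ticking `tickRate` times per second.
// The "true" speed is a constant 1 unit/second for simplicity.
double SimulatedDistance(double seconds, double moveDelta, double tickRate)
{
    const double speed = 1.0; // units per second
    int ticks = static_cast<int>(seconds * tickRate);
    double distance = 0.0;
    for (int i = 0; i < ticks; ++i)
        distance += speed * moveDelta; // re-applying the owner's delta every tick
    return distance;
}
```

Over one second of wall time the 30 FPS proxy covers 0.5 units and the 120 FPS proxy covers 2.0 units, while the true distance is 1 unit; only a proxy that happens to tick at exactly 60 FPS matches the owner.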

Similarly, the simulators don’t account for the time it took the Move to go from the controller to the simulator.

From that point on, I lost you; I feel like I’m missing some context. For instance, how were you thinking of using the server time? I am using it to compute the delta between when the move was created on the controller and when it is received and simulated on the simulator. That makes my use independent of frame rate, so it doesn’t seem to fit with your idea.

Simply because in an ideal world, that’s what you’ll get. In actual fact it is very unlikely to hit exactly 60fps or 30fps; it will probably fluctuate, so the delta will vary from frame to frame. Subtly, but it will, and this will throw everything off.

You can’t use exact values, or rather expect them to match 100%, because of a number of factors. First, CPUs don’t always give the exact same value for a float; I have two PCs here, and the Xeon’s calculations produce different results from the i7’s. The variation is slight but enough. Second, you cannot rely on exact multiples for framerates, or even exactly consistent framerates. Next, you need to be able to handle packet drops or lag; you may get information that is delayed. Finally, the network updates won’t match any of these either.

So, ideally, Tick/TickComponent may happen every 1/60th of a second, but in reality it is going to be give or take a few microseconds to a few milliseconds, more if frames are being dropped due to load. Network updates may be set to 10 times a second, but in reality they will again vary by a few microseconds or more, and then be delayed between the client sending and the server receiving. Depending on the comms and server location, this could be anywhere from 3-4 milliseconds to half a second or more of network latency. Typically, between the US and Europe the latency is ~200ms; Europe to Australia it is upwards of 750ms.

All this impacts what is going on, so all you know for certain is that at some point a tick will occur. Now, the reason the delta is important is that it is as precise as it can be. Getting the time on the server itself takes time, so the difference between two server timestamps will not necessarily match the real interval between the last move and the current one; a delta derived from the times will be off.

Put another way, DeltaTime at the start and end of the Tick is exactly the same. The server time is not.

This is really complicated stuff and it is easy to get caught out on this.

I can’t remember where exactly, but this is dealt with somewhere in the course, I think further on than the lecture you’ve posted about.

Yes, I agree with what you said and appreciate the long answer, but I’m not sure where this is going though, or how that relates to my question.

Hmmm, I see… maybe you misinterpreted the title of the thread? I’m not asking about DeltaTime vs Time, but about LastMove.DeltaTime vs some other delta time: the fact that LastMove.DeltaTime is a value coming from the controller/owner and is meaningless to the simulator, especially with regard to its own tick rate.

Well, the DeltaTime tells the simulator precisely how long the wait was between moves. The use of times gives an approximation as the time is constantly changing. The actual time of the move doesn’t matter so much, you really need the delta so you can determine rate of change for the simulator.

So, you’ve not had a move for, say, 1 second. This could happen. The simulator could use the delta to determine the rate of change and predict what is going to happen, with an increasing rate of error. The more accurate the delta, the better the prediction.

You do need this for every move, as later a series of moves is created and if there is lag, it can catch up. It’s very rare these days to see lag so bad that you see players jump but 20+ years ago it was common in online quake 2 tournaments where modems were used.

To be clear, when I say “simulator”, I mean “simulated proxy” (I’m just too lazy to type two words when one can do and I thought it would be clear enough). So there is no “receiving all the moves” nor “catching up when they arrive”. That’s only true for the server.

Well… the proxy will catch up too, once it receives the new server state. But for that, it doesn’t need the move. The proxies only care about the move when they need to extrapolate what the next state will be while waiting for the update from the server. And to do that extrapolation, they will use and re-use the last move over and over.
And the issue I’m trying to highlight is that this extrapolation/simulation happens at every tick of the simulated proxy, but the simulation uses the DeltaTime of the last move. So if the last move had a DeltaTime of 1/60 (because the pawn’s owner runs at 60 FPS) and the proxy runs at 120 FPS, then the proxy will apply a 1/60 DeltaTime every 1/120th of a second, making the simulation run twice as fast as it should.

And what you said does raise another issue: we do not simulate the pawn on the server at all; we just apply the moves as they come. And if they are delayed, the pawn will not be updated. And since the course assumes this is not a dedicated server, it means the pawn will be rendered at the same location for a while, which will appear jerky to the local player.
So we ought to have two states on the server: one “canonical state” which is replicated and when applying moves, and a “rendering state”, which would use simulation/extrapolation for a smooth experience by the local player.
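One rough way that split could look, sketched in plain C++ rather than Unreal types (all names hypothetical, 1D position for brevity): the canonical state is only touched by applying moves, while the rendering state eases toward it every tick so the listen-server player sees smooth motion even when moves arrive late.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical split: the canonical position is changed only by confirmed
// moves; the rendering position chases it and is what the host draws.
struct DualState
{
    double Canonical = 0.0; // replicated, updated only by applying moves
    double Rendering = 0.0; // local-only, used for drawing on the host
};

// Move the rendering state a fraction of the way toward the canonical one.
// Alpha is clamped so a long frame cannot overshoot the target.
void TickRendering(DualState& State, double DeltaTime, double BlendPerSecond)
{
    double Alpha = DeltaTime * BlendPerSecond;
    if (Alpha > 1.0) Alpha = 1.0;
    State.Rendering += (State.Canonical - State.Rendering) * Alpha;
}
```

This is just exponential smoothing, the simplest possible choice; real code might extrapolate with the last move instead of merely blending, as discussed above.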

Simulate, simulator, simulated proxy…I understood.

There’s a reason why AAA companies have a team for just multiplayer aspects of games. They can be really complex.

What you say about the canonical state is interesting. The server pretty much is the canonical state. The simulation does happen on the host as well; the only difference is that the client and server parts of the host run together (they are not the same, mind you).

So, each client runs smoothly and sends messages to the server. The client gets updates about other clients and, when there is a time lag, simulates the moves until it has caught up. So the player a given client is controlling will always be in the right place on that client; the others, not so much.

This applies to the host too. The host client will be fine but the other players will require synchronisation and simulation.

And…now my head is spinning :stuck_out_tongue:

Ultimately, the course takes a technique and runs with it. It does work. Is it perfect: no. Is it the only way to do things: nope. It does show how you might do these things and with that, you may come up with an amazing way of doing things which you do in your own game.

The beauty of programming is there is more than one way to do these things. This is one such way and it does work. It probably wouldn’t work for different game styles (FPS, Fighting etc) but maybe it will help.

Head about to explode here :smiley: :smiley:

As I understand typical multiplayer games (with non-dedicated servers), yes, the host has both a server and a client running at the same time, with the client and the server being mostly independent entities. In other words, it’s like a client running an integrated dedicated server. This keeps the server state separate from the rendering and allows the rendering to behave like a regular remote client (i.e. all that autonomous and simulated proxy stuff).
That’s not the case here. The server is the client for the local player. There is only one state for each car, the “authority state”, and the rendering uses that “authority state” directly. There is no interpolation/extrapolation like remote clients do. So if an autonomous proxy does not send new moves, the server will keep the car in one location, while simulated proxies would keep updating the location, hoping the next move won’t be too different from the last.

I wasn’t looking for perfect, but I was expecting something more usable as-is, like Unreal’s UCharacterMovementComponent is. Something I could use as a baseline or as a fallback. Something more… fleshed out.
As-is, it just feels… too incomplete, just the beginning of an implementation rather than a foundation.
There is nothing wrong with that, mind, it just wasn’t my expectation, so I can’t help but feel a bit disappointed.