Isn't it inefficient to check isInRange before a target is identified?

Normally I wouldn’t be this pedantic, but I’m very conscious of performance optimization within Update(), and Update() alone.

In this lecture we check isInRange in Update() before the target is even confirmed to be non-null. Wouldn’t it be more efficient to first check whether a target exists before performing the distance calculation?

My code is:

if (target != null)
{
    bool isInRange = Vector3.Distance(transform.position, target.position) < weaponRange;
    if (isInRange)
    {
        // attack logic
    }
}

I did some more research on this concept within the Unity community and found a kind of “blasé” approach to Update(), as if no one really worries about optimizing it. The reason usually given is that “you’re bound to run into other optimization problems before Update()”. I thought this a good opportunity to ask two additional questions, if anyone has the time -

  1. Is the reason we check isInRange before even checking that the target is not null so that we can keep the FPS consistent? I know this is a trivial example that would make no difference to FPS, but is the lecture’s ordering advisable over mine purely so that we don’t run into any “FPS gotchas”? For instance, if isInRange were very, very expensive (which of course it’s not here; I mean hypothetically), it might be very difficult to pinpoint why the FPS drops when we attack a target but not otherwise. The lecture’s way would make this easy to identify, since the FPS would be more consistent everywhere; my way would make it hard to track. Is that the reason it’s done the way it is in the lecture, to ensure a consistent FPS at all times even at the cost of unnecessary work in Update(), or am I overthinking things?

  2. Why does the Unity community seem to have this general “blasé” approach to Update() calls? Coming from Phaser, I found Update() calls to be the limiting factor in FPS, and I’m also thinking of creating 2D games in Unity, where there might be many thousands of cheap texture sprites, each with comprehensive and costly Update() AI logic. Being blasé about Update() in Phaser certainly didn’t help me; it became the sole and main focus of optimization, especially for “cheap to render” game objects. Should Update() optimization in Unity be taken seriously, or is it true that other limiting factors will always appear first, unlike in frameworks such as Phaser? Is the answer still the same for a 2D project with cheap sprites where each sprite makes complicated Update() calls?

I realize this is a lot to ask, but I’m thinking the answers might assist many future students wondering similar things, in addition to myself. Thank you very much for your time if you’re able to answer, as this would clear up a lot in my mind about optimization intuition coming from Phaser.

@sampattuzzi Any input on this and the optimisation of Update()?

I think you will find there are more gains to be had in determining when things do NOT need to be updated every frame at all (perhaps by having some classes that don’t derive from MonoBehaviour) than in switching the order of operations and calculations in this specific case. The things that genuinely do have to run in Update() are then more worth worrying about performance-wise.
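As a rough illustration of that idea, here is a minimal sketch (class and member names are hypothetical, not from the lecture): AI logic lives in a plain C# class with no per-object Update(), and a single manager MonoBehaviour ticks all of them at a throttled rate.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Plain C# class: no MonoBehaviour overhead, no per-object Update() call.
public class EnemyBrain
{
    public void Tick(float deltaTime)
    {
        // AI decision logic goes here.
    }
}

// One MonoBehaviour drives every brain, and can throttle how often they think.
public class EnemyBrainManager : MonoBehaviour
{
    private readonly List<EnemyBrain> brains = new List<EnemyBrain>();
    private float thinkTimer;
    private const float ThinkInterval = 0.2f; // think 5 times/second, not every frame

    public void Register(EnemyBrain brain) => brains.Add(brain);

    private void Update()
    {
        thinkTimer += Time.deltaTime;
        if (thinkTimer < ThinkInterval) return;

        foreach (var brain in brains)
        {
            brain.Tick(thinkTimer); // pass the accumulated time as the delta
        }
        thinkTimer = 0f;
    }
}
```

With thousands of cheap 2D sprites this pattern also keeps the costly AI off the hot per-frame path entirely, which addresses the Phaser-style concern above.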

That’s not to say you’re wrong here, for another reason though - the approach you’re using is more correct, because as it stands (in the lecture’s code at this point), if line 13 doesn’t throw an error while calculating whether it is in range, then the test for target != null on the subsequent line 14 is pointless and always true.

Prior to that, you’ll see Sam getting a null reference exception at the bottom of the screen (before a target was set), and exceptions and exception handling are considered costly for performance.

This has muddied the expressiveness of the code, and future-you could be left wondering whether something else is missing or wrong to have caused it to be this way. It’s possible, even probable, that Sam had grander ideas for the code in the next lecture or two (and/or caught the issues anyway), in which case it may become clearer, but for now, at face value, it’s definitely questionable and the greater of the sins committed.

However, as said, from a performance-profiling standpoint you’ll find some operations performed in Update() are many orders of magnitude more costly than a vector subtraction and a float compare: instantiating new objects, looping operations, or unnecessarily repeated GetComponent calls for a component that should have been cached and referenced once.
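A quick sketch of the caching point (the `Mover` class and its use of Rigidbody are hypothetical examples, not course code): fetch the component once in Awake() and reuse the reference every frame, rather than looking it up inside Update().

```csharp
using UnityEngine;

public class Mover : MonoBehaviour
{
    private Rigidbody cachedBody; // fetched once, reused every frame

    private void Awake()
    {
        // The expensive lookup happens exactly once, here...
        cachedBody = GetComponent<Rigidbody>();
    }

    private void FixedUpdate()
    {
        // ...instead of calling GetComponent<Rigidbody>() every frame,
        // which costs far more than a vector subtraction and float compare.
        cachedBody.AddForce(Vector3.forward);
    }
}
```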

So before doing optimizations, my advice is to learn to use the Profiler and see where the costs are actually having an impact. Spending your optimization time fixing something that may not even make a 0.001ms difference in a frame is not a good use of that time at all.

You could, for example, do many vector subtractions or distance calculations in Update() and never feel a difference, unless perhaps it was on a massive scale, like an RTS with thousands of units (and if you had to do that, you’d probably be looking at a DOTS model for optimizing anyway).


I will add that I probably just got the order of the checks wrong and didn’t consider the performance cost. I will say that a null check in Unity tends to be overloaded and isn’t actually as cheap as it should be. This is to allow objects that have been destroyed in the scene to compare equal to null.
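To make that overload concrete, here is a small sketch (the class is hypothetical, for illustration only): UnityEngine.Object overloads ==, so `target != null` is not a plain reference comparison.

```csharp
using UnityEngine;

public class NullCheckDemo : MonoBehaviour
{
    private Transform target;

    private void Update()
    {
        // UnityEngine.Object overloads ==, so this is not a simple reference
        // compare: it also reports "null" for objects whose native side has
        // been destroyed, which makes it slightly more expensive than a
        // plain C# null check.
        if (target != null)
        {
            float distance = Vector3.Distance(transform.position, target.position);
        }

        // A raw reference check sidesteps the overload, but will NOT treat
        // destroyed-but-still-referenced objects as null:
        // if (!ReferenceEquals(target, null)) { ... }
    }
}
```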

That said, I would say the general blasé attitude may well be due to a focus on more graphically intensive games. When considering performance, I would definitely do a first-pass consideration before even profiling: how expensive is this operation? How many objects in the scene will be calling this Update()? That sort of thinking can help you not to over-optimise.

