For this challenge, I thought it would be fun to share a few different ways of calculating 2^24 in Unity and the reasons why you might choose each option.
So, here are four potential options, some good, others not so much:
//1. The wrong way
Debug.Log(2^24);
//2. The long way
Debug.Log(2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2);
//3. The usual way
Debug.Log(UnityEngine.Mathf.Pow(2, 24));
//4. The precise way
Debug.Log(System.Math.Pow(2, 24));
And these produce the following results…
So whatâs going on?
The wrong way:
Itâs quite common to use the caret (^) when writing super-script in plain text but in C# (and other languages) the caret is used as a binary XOR operator. In this example it basically looks at the binary representation of the two numbers and tells you where they differ, so so 00010 ^ 11000 = 11010, or 26. So this is definitely not the way to go.
The long way:
This seems like a pretty ridiculous way of doing it, but it can have its advantages under certain conditions.
For 2^24 both readability and speed suffer, but for smaller exponents it can sometimes be a better option to just multiply the numbers out and avoid the Pow functions.
The usual way:
Ok, this is how we usually do it, but what's with the weird answer? In this case the answer is being converted into scientific notation. Luckily for us there is no loss of precision in this example (even though it looks like there might be), so we could cast this back to an integer and be just fine.
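For example, a quick sketch of the cast (using System.Math.Pow cast to float here, since Mathf.Pow is just the Unity equivalent and this version runs outside the engine):

```csharp
// In Unity this would be Mathf.Pow(2, 24); System.Math.Pow cast to float
// stands in so the sketch runs anywhere.
float f = (float)System.Math.Pow(2, 24);

// 2^24 = 16777216 is exactly representable as a float, so nothing was
// actually lost in the scientific-notation display; the cast recovers it.
int exact = (int)f;
System.Console.WriteLine(exact); // 16777216
```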
The precise way:
Since UnityEngine.Mathf.Pow only accepts floats, we can run into precision problems with larger numbers. When that happens we can always fall back on the trusty System.Math.Pow, which accepts doubles. Since doubles are twice as large as floats, it can give us more precision at the expense of using more memory.
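As it happens, 2^24 is exactly the point where float runs out of integer precision, so here's a small sketch showing where the two types part ways:

```csharp
// A float has a 24-bit significand, so 16777216 (2^24) is the last point
// at which every whole number is still exactly representable.
float f = 16777217f;   // 2^24 + 1 cannot be stored; it rounds to 16777216
double d = 16777217d;  // a double has a 53-bit significand, so this is exact

System.Console.WriteLine(f == 16777216f); // True -- the float was rounded
System.Console.WriteLine(d == 16777217d); // True -- the double was not
```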
The DIY way:
Wait, I said 4 examples. Yeah… well… you read this far, so here's a bonus!
You could also write your own Pow function if you were bored enough.
There are a few ways of doing it, but here's one example. Just note that it's almost always better to use the built-in functions, since they've usually been tested and optimised far better than something you write yourself (at least in my case!)
// Note: this only handles non-negative integer exponents.
double Pow (double a, double b) {
    double result = 1;
    for (int i = 0; i < b; i++) {
        result *= a;
    }
    return result;
}
So there you have it. I'm almost certainly missing some other methods, but hopefully this covers the majority of them. Thanks for reading!
When dealing with math operations, it helps to check a couple of things to cover all the bases (the rules), as there are some edge cases like this one.
So look up things like the 'properties' and 'identity' of multiplication, addition, powers, etc., and you'll see some of these quirks explained.
On the one hand, 0^x is always 0 (for positive x); on the other, x^0 is always 1 (for nonzero x). So technically, 0^0 is undefined.
However, itâs generally accepted that 0^0 = 1 for most practical purposes.
There are many reasons for this and equally many ways to justify that choice. Needless to say, going through all of them would take some time. Just know that pretty much every calculator and maths API will give you the answer 1.
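You can check this in C# itself; a quick sketch using System.Math.Pow (which follows the usual convention):

```csharp
System.Console.WriteLine(System.Math.Pow(0, 0)); // 1 -- the accepted convention
System.Console.WriteLine(System.Math.Pow(0, 2)); // 0 -- 0^x for positive x
System.Console.WriteLine(System.Math.Pow(2, 0)); // 1 -- x^0 for any nonzero x
```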
Itâs also worth mentioning that the code snippet is just a short example for reference. Itâs formatted for readability rather than being bullet proof to all the weird and wonderful things you might choose to throw at it.
Edit: Iâd also say that covering 0^0=1 is probably a little out of scope of the original video, as were negative exponents. If you can make one of the live calls with Ben then it might be a good topic of discussion for anyone interested.
All interesting, but I would say that Ben should mention the quirk (and possibly negatives) so we are aware of it, even if he doesn't go into fine detail as to why…
Though I probably knew this in my school days, I've certainly forgotten things over the years in between…
I see the current video on indices as more of a brief introduction to the topic. There will certainly be additional ones coming that cover all the questions you still have.
You also have to be careful when bit shifting not to overflow or attempt it with the wrong numeric type, especially when trying to emulate raising to a power, because you'll start dealing with large numbers very quickly and overflows become a real risk.
Bit-shifting is another way to go but as @topdog mentioned, you can run into some issues when doing it that way.
For me, the biggest reason not to do it that way is that it's not particularly readable, and it isn't immediately clear that you're trying to calculate powers.
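For completeness, here's a sketch of what the bit-shifting approach looks like, including the overflow trap mentioned above:

```csharp
// Shifting 1 left by n doubles it n times, giving 2^n -- but only for
// integer powers of two, and only while the result fits the type.
System.Console.WriteLine(1 << 24);  // 16777216

// A 32-bit signed int overflows once the shift reaches the sign bit:
System.Console.WriteLine(1 << 31);  // -2147483648 -- overflowed
System.Console.WriteLine(1L << 31); // 2147483648 -- a long is safe here
```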
Thatâs Interesting. I just tested it on a few different calculators and both my Casioâs return an error but all the others return 1 - lucky I covered myself by saying âpretty much everyâŚâ
I guess Casio just didnât want to take a stand on whether to calculate x^0 or 0^x
Either that or thereâs an option in there somewhere and you have to explicitly tell it which one you want.
Iâd say Texas Instruments is on par with Casio in terms of popularity but the battery died on mine when I went to test it. If anyone else has one they can test Iâd be interested in the result.
I just posted the picture to point out that 0^0 is actually undefined. While it might be widely accepted to define 0^0 = 1, one still has to test 0^0 on the respective device or log the value of 0^0 to one's console.
Yep, 0^0 is technically undefined, as I mentioned in that original response. As you say though, it's always worth double-checking things like this on your instrument of choice to see what answer it will actually give you, especially when it can be one of several valid answers.
So x == 0 && y == 0 is definitely a case where I would add a restriction in my shader code, no matter what the (current) output might be.
And this is also a good example of why it is important to explicitly define something that is actually undefined. I was not able to find any information on 0^0 in the Unity API regarding Cg.
Thatâs also very interesting. Iâm not much for HLSL, but Iâm currently trying to learn it (specifically compute shaders).
Testing in Unity, C#, C++, Java, and JavaScript, I get a return value of 1 for each. I'm guessing that they all share a common implementation (similar to my example in the OP) which sets a default value, whereas HLSL doesn't - probably because GPUs don't particularly like loops and branching statements.