Bulls and Cows - Is there much benefit to using int16 or int8 over int32?

I’ve noticed in VS IntelliSense that there are smaller int sizes (8 and 16 bits) that I could use.
Is there enough of a performance benefit in replacing existing int32 declarations with int8 or int16 where appropriate, or should I leave them as 32-bit?

I would just use int32 since it's for Unreal, and it should be mentioned in the lectures why you should be using int32.

That isn’t an answer. Also, int8, int16, int32, and int64 are all Unreal types.

OK, thanks. I thought it was just int32 and int64.

In short, no :slight_smile:

Such optimisations are generally left for the end of a project and are only performed when needed. Until then, readability and usability remain key, so the advice is definitely to use int32 by default unless you really need bigger numbers or are trying something more specific.
Although the theoretical improvement you’re thinking of does exist, it is key to understand that modern devices are stupidly fast. And I mean STUPIDLY fast. That should be your base assumption. From there, you can weigh the potential benefit of changing the couple of integers in the BullCow project: you’d save a matter of bytes of memory on a device that can handle billions per second. In practice, the performance will remain exactly the same.
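To put the savings in perspective, here’s a minimal standalone sketch (plain C++ rather than the course code; Unreal’s int8/int16/int32 are equivalent fixed-width typedefs):

```cpp
// Prints the sizes of the fixed-width integer types to show how small
// the potential saving is per variable.
#include <cstdint>
#include <iostream>

int main()
{
    std::cout << "int8_t:  " << sizeof(std::int8_t)  << " byte\n";   // 1
    std::cout << "int16_t: " << sizeof(std::int16_t) << " bytes\n";  // 2
    std::cout << "int32_t: " << sizeof(std::int32_t) << " bytes\n";  // 4

    // Even a few hundred values stored as int32 instead of int8 cost
    // well under a kilobyte -- negligible next to gigabytes of RAM.
    return 0;
}
```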

In fact, I’d like to introduce to you the first rule of optimisation: “Don’t” :smiley:
For a little more reading on that: https://wiki.c2.com/?RulesOfOptimization


Thank you for confirming what I thought!

