In this lesson one of the challenges was to do the if statement, which I did. The only big difference was that when declaring my variable I went `int Fall = 3;` instead of `float Fall = 5f;`. I understand that putting the f behind the number is usually for decimals and that it's just good practice for whole numbers, but why, in this example, would you use float instead of int?
Hi Zhaine,
The f behind the number tells the compiler that the number is of type float. Otherwise, a whole number would be interpreted as an int, and 5.0 would be interpreted as a double, which is yet another type in C#.
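To illustrate, here is a minimal sketch of how the compiler reads each literal (the variable names are just made up for this example):

```csharp
int wholeNumber = 5;   // no suffix, no decimal point -> int
double bigger = 5.0;   // decimal point, no suffix    -> double
float fall = 5f;       // the f suffix                -> float

// float wrong = 5.0;  // compile error: a double cannot be
//                     // implicitly converted to a float
```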
To avoid conversions between types, we usually use the "target type". For example, the `transform.position` in Unity is of type Vector3, which is a struct consisting of three floats.
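Here is a short sketch of what that looks like in practice (the class name is just an assumption for this example, not from the lesson):

```csharp
using UnityEngine;

public class TargetTypeDemo : MonoBehaviour
{
    void Start()
    {
        // Vector3's constructor takes three floats, so float literals
        // match the target type with no conversion at all:
        transform.position = new Vector3(0f, 5f, 0f);
    }
}
```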
Did this clear it up for you?
No, I still don’t understand. I just had it in my head that if you were using whole numbers you would use int, and for decimals you would use float. In this lesson it had me use `float Fall = 5f;`. The only thing I can really think of is that since it has to do with time, if I wanted more precision, such as 1.5 seconds, I’d use float, but otherwise, if it’s going to be a whole number, I’d just use int. Is this a correct way to think of it?
Edit: Also, if that is the correct way to look at it, is there any good reason to use int if float works for both whole numbers and decimals?
Yes, I would say that’s a valid point of view. The time is usually a float, meaning a value with decimals. If you used an int, you could only use full seconds (or minutes, or whatever). We use the float type because the time is of type float as well. We could use an int, but C# would implicitly convert the value to a float whenever we use our int in combination with a float.
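A minimal sketch of that conversion, assuming a hypothetical falling object (the class and field names are illustrative, not from the lesson):

```csharp
using UnityEngine;

public class FallDemo : MonoBehaviour
{
    int fallInt = 3;      // compiles, but...
    float fallFloat = 3f; // ...this matches the target type directly.

    void Update()
    {
        // Time.deltaTime is a float, so fallInt is implicitly converted
        // to a float here on every frame before the multiplication:
        transform.Translate(Vector3.down * fallInt * Time.deltaTime);

        // With fallFloat the multiplication needs no conversion:
        // transform.Translate(Vector3.down * fallFloat * Time.deltaTime);
    }
}
```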
The idea is the following: Imagine you write a letter in English to your English friend in England. Instead of Latin characters, though, you use Greek characters, just because. Your friend, who does not know Greek and expects Latin characters for English words, would have to convert the characters first to be able to understand your letter. It’s a waste of time because you could have used Latin characters from the start.
In programming, “good practice” usually means using the target type unless there is a good reason to use something else. If a float is the target type, you ideally use the float type for your variable. The computer does not care about our human logic, and the int type is not the same as our mathematical integer, even though it looks as if it were. In the end, integers and floats are just 0s and 1s to the computer. The type of a variable tells it how to interpret those 0s and 1s in memory.
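You can actually see this in plain C#. Here is a small sketch showing that the same human value “one” has completely different bit patterns as an int and as a float (the byte order shown is little-endian, which is what virtually all common machines use):

```csharp
using System;

class BitsDemo
{
    static void Main()
    {
        int i = 1;
        float f = 1f;

        // Both mean "one" to us, but the raw bytes in memory differ:
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(i)));
        // 01-00-00-00  -> plain binary integer
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(f)));
        // 00-00-80-3F  -> IEEE 754 floating-point encoding
    }
}
```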
Thank you very much for explaining that to me. I understand it a bit better now :).