Double, double, type-ing trouble!

In lecture 20, at the 4:45 mark, we are challenged to write a line for updating the computer’s guess. I used this:
guess = (0.5 * (max - min)) + min;

The instructor used
guess = (max + min) /2;

The compiler likes his answer just fine, but tells me I’m trying to use type double where int was defined. In both cases, the computer is dividing by 2, so at some point the int variable is going to be asked to store a fraction. So why is the computer barking at me but not the instructor?


Hi @Cardynal,

In this:

guess = (max + min) / 2;

all the values are ints, so the division is integer division: the result is truncated to a whole number (the fractional part is dropped, not rounded).

In this:

guess = (0.5 * (max - min)) + min;

you are multiplying by 0.5, which is a double, so the whole expression becomes a double and the compiler won't assign it to an int without a cast. If you instead divide (max - min) by 2 it will work; alternatively you could cast the result to an int, or, somewhat more messily, change guess to a float and then round it!


Thanks Rob!

This reveals a characteristic of C# of which I was unaware. I thought that if you were using integers, you had to make sure they only ever dealt with whole numbers. I had no idea C# would truncate a fraction in order to preserve the integer. I'll add this to my bag of tricks!

I took a short C++ class (no pun intended) in college 14 years ago. I see the scripts used here are C#. I wonder how different they are? Or, more specifically, how much grief the differences are likely to cause me when writing Unity scripts?

You'll be absolutely fine. If anything throws you, just pop a question up and someone in the community will help you :)
