Warning C4018: signed/unsigned mismatch fix

If anyone is getting this warning, it's because of a signed/unsigned mismatch. For example, if you write the following, the compiler spits out the warning:

for (int32 MyGuessChar = 0; MyGuessChar < TheGuess.length(); MyGuessChar++)

To fix it, you need to cast it (I'm still learning what that means), so do this:

for (int32 MyGuessChar = 0; MyGuessChar < (int32) TheGuess.length(); MyGuessChar++)

If anyone has more information on casting, please let me know.
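In case it helps, here is the whole thing as a self-contained sketch (TheGuess is just a stand-in string here, not the actual course code). I also read that static_cast<int32>(...) is the more idiomatic C++ way to write the same cast:

#include <iostream>
#include <string>

using int32 = int; // alias like the one the course sets up, from what I understand

int main()
{
    std::string TheGuess = "planet";

    // C-style cast, as above
    for (int32 MyGuessChar = 0; MyGuessChar < (int32) TheGuess.length(); MyGuessChar++)
    {
        std::cout << TheGuess[MyGuessChar] << "\n";
    }

    // Same loop, written with static_cast instead
    for (int32 MyGuessChar = 0; MyGuessChar < static_cast<int32>(TheGuess.length()); MyGuessChar++)
    {
        std::cout << TheGuess[MyGuessChar] << "\n";
    }

    return 0;
}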


When you cast something, you are converting an entity of one data type (an expression, function argument, or return value) into another type. You have to be careful, though, because casting can lead to all sorts of issues.
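For example, a quick sketch (made-up values, just to illustrate):

#include <iostream>

int main()
{
    double Pi = 3.14159;
    int Truncated = (int) Pi;       // the fractional part is silently thrown away
    std::cout << Truncated << "\n"; // prints 3

    int Negative = -1;
    unsigned int Wrapped = (unsigned int) Negative; // the value wraps around
    std::cout << Wrapped << "\n";   // prints 4294967295 with a 32-bit unsigned int

    return 0;
}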

And on a side note, I don't believe your code is doing what you want. The .length() method returns the length of the string in bytes. http://www.cplusplus.com/reference/string/string/length/
I am guessing you want to get the length of the TheGuess string and use that as the upper bound for how far MyGuessChar can be incremented. To do that, you are going to want to get the total byte count and divide it by how many bytes a char is. And you don't want to hardcode the char byte size, because character sizes can differ from platform to platform. You want to use the sizeof operator to get the byte size of a char on your platform.
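For instance, a sketch; I am using wchar_t alongside char here because its size genuinely differs between platforms:

#include <iostream>

int main()
{
    // sizeof reports the size in bytes on the current platform
    std::cout << sizeof(char) << "\n";    // 1
    std::cout << sizeof(wchar_t) << "\n"; // commonly 2 on Windows and 4 on Linux

    return 0;
}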

Also, I looked at the Unreal coding standard, and they say to make TCHAR the alias for a char:

using TCHAR = char;

for(int32 MyGuessChar = 0; MyGuessChar < TheGuess.length()/sizeof(TCHAR); MyGuessChar++)

That is probably what you want to do. Note that the result of the division is still unsigned, though, so you may still need the cast on top of it to fully silence the warning.

That’s neat. I still have a lot to learn. I'll practice using sizeof().

So basically,

MyGuessChar < (int32) TheGuess.length()

checks whether MyGuessChar (of type int) is less than TheGuess.length() (converted to an int),

and

MyGuessChar < TheGuess.length()/sizeof(TCHAR)

compares byte counts without explicitly casting to another data type, if I understand this correctly.

Doing some more research: https://stackoverflow.com/questions/905355/c-string-length
If you assume a char is 1 byte, length() will return the correct character count, since each char is one byte. So 5 characters equal 5 bytes, and the opposite is also true.
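I tried a quick check to convince myself (just a sketch):

#include <iostream>
#include <string>

int main()
{
    std::string Word = "hello"; // 5 characters

    // prints 5: with 1-byte chars, the byte count and the character count match
    std::cout << Word.length() << "\n";

    return 0;
}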

Now, the signed/unsigned mismatch is something totally different. An int is signed, meaning it can hold values from -2147483648 to 2147483647, or -(2^31) to +(2^31) - 1. It's -1 on the positive side because zero takes up one of the non-negative values. The most significant bit (the one farthest to the left) is used to tell the system whether the number is positive or negative. Therefore, one of the 32 bits is used up, leaving only 31 bits to store the number's magnitude.
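You can check those limits on your own machine with something like this (a sketch; assumes a 32-bit int):

#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<int>::min() << "\n"; // -2147483648 with a 32-bit int
    std::cout << std::numeric_limits<int>::max() << "\n"; // 2147483647

    return 0;
}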

length() returns an unsigned integer (size_t), which for a 32-bit unsigned int means it can store values from 0 to 4294967295, or 0 to (2^32) - 1. The mismatch warning the compiler displayed was telling you: hey, I need 32 bits to store my result from length(), but you are only giving me 31 bits. When you cast it to int32, you said: I know length() needs 32 bits, but I know the result of length() will never get bigger than 31 bits, so it's okay to use an int32.
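One way to see why the compiler is so picky about this comparison (a sketch, not from the course code):

#include <iostream>
#include <string>

int main()
{
    std::string TheGuess = "hi";
    int Index = -1;

    // The signed -1 is converted to unsigned before the comparison, so this
    // prints false even though -1 < 2 mathematically. This line is also
    // exactly the kind of comparison that triggers warning C4018.
    std::cout << std::boolalpha << (Index < TheGuess.length()) << "\n";

    return 0;
}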

Now, this is all from memory, so while I believe what I said is accurate, I did not fact-check it.


I really appreciate the clarification; this makes a lot more sense now.

“length() returns an unsigned integer (size_t), which for a 32-bit unsigned int means it can store values from 0 to 4294967295, or 0 to (2^32) - 1. The mismatch warning the compiler displayed was telling you: hey, I need 32 bits to store my result from length(), but you are only giving me 31 bits. When you cast it to int32, you said: I know length() needs 32 bits, but I know the result of length() will never get bigger than 31 bits, so it's okay to use an int32.”

Thanks :smiley:
