Shifting a number one position to the left is equivalent to multiplying it by the base you’re working in (by 10 in base 10, by 2 in binary, by 16 in hexadecimal, and so on).

Starting with a value of 1, the number of times you shift to the left is the same as an exponent of your base (e.g. `1 << 1 == 10`, whether in binary or in base 10).
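To see the shift-as-exponent pattern concretely, here is a small sketch (the class and variable names are my own, not from the original) comparing repeated left shifts of 1 against `Math.Pow`:

```csharp
using System;

class ShiftDemo
{
    static void Main()
    {
        // In binary, shifting 1 left n times yields 2^n.
        for (int n = 0; n < 8; n++)
        {
            int shifted = 1 << n;
            int power = (int)Math.Pow(2, n);
            Console.WriteLine($"1 << {n} = {shifted} (2^{n} = {power})");
        }
    }
}
```

The same idea holds in base 10: writing another zero after a decimal number "shifts" its digits left and multiplies it by ten.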

Since we’re multiplying twos, binary operations would be ideal.

So we simply take 1 and binary-shift it to the left 24 times, making it the same as 2^24.

Assuming we’re coding in C# and using an `int` (32 bits should be quite enough to avoid overflowing in this case):

`var result = 1 << 24;`

That way, instead of performing 23 multiplications to calculate the exponent, we simply write down the binary expression of 1 followed by 24 zeroes (`1 0000 0000 0000 0000 0000 0000`), which is how the `int` is stored anyway.

When converted back to base 10, we get the answer 16 777 216 (which some will recognize as just 1 more than 16 777 215, the largest unsigned integer that can be represented in 24 bits).
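As a quick sanity check (a sketch of my own, not part of the original), the shifted value, the closed-form power, and the 24-bit maximum can all be compared directly:

```csharp
using System;

class ShiftResult
{
    static void Main()
    {
        var result = 1 << 24;
        Console.WriteLine(result);               // 16777216

        // The largest unsigned value representable in 24 bits is 2^24 - 1.
        var max24Bit = (1 << 24) - 1;
        Console.WriteLine(max24Bit);             // 16777215
        Console.WriteLine(result == max24Bit + 1); // True
    }
}
```

The subtraction trick in `max24Bit` is itself a common bit-twiddling idiom: `(1 << n) - 1` produces a mask of `n` one-bits.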