The golden rule is that the significand (the number on the left) should be at least 1 and less than 10.
Scientific notation exists mainly to make very big (or very small) numbers easier to read.
As an example, let’s say you had the number 5,231,356,100,000,000,000,000,000,000.
In this form it’s easy to say that it’s a large number, but to say how big would require counting all of the digits.
If we instead convert this to 5.23 x 10^27, we can now say that it’s “about 5 followed by 27 zeros”, or roughly 5 octillion, without doing any counting.
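If you ever want to double-check a conversion like this, most programming languages will do the formatting for you. Here’s a quick Python sketch using the example number above:

```python
# The example number from above, written with digit-group underscores
# for readability (Python allows these in numeric literals).
big_number = 5_231_356_100_000_000_000_000_000_000

# The "e" format spec converts it to scientific notation for us.
print(f"{big_number:.2e}")  # 5.23e+27
```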
In programming you’ll come across this way of representing numbers fairly often. However, the “x 10^” part is replaced with “E” (or a lowercase “e”), short for exponent. So you might see 5.23E27 instead of 5.23 x 10^27.
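To make that concrete, here’s a short Python example; the same “e” notation is accepted in most languages (C, Java, JavaScript, and so on):

```python
# "E" (or "e") notation is accepted directly in numeric literals.
x = 5.23e27        # the same value as 5.23 x 10^27
print(x)           # 5.23e+27

# Parsing a string in E-notation gives the identical value.
y = float("5.23E27")
print(x == y)      # True
```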
I hope that helps clear things up, but if you have any questions then please let me know.