Welcome to the course @microbe14!
These are both great questions.
I’m not 100% sure about the first one, but I don’t think there are many concrete benefits to using infix notation over Polish notation when it comes to programming, since your compiler will eventually break everything down to machine code anyway.
I’ve not really used Polish notation much (I pretty much forgot it existed until you mentioned it), and that’s probably the biggest reason I can think of for why most “human readable” programming languages use infix notation: if you want people to hit the ground running, it makes sense to use the same algebra rules they’re already familiar with when you’re designing your programming language.
As for your second question, basic multiplication really is just repeated addition.
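To make that concrete, here’s a toy sketch (my own example, not from any particular library) of multiplying two non-negative integers using nothing but addition:

```python
def multiply(a, b):
    """Multiply non-negative integers a and b using only repeated addition."""
    total = 0
    for _ in range(b):  # add a to the running total, b times
        total += a
    return total

print(multiply(4, 3))  # 4 + 4 + 4 = 12
```

Real hardware obviously doesn’t loop like this (it has dedicated multiplier circuits), but conceptually it’s the same idea.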
When multiplying fractions, you can think of them in one of two ways: a multiplication of decimals (e.g. 0.25 * 0.5) or a multiplication of fractions (e.g. (1/4) * (1/2)).
In cases like these, the fractional representation will usually be slower than the decimal one, since it involves an extra division step to produce the floating-point value the machine actually works with (and beware: in languages with integer division, 1/4 may simply evaluate to 0). So it’s generally worth avoiding fractions where possible and using the decimal representation instead.
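Here’s a quick sketch of the two representations side by side in Python (the `fractions` module is in the standard library; exact rational arithmetic like this carries a numerator/denominator pair and extra bookkeeping per operation, which is the cost I mean):

```python
from fractions import Fraction

# Decimal (floating-point) representation: a single hardware multiply.
print(0.25 * 0.5)  # 0.125

# Fractional representation: exact, but each operation juggles a
# numerator and denominator (multiply both, then reduce the result).
print(Fraction(1, 4) * Fraction(1, 2))  # 1/8
```

Both give the same mathematical answer here; the difference is how much work each representation does per operation.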
When it comes to multiplying matrices, the process (whilst complicated) can still be broken down to use only simple addition.
If you think about how to resolve each element in a matrix multiplication, it’s really just repeated applications of the vector dot product. The dot product in turn is just a bunch of multiplication and addition, and that multiplication can be broken down again to just pure addition.
So when you get right down to the basic operation, the whole thing is just an incredibly laborious sum of addition.
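The breakdown above can be sketched directly (this is just an illustrative toy, not how any real linear algebra library does it): matrix multiply built on the dot product, built on integer multiplication, built on repeated addition.

```python
def add_multiply(a, b):
    # integer multiplication as repeated addition
    total = 0
    for _ in range(b):
        total += a
    return total

def dot(u, v):
    # dot product: elementwise multiply, then sum
    total = 0
    for x, y in zip(u, v):
        total += add_multiply(x, y)
    return total

def matmul(A, B):
    # each element of the result is the dot product of a row of A
    # with a column of B
    cols_B = list(zip(*B))  # transpose B to iterate its columns
    return [[dot(row, col) for col in cols_B] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Every number in the result really is just a laborious pile of additions, exactly as described.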
I hope that helps answer your questions.
P.S. Scribblenauts was a great game!