How does multiplying the rotation help us find the end of the line?

Hello Forum,

I’ve just finished this lecture and while I understand most of what was discussed, I am struggling to understand some of the math.

In this line:

FVector LineTraceEnd = PlayerViewPointLocation + (PlayerViewPointRotation.Vector() * Reach);

It appears that we are converting the rotation of the player to an X, Y, Z value and then multiplying it by the value of “Reach” (100 cm). This is what I don’t understand: how does multiplying the rotation of the player by 100 help us determine the end of our debug line?

If someone could please break this down for me I would really appreciate it as it has me scratching my head.

Cheers.

PlayerViewPointRotation.Vector() should be a unit vector, i.e. it has a length of 1 and points in the direction of the trace. Multiplying it by Reach gives it the desired length.

PlayerViewPointRotation.Vector() isn’t really a rotation, but a vector pointing in the “forward” direction taking into account the player’s rotation.

FVector LineTraceEnd = PlayerViewPointLocation + (PlayerViewPointRotation.Vector() * Reach);
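To put that line in context, here is a rough sketch of how it might sit inside the Grabber component from the lecture (an illustration only; Reach is assumed to be a float member set to 100.f, and DrawDebugLine needs "DrawDebugHelpers.h"):

```cpp
// Rough sketch, assuming the usual Grabber setup from the lecture.
// Reach is an assumed member, e.g. float Reach = 100.f; // 100 cm = 1 m

FVector PlayerViewPointLocation;
FRotator PlayerViewPointRotation;

// Ask the engine where the player's viewpoint is and which way it faces.
GetWorld()->GetFirstPlayerController()->GetPlayerViewPoint(
	PlayerViewPointLocation,   // out: position in world space (cm)
	PlayerViewPointRotation    // out: orientation (pitch, yaw, roll)
);

// .Vector() turns the rotation into a unit-length "forward" direction.
// Scaling that direction by Reach makes it Reach cm long, and adding it
// to the viewpoint location gives the point Reach cm in front of the player.
FVector LineTraceEnd =
	PlayerViewPointLocation + PlayerViewPointRotation.Vector() * Reach;

// Visualise the result as a red debug line.
DrawDebugLine(GetWorld(), PlayerViewPointLocation, LineTraceEnd,
	FColor::Red, false, 0.f, 0, 2.f);
```

The key point is that the vector returned by .Vector() has length 1, so multiplying it by Reach gives an offset exactly Reach centimetres long in the direction the player is looking.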

PlayerViewPointLocation and PlayerViewPointRotation.Vector() are both (X, Y, Z) coordinates, so how can they be added together?
Is it simple component-wise addition, or something else?

Look up “Vector maths”

Thanks :slight_smile:

Todd gave a good explanation; I thought I’d elaborate for anyone who is still confused. Think of it this way: we originally had a vector pointing straight up above the player’s head. Next, we decided we wanted it to point in the direction the player was looking instead of directly above them, but we couldn’t simply change the vector from pointing in the Z (vertical) direction to the X or Y direction. Picture what would happen if the player turned their head and no longer looked directly down either of those axes: the vector would still be pointing in the X or Y direction, regardless of where the player looked. The beam has to rotate with the player!

To solve this, we attach the vector to the player’s rotation, i.e. when the player looks left, the vector follows. But we have to convert from the rotation’s own representation (FRotator stores pitch, yaw, and roll angles, which is close to spherical coordinates) to ordinary Cartesian coordinates (X, Y, Z), since we need to add it to the player’s position, which is in Cartesian coordinates as well, to make the beam/vector follow them. This is what .Vector() does: it converts the rotation into a direction vector expressed in plain X, Y, Z coordinates. So, at every instant, the game says “here’s where the player is looking (how they’ve rotated)”, and we point the beam in that same direction, all based on how they’ve rotated themselves in space!

Converting between Cartesian and spherical coordinates is very simple mathematically. It’s not necessary to understand how it’s done to understand what we’re doing here, but if you know a little bit of algebra it’d be an easy thing to figure out. There are three formulas (one per axis) and that’s all you need.
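For the curious, here is a small, self-contained C++ sketch of that conversion (this is just the plain maths, not Unreal’s actual implementation; roll doesn’t affect the forward direction, so only pitch and yaw matter):

```cpp
#include <cmath>
#include <cstdio>

// Minimal vector type just for this illustration.
struct Vec3 { double X, Y, Z; };

// Convert a pitch/yaw orientation (in degrees) into a unit "forward" vector,
// which is roughly what FRotator::Vector() gives you.
Vec3 ForwardFromPitchYaw(double PitchDeg, double YawDeg)
{
	const double DegToRad = 3.14159265358979323846 / 180.0;
	const double Pitch = PitchDeg * DegToRad;
	const double Yaw   = YawDeg * DegToRad;

	// The three formulas: one for each Cartesian axis.
	return { std::cos(Pitch) * std::cos(Yaw),   // X
	         std::cos(Pitch) * std::sin(Yaw),   // Y
	         std::sin(Pitch) };                 // Z
}

int main()
{
	// Looking 30 degrees up and 45 degrees around from the X axis.
	const Vec3 Forward = ForwardFromPitchYaw(30.0, 45.0);
	std::printf("Forward = (%f, %f, %f)\n", Forward.X, Forward.Y, Forward.Z);
	// The result always has length 1, whatever angles you feed in.
}
```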

Yes, and thank you for the answer, but my question was a different one.
We convert spherical coordinates to Cartesian coordinates with this line:
PlayerViewPointRotation.Vector()
and get something like
PlayerViewPointRotation(X, Y, Z) in Cartesian coordinates.
So in the code we have two sets of Cartesian coordinates:
PlayerViewPointLocation(X, Y, Z) + PlayerViewPointRotation(X, Y, Z)
How do we add two sets of Cartesian coordinates together?

One vector keeps track of the player’s position; the other is the offset that reaches from the player out to the end of the “grabber.” To clarify, these are vectors, which are described by their coordinates. Vectors have a size and a direction, and they add component-wise: you add them by taking the sum of their x-components, y-components, and z-components, resulting in a new vector, which will also be described in Cartesian coordinates.

For example, take two vectors a = <1,2,3> and b = <1, 1, 1>, where each index corresponds to x, y, and z respectively (for a, the x-component is 1, the y-component is 2, the z-component is 3). Summing them results in a + b = <2, 3, 4>, where the x-component (how far along it is in the x-direction) is 2, the y-component is 3, and the z-component is 4.
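In code, that component-wise addition looks like this (a plain, standalone C++ sketch of the arithmetic; in Unreal, FVector’s + operator does the same thing for you):

```cpp
#include <cstdio>

// Minimal vector type just to show the arithmetic.
struct Vec3 { float X, Y, Z; };

// Adding two vectors means summing them component by component.
Vec3 Add(const Vec3& A, const Vec3& B)
{
	return { A.X + B.X, A.Y + B.Y, A.Z + B.Z };
}

int main()
{
	const Vec3 A{ 1.f, 2.f, 3.f };   // a = <1, 2, 3>
	const Vec3 B{ 1.f, 1.f, 1.f };   // b = <1, 1, 1>
	const Vec3 Sum = Add(A, B);

	std::printf("a + b = <%g, %g, %g>\n", Sum.X, Sum.Y, Sum.Z); // <2, 3, 4>
}
```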

Thank you for the explanation :slight_smile:

Just adding one more thing. There are vectors and there are scalars. Scalars have only a magnitude; vectors have a direction and a magnitude. Our viewpoint rotation is ONLY a direction. When the viewpoint rotation is converted to a vector, it is given a magnitude of 1 (1 cm in Unreal units). We can then multiply that vector by a scalar (Reach) to extend it, since a scalar only changes the magnitude and does not affect the direction. The result is the same as if we had started with a vector in the same direction but with a greater magnitude; it’s just much simpler the first way.
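A quick standalone C++ sketch of that idea: multiplying a unit vector by a scalar (here called Reach, as in the lecture) stretches its length without changing its direction.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float X, Y, Z; };

// Multiplying a vector by a scalar scales each component,
// so the magnitude changes but the direction stays the same.
Vec3 Scale(const Vec3& V, float S) { return { V.X * S, V.Y * S, V.Z * S }; }

float Length(const Vec3& V) { return std::sqrt(V.X * V.X + V.Y * V.Y + V.Z * V.Z); }

int main()
{
	// A unit-length "forward" direction halfway between the X and Y axes.
	const float InvSqrt2 = 1.f / std::sqrt(2.f);
	const Vec3 Forward{ InvSqrt2, InvSqrt2, 0.f };

	const float Reach = 100.f;                 // 100 cm, as in the lecture
	const Vec3 Offset = Scale(Forward, Reach); // same direction, 100x longer

	std::printf("|Forward| = %g, |Offset| = %g\n",
	            Length(Forward), Length(Offset)); // prints 1 and 100
}
```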
