Kinematics: Interpolation with Non-Linear Motor Moves
-
I'm not sure I understand Movement/DDA and segmentation, so bear with me:
- given all axes are 0
G1 X100 Y50
: it interpolates X 0..100 and Y 0..50 linearly, with some number of segments based on motor positions (start -> end).
Now, my kinematics should not interpolate motor positions linearly, for example:
- given all axes are 0: X, Y, Z, A, B
G1 B20
which moves Y, Z and B (tilting nozzle) like https://youtu.be/2x_rOThE_Ns - internally it should actually perform
G1 B1
G1 B2
...
G1 B20
and calculate my kinematics each time, with the non-linear Y, Z as provided by PAXKinematics::CartesianToMotorSteps().
G1 B20
right now is performed like this: it gets the end position of the motors from my kinematics and then interpolates linearly from the existing motor positions of Y, Z and B to reach that end motor position. That is not what I want, as it produces a wrong path, e.g. the nozzle briefly touches the bed, which should not happen (for other movements involving A and B rotation the unwanted path is more severe).
How can I force Movement/Kinematics to actually interpolate the machine position, not the motor position?
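To make the question concrete, here is a minimal standalone sketch (not RRF code; the names and function-pointer parameters are placeholders, with the inverse-kinematics callback standing in for what PAXKinematics::CartesianToMotorSteps() computes) of the behaviour I'm after - interpolate the machine position and run the inverse kinematics once per segment:

```cpp
#include <cstddef>

constexpr size_t NumAxes = 5;   // X, Y, Z, A, B (placeholder)

// What I want: interpolate the MACHINE position and run the (non-linear)
// inverse kinematics once per segment, so Y and Z follow the curved path.
void MoveSegmented(const float startMachinePos[NumAxes], const float endMachinePos[NumAxes],
                   unsigned int numSegments,
                   void (*inverseKinematics)(const float machinePos[NumAxes], float motorPos[NumAxes]),
                   void (*queueMotorMove)(const float motorPos[NumAxes]))
{
    for (unsigned int seg = 1; seg <= numSegments; ++seg)
    {
        const float t = (float)seg / (float)numSegments;
        float machinePos[NumAxes], motorPos[NumAxes];
        for (size_t axis = 0; axis < NumAxes; ++axis)
        {
            // linear interpolation in machine space
            machinePos[axis] = startMachinePos[axis] + t * (endMachinePos[axis] - startMachinePos[axis]);
        }
        inverseKinematics(machinePos, motorPos);   // non-linear Y/Z computed per segment
        queueMotorMove(motorPos);
    }
}
```

With numSegments = 1 this degenerates into exactly what I'm seeing now: a single inverse-kinematics call with the end position, and linear interpolation of the motor positions in between.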
-
@xyzdims I don't think segmentation is motorPos based, because all SCARA kinematics need segmentation and Cartesian coordinates between the segments.
One possibility is that you're using simulation mode, because it sets segmentation to 1. Another one is that your code sets segmentation to 1. There is a method in the Kinematics file which tells which axes are linear, GetLinearAxes(); maybe this is set wrong.
To decide whether it's the problem you expect or an error in the kinematics code, you could run G1 B20 and another run with G1 B0, G1 B1, G1 B2 etc. up to G1 B20, and compare the two paths. They should match. The behaviour of Z and Y is interesting; maybe you can debug Z and Y when B is at B5 and B15, e.g.
-
@xyzdims for all nonlinear kinematics except linear delta, you need to use segmentation. This is just a matter of passing suitable segmentation defaults to the Kinematics base class constructor in your own kinematics class constructor. When the segments are small, linear interpolation within each segment works well enough.
-
@joergs5 It seemed to me as if it's motor position, because my kinematics is only called once, with the end position. GetLinearAxes() does return AxesBitmap::MakeFromRaw(0), treating none of the axes as linear (or am I misunderstanding it?). Otherwise I inherit from ZLeadscrewKinematics, where segmentsPerSecond=100, minSegmentLength=0.2000.
@dc42 I did this:
```
PAXKinematics::PAXKinematics() noexcept
    : ZLeadscrewKinematics(KinematicsType::pax, SegmentationType(true, true, true))   // useSeg, useZSeg, useG0Seg
{
}
```
and as mentioned above, GetSegmentsPerSecond() returns 100 and GetMinSegmentLength() returns 0.2. What am I missing otherwise? PAXKinematics::CartesianToMotorSteps() is called only once with G1 B20 (to keep the example simple).
I went ahead to understand better what the problem could be, and dived into GCodes/GCodes.cpp, function GCodes::DoStraightMove(), where the number of segments is calculated (line #2110):
```
// Apply segmentation if necessary. To speed up simulation on SCARA printers, we don't apply kinematics segmentation when simulating.
// As soon as we set segmentsLeft nonzero, the Move process will assume that the move is ready to take, so this must be the last thing we do.
if (st.useSegmentation && simulationMode != 1 && (moveBuffer.hasPositiveExtrusion || moveBuffer.isCoordinated || st.useG0Segmentation))
{
    debugPrintf("calculate segmentation\n");
    // This kinematics approximates linear motion by means of segmentation
    float moveLengthSquared = fsquare(currentUserPosition[X_AXIS] - initialUserPosition[X_AXIS]) + fsquare(currentUserPosition[Y_AXIS] - initialUserPosition[Y_AXIS]);
    if (st.useZSegmentation)
    {
        moveLengthSquared += fsquare(currentUserPosition[Z_AXIS] - initialUserPosition[Z_AXIS]);
    }
    const float moveLength = fastSqrtf(moveLengthSquared);
    const float moveTime = moveLength/moveBuffer.feedRate;   // this is a best-case time, often the move will take longer
    moveBuffer.totalSegments = (unsigned int)max<long>(1, lrintf(min<float>(moveLength * kin.GetReciprocalMinSegmentLength(), moveTime * kin.GetSegmentsPerSecond())));
}
```
it enters that if block, but the result is
```
calculate segmentation
moveLength = 0.0000, moveTime = 0.0000, kin.GetReciprocalMinSegmentLength = 5.0000, kin.GetSegmentsPerSecond = 100.0000
mb.totalSegments = 1, mb.isCoordinated = 1, useSegmentation = 1, simulationMode = 0
```
(mb = moveBuffer)
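Plugging those numbers into the quoted formula explains the result: for G1 B20 neither X nor Y (nor Z) changes, so moveLength = 0 and moveTime = 0, and totalSegments = max(1, lrintf(min(0 * 5.0, 0 * 100))) = max(1, 0) = 1.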
Now, I did override moveBuffer.totalSegments = 10 (after the quoted if block), and now the motors move exactly as I expected.
At my superficial glance at that particular if block, the axes A & B are disregarded there, so I added in haste the following lines (indicated by ** in front):
```
if (st.useZSegmentation)
{
    moveLengthSquared += fsquare(currentUserPosition[Z_AXIS] - initialUserPosition[Z_AXIS]);
}
**moveLengthSquared += fsquare(currentUserPosition[3] - initialUserPosition[3]);
**moveLengthSquared += fsquare(currentUserPosition[4] - initialUserPosition[4]);
```
A[axis=3] and B[axis=4] are in degrees, so it's not really "length", but anyway, it produces this debug output:
```
calculate segmentation
moveLength = 45.0000, moveTime = 0.1350, kin.GetReciprocalMinSegmentLength = 5.0000, kin.GetSegmentsPerSecond = 100.0000
mb.totalSegments = 13, mb.isCoordinated = 1, useSegmentation = 1, simulationMode = 0
```
and now I have a much better mb.totalSegments = 13.
@dc42 am I missing some settings in my kinematics, or is the segmentation computation problem actually higher up in GCodes/GCodes.cpp?
With the two axes (A & B) hardcoded into the segment calculation, the firmware & machine now behave as I desired: https://www.youtube.com/watch?v=TiV0zEc5spw - the nozzle stays in place, A & B rotate, while X, Y & Z compensate properly.
-
@xyzdims your analysis is interesting and imho correct. But there is something strange additionally: even if the distance is only calculated for X and Y, the value of moveLength should be > 0, because Y moves with B20. currentUserPosition[Y_AXIS] is probably not updated.
float moveLengthSquared = fsquare(currentUserPosition[X_AXIS] - initialUserPosition[X_AXIS]) + fsquare(currentUserPosition[Y_AXIS] - initialUserPosition[Y_AXIS]); should be > 0.
Update:
Probably the reason is
totalDistance = NormaliseLinearMotion(reprap.GetPlatform().GetLinearAxes());
in DDA. When there are no linear axes because of AxesBitmap::MakeFromRaw(0), totalDistance is 0. (I was not aware that this method is so important...) And totalDistance is used to calculate the current position, speed etc.
It's interesting that in the case of G1 B20, totalDistance is correctly 0, but for other reasons: the B, Y and Z movements nullify themselves, there is no movement of the nozzle. But this is not due to the code. In DDA, totalDistance for linear and rotational axes is calculated separately as an if-else.
-
@xyzdims I see what is happening, it is calculating segmentation based on XYZ movement only. That isn't sufficient for your kinematics when you do a move involving the rotary axes but little or no XYZ movement.
-
@joergs5 Interesting observation of yours - it really helps me understand RRF better, thanks for that. I need to ponder all this some more, as I can't yet see what the proper solution is.
-
@xyzdims I return the compliment: I have never dived that deeply into the topic of segmentation before. A classic win-win.
There is a saying: the truth is in the code. That's why I increasingly read the code in addition to the documentation.
-
It's interesting that in the case of G1 B20, totalDistance is correctly 0, but for other reasons. B, Y and Z movements nullify themselves
I thought about my sentence above. The DDA code calculated totalDistance to be > 0, which will result in extrusion. But in reality totalDistance is 0 and there should be no extrusion, otherwise you get an additional blob. The reason for > 0 is that rotational and linear axes are handled separately in DDA, but imho totalDistance should be calculated over all "moving/rotating, non-extrusion" axes. But the firmware needs to know all angles of the axes (linear and rotational) to calculate it, e.g. via settings in the config by G-code. An alternative is to calculate totalDistance in a kinematics-specific method; this could be used to resolve the LimitSpeedAndAccel method as well.
-
@dc42 I'll recheck, thank you for your hint.
-
@dc42 yes, I saw it; I didn't want to impose my hasty solution (better would be a loop over numVisibleAxes to calculate moveLength), and your response kind of indicated that you are pondering a proper solution. I'll keep my hasty patch for now until you come up with something proper.
Update: there might be a requirement to somehow transform degrees into moveLength (a multiplier would be sufficient - maybe not, on second thought), provided by the underlying kinematics for rotary axes; but I'm not fluent enough in C++ to do this properly.
-
@xyzdims I am thinking that as well as a parameter in Kinematics for segmenting linear movement, we need a parameter for segmenting rotary movement and specifying minimum segment length in degrees. Or perhaps move the code that calculates segmentation into the Kinematics class so that it can be overridden. I will think about this when I have time. Feel free to remind me a week from now.
-
@dc42 I thought about it over the past days; I'd definitely like to hear @JoergS5's thoughts as well, as his kinematics has more rotary axes than mine.
So, in my case there are just two rotary axes, Z rotation (A) and tilt (B), which means that the segmentation of A is determined by the angle of B:
- 0° B angle => A rotation segments = 1
- 90° B angle => A rotation segments = max
- 180° B angle => A rotation segments = 1
which leads me to something like
=> A rotation segmentation = ( sin(B angle) * A angle-delta * A seg-factor ) + 1
(This formula disregards the case where B is also rotating together with A; I likely have to calculate it twice or more, with B angle-start and B angle-end.)
Segmenting for B rotation does not depend on the other axes, therefore:
=> B rotation segmentation = B angle-delta * B seg-factor
In order to determine the segmentation of a single rotary axis properly, I need to know the other positions and angles (start & end) too, so those need to be passed on or made accessible in the method/function.
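For concreteness, a rough standalone sketch of those two formulas in C++ (nothing RRF-specific: segFactorA and segFactorB are made-up tuning constants, and taking the larger sin(B) of the start and end angle is just one possible way to handle B rotating at the same time):

```cpp
#include <cmath>
#include <algorithm>

// Hypothetical tuning constants (not from RRF): segments per degree of rotation
constexpr float segFactorA = 0.5f;
constexpr float segFactorB = 0.5f;

constexpr float DegreesToRadians(float deg) { return deg * 3.14159265f / 180.0f; }

// Segments for the A (Z-rotation) axis: scaled by sin(B), because at B = 0° or 180°
// an A rotation does not move the nozzle tip sideways at all.
unsigned int SegmentsForA(float aDelta, float bStart, float bEnd)
{
    const float sinB = std::max(std::fabs(std::sin(DegreesToRadians(bStart))),
                                std::fabs(std::sin(DegreesToRadians(bEnd))));
    return (unsigned int)(sinB * std::fabs(aDelta) * segFactorA) + 1;
}

// Segments for the B (tilt) axis: independent of the other axes
unsigned int SegmentsForB(float bDelta)
{
    return std::max(1u, (unsigned int)(std::fabs(bDelta) * segFactorB));
}
```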
In a way, I cannot determine the actual movement length for calculating the number of segments from the machine position delta, but only from the motor position delta, that is, by consulting CartesianToMotorSteps(). As I understand the current state of the code in GCodes.cpp, GCodes::DoStraightMove() works with machine positions.
So conceptually, any actual segmentation must be calculated from the actual motor position delta, because at the end of the day the motors move, and segmentation is a way to optimize those moves. The machine position is conceptually higher and more abstract, and there I can't really know the kinematics and how much the individual motors really move (!) - in a linear context machine pos ~ motor pos, but in a non-linear context with more complex inverse kinematics this is not the case anymore.
@dc42 It took me a bit longer to get back to this, I'm also not sure of the best way to proceed.
To sum up [my current state of conclusion]: the segmentation of (motor-) movements needs motor positions (not machine positions), and the Kinematics class can provide that, and optimize certain cases (as I pointed out near the beginning of this reply).
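Taken literally, such a motor-delta based segment count could look like the following standalone sketch (not RRF code; inverseKinematics and minMotorTravelPerSegment are placeholders for whatever the kinematics class would provide):

```cpp
#include <cstddef>
#include <cmath>
#include <algorithm>

constexpr size_t NumAxes = 5;   // X, Y, Z, A, B (placeholder)

// Derive the segment count from how far the MOTORS travel (in mm, before converting to steps),
// by evaluating the kinematics' inverse transform at the start and end machine positions.
unsigned int SegmentsFromMotorDelta(const float startMachinePos[NumAxes],
                                    const float endMachinePos[NumAxes],
                                    float minMotorTravelPerSegment,   // e.g. 0.2 mm, placeholder
                                    void (*inverseKinematics)(const float machinePos[NumAxes], float motorPos[NumAxes]))
{
    float startMotorPos[NumAxes], endMotorPos[NumAxes];
    inverseKinematics(startMachinePos, startMotorPos);
    inverseKinematics(endMachinePos, endMotorPos);

    // The motor that has to travel furthest decides how many segments are needed
    float maxTravel = 0.0f;
    for (size_t m = 0; m < NumAxes; ++m)
    {
        maxTravel = std::max(maxTravel, std::fabs(endMotorPos[m] - startMotorPos[m]));
    }
    return std::max(1u, (unsigned int)std::lround(maxTravel / minMotorTravelPerSegment));
}
```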
-
Adding to my own reply: the hidden issue I struggle with is that machine positions are real absolute positions [e.g. mm] of the tool tip, whereas motor positions are arbitrary units again (microsteps etc.). Segmenting optimizes away ridiculously short movements [in mm], while the motor position gives me a mechanical lower bound but no information about the absolute movement in [mm] - so, what does segmenting really do?
Does it define the shortest possible movement motor-wise, or the highest resolution of the motion of the tool tip?
-
@xyzdims said in Kinematics: Interpolation with Non-Linear Motor Moves:
what does segmenting really do
I currently cannot read your other questions, but about segmentation, I know that movements are segmented into short straight segments, analogous to how a circle can be approached by many short straight lines to calculate pi. You have control over how many segments to use. The more, the better the approximation of the curve, but more processing is needed and there are possibly jerks between the segments. (RRF tries to smooth the speed between segments, but I don't know how well.) Segmentation is not only needed to approach the line: the angular speed changes with position, so extrapolating the angular speed without segmentation would lead to an incorrect result (a straight line being printed as a curve).
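The circle analogy in numbers - a tiny standalone example, nothing RRF-specific: the total length of n straight chords approaches the true circumference as n grows.

```cpp
#include <cstdio>
#include <cmath>

int main()
{
    const double pi = std::acos(-1.0);
    // Approximate a circle of radius 1 by n straight chords:
    // the more segments, the closer the total length gets to the true circumference 2*pi.
    const int counts[] = { 4, 8, 16, 64, 256 };
    for (int n : counts)
    {
        const double chord = 2.0 * std::sin(pi / n);   // length of one straight segment
        std::printf("n = %3d segments -> length = %.6f (2*pi = %.6f)\n", n, n * chord, 2.0 * pi);
    }
    return 0;
}
```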
-
@xyzdims said in Kinematics: Interpolation with Non-Linear Motor Moves:
I need to know the other positions and angles (start & end) too,
If you really need it (and it is not supported by the firmware), a trick which I used could help you: when a new move starts, the method LimitPosition is called once. I store the planned path in this method, because the method gets initialCoords and finalCoords. But for G2/G3 initialCoords is empty (you can take the finalCoords of the previous move). I needed to declare the path variables as mutable.
BTW this trick is not good design, but I wanted to avoid changing anything in the main code, and wanted to use code only in the kinematics class.
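Roughly, the trick looks like this (a simplified sketch, not the real RRF LimitPosition signature, which has more parameters and a return value; the point is only that a const method can cache the path into mutable members):

```cpp
#include <cstddef>
#include <cstring>
#include <algorithm>

constexpr size_t MaxAxes = 9;   // placeholder

class MyKinematics
{
public:
    // LimitPosition is const in RRF, so anything we cache from it has to be mutable.
    void LimitPosition(const float initialCoords[], const float finalCoords[], size_t numAxes) const
    {
        const size_t n = std::min(numAxes, MaxAxes);
        if (initialCoords != nullptr)
        {
            std::memcpy(cachedInitial, initialCoords, n * sizeof(float));
        }
        else
        {
            // e.g. for G2/G3: fall back to the final coords of the previous move
            std::memcpy(cachedInitial, cachedFinal, n * sizeof(float));
        }
        std::memcpy(cachedFinal, finalCoords, n * sizeof(float));
    }

private:
    mutable float cachedInitial[MaxAxes] = { };
    mutable float cachedFinal[MaxAxes] = { };
};
```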
-
@dc42 I think I have a conceptual solution for my dilemma:
RRF right now (as of 3.4), as far as I understood, has:
- machine position [mm]
- motor position [microsteps]
but for 5-axis PAX kinematics I need to think in 3 layers:
- machine position [mm] - that is what the G-code states
- motor position [mm] - what the motors have to do in [mm] after the inverse kinematics is calculated
- motor steps [microsteps] - the actual steps to perform
Right now I do 2 & 3 in CartesianToMotorSteps(), which isn't ideal, and the trick with LimitPosition() that @JoergS5 mentioned, to get the start & end machine position, helps a bit.
I think in the long term RRF might introduce layer 2) and make a hard distinction between layers 1, 2 and 3.
The segmentation count and length, in my opinion, must be calculated from 2), with 3) as a lower bound. If segmentation is only calculated at level 1), then the other motor moves involved as part of the inverse kinematics cannot be determined, or CartesianToMotorSteps() needs to be called (where the inverse kinematics is calculated) - but with only motor steps given back, that's not sufficient detail, or is it?
In a nutshell, machine movements do not reflect actual motor movements; no assumption can be made outside of Kinematics about which motors actually have to move and how much (!!) - therefore such a layer 2) might be worth introducing in the long term, e.g. Kinematics::CalculateMotorPosition().
From top down:
- GCodes::DoStraightMove(): G-code positions in [mm]
- Kinematics::LimitPosition()
- Kinematics::CalculateMotorPosition() [new]: transforms "tool or machine position" into "motor position", still in [mm], e.g. by applying the inverse kinematics; if not implemented, machine position [mm] => motor position [mm]
- Kinematics::MotorPositionToMotorSteps() (formerly known as CartesianToMotorSteps()): calculates [mm] into [steps]; if not implemented, simply do motorPos[axis] * stepsPerMm[axis]
- DDA::*(): deals with actual steps / time
or in short:
MachinePosition [mm] -> MotorPosition [mm] -> MotorSteps [steps]
Anyway, these are my thoughts based on my limited comprehension of RRF.
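A rough sketch of how such a split could look as an interface - purely hypothetical, these are the method names proposed above, not existing RRF API, and the default implementations are the pass-through behaviour described:

```cpp
#include <cstddef>
#include <cstdint>
#include <cmath>

// Hypothetical 3-layer kinematics interface, NOT existing RRF code
class HypotheticalKinematics
{
public:
    virtual ~HypotheticalKinematics() { }

    // Layer 1 -> 2: machine position [mm] -> motor position [mm] (inverse kinematics).
    // Default: pass-through, i.e. machine position [mm] => motor position [mm].
    virtual void CalculateMotorPosition(const float machinePos[], float motorPos[], size_t numAxes) const
    {
        for (size_t axis = 0; axis < numAxes; ++axis)
        {
            motorPos[axis] = machinePos[axis];
        }
    }

    // Layer 2 -> 3: motor position [mm] -> motor steps [microsteps].
    // Default: simply motorPos[axis] * stepsPerMm[axis].
    virtual void MotorPositionToMotorSteps(const float motorPos[], const float stepsPerMm[],
                                           int32_t motorSteps[], size_t numAxes) const
    {
        for (size_t axis = 0; axis < numAxes; ++axis)
        {
            motorSteps[axis] = (int32_t)std::lround(motorPos[axis] * stepsPerMm[axis]);
        }
    }
};
```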