Pose Estimation
Overview

The Pose Estimation node detects human keypoints (body joints) and full pose structures from the input frame. It can optionally classify each detected pose into a high-level state such as standing or sitting.
Use this node for ergonomics monitoring, activity recognition, or triggering rules based on body posture.
Input
Input Image
image (required). The image frame to analyze. Connect this to a camera or an upstream image output.
Model
string (required). The pose estimation model to use.
Available values:
- ONTFL_MOVENET_LIGHTNING_SINGLEPOSE: fastest; suited for real-time single-person detection.
- ONTFL_MOVENET_THUNDER_SINGLEPOSE: more accurate single-person model.
- ONTFL_MOVENET_LIGHTNING_MULTIPOSE: supports multiple people simultaneously.
- ONTF2_CENTERNET: CenterNet-based model.
Choose a model based on the number of people in frame and your speed/accuracy requirements.
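The ONTFL_*/ONTF2_* names appear to be platform-internal identifiers. Assuming the ONTFL_MOVENET_* entries wrap Google's public MoveNet releases, a minimal standalone sketch of running the Lightning single-pose model through TensorFlow Hub looks like this; the model URL, input size, and output layout come from the public MoveNet documentation, not from this node:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Lightning variant; Thunder uses the same interface with a 256x256 input.
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

image = tf.io.decode_jpeg(tf.io.read_file("frame.jpg"))  # H x W x 3, uint8
inp = tf.image.resize_with_pad(tf.expand_dims(image, axis=0), 192, 192)
outputs = movenet(tf.cast(inp, tf.int32))

# output_0 has shape [1, 1, 17, 3]: 17 keypoints as normalized (y, x, score).
keypoints = outputs["output_0"].numpy()[0, 0]
```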
Confidence Threshold
number (required). Minimum confidence score required to keep a detected pose. See Confidence Threshold for tuning guidance.
Default: 0.1
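For intuition, a threshold like this is typically applied as a simple filter over per-pose scores. The node's internal scoring is not documented here; this is only an illustrative sketch:

```python
# Illustrative only: drop poses whose overall score is below the threshold.
# 'poses' is assumed to be a list of dicts carrying a 'score' in [0, 1].
def filter_poses(poses, threshold=0.1):
    """Keep only poses whose overall confidence meets the threshold."""
    return [p for p in poses if p["score"] >= threshold]
```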
NMS Threshold
number (required). Non-maximum suppression (NMS) threshold for overlapping pose detections. See NMS Threshold for tuning guidance.
Default: 0.5
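The node's exact suppression logic is not documented; a classic greedy IoU-based NMS over the per-person bounding boxes would behave like the sketch below, where the bbox layout [x, y, width, height] matches the Detected Objects schema in the Output section:

```python
# Illustrative greedy IoU-based NMS over per-person bounding boxes.
def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(poses, iou_threshold=0.5):
    """Drop any pose that overlaps a higher-scoring kept pose too much."""
    kept = []
    for p in sorted(poses, key=lambda p: p["score"], reverse=True):
        if all(iou(p["bbox"], k["bbox"]) < iou_threshold for k in kept):
            kept.append(p)
    return kept
```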
Analyze Pose
boolean (required). When enabled, the node evaluates each detected pose and assigns a high-level state label (for example standing, sitting, or lying) based on keypoint geometry. The state is included in each result object.
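The node's actual heuristics are internal; the following is only a plausible sketch of how keypoint geometry can be mapped to a state label, assuming keypoints are pixel coordinates with y growing downward, keyed by the camelCase names used in the overlay:

```python
# Plausible sketch only, not the node's documented logic.
def classify_pose(kp):
    torso_h = abs(kp["leftHip"][1] - kp["leftShoulder"][1])
    torso_w = abs(kp["leftHip"][0] - kp["leftShoulder"][0])
    if torso_w > torso_h:
        return "lying"      # torso closer to horizontal than vertical
    hip_to_knee = abs(kp["leftKnee"][1] - kp["leftHip"][1])
    if hip_to_knee < 0.5 * torso_h:
        return "sitting"    # knees raised toward hip height
    return "standing"
```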
Overlay Results
boolean (required). Whether to draw skeleton lines and joint points on the output frame. See Overlay Results.
Draw Label Text
boolean (required). When enabled, draws the name of each keypoint (for example leftShoulder or rightKnee) on the overlay.
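As a rough illustration of what these two options draw, an OpenCV-based overlay might look like the sketch below. The edge list is a small subset chosen for brevity, and keypoints are assumed to be integer pixel coordinates keyed by name:

```python
# Illustrative overlay drawing; not the node's actual rendering code.
import cv2

EDGES = [
    ("leftShoulder", "rightShoulder"),
    ("leftShoulder", "leftElbow"),
    ("leftElbow", "leftWrist"),
    ("leftShoulder", "leftHip"),
]

def draw_pose(frame, kp, draw_labels=False):
    for a, b in EDGES:
        if a in kp and b in kp:
            cv2.line(frame, kp[a], kp[b], (0, 255, 0), 2)   # skeleton line
    for name, (x, y) in kp.items():
        cv2.circle(frame, (x, y), 3, (0, 0, 255), -1)       # joint point
        if draw_labels:                                     # Draw Label Text
            cv2.putText(frame, name, (x + 4, y - 4),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 255, 255), 1)
    return frame
```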
Output
Overlay Image
image. The output frame from the node. If overlays are enabled, detected skeletons and joint labels are drawn on this frame.
Detected Count
integer. The number of persons (poses) detected in the current frame.
Detected Objects
array. An array of pose result objects. Each object contains:
- bbox (array): Bounding box [x, y, width, height] around the detected person.
- keypoints (array): Array of joint positions, each with x, y, and confidence.
- poseLabel (string): Analyzed pose state label (for example standing). Only present when Analyze Pose is enabled.
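A hypothetical downstream consumer of this array might look like the following. The field names follow the schema above; the handler name and the 0.3 keypoint cutoff are illustrative assumptions:

```python
# Hypothetical consumer of Detected Objects; only the field names are
# taken from the schema above.
def on_results(detected_objects):
    for obj in detected_objects:
        x, y, w, h = obj["bbox"]
        label = obj.get("poseLabel", "unknown")  # only set when Analyze Pose is on
        strong = [k for k in obj["keypoints"] if k["confidence"] >= 0.3]
        print(f"person at ({x}, {y}) size {w}x{h}: {label}, "
              f"{len(strong)} confident joints")
```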