Nabrio Help
Pose Estimation

Slot Usage: 3

Overview

Pose estimation example

The Pose Estimation node detects human keypoints (body joints) and assembles them into full pose skeletons from the input frame. It can optionally classify each detected pose into a high-level state such as standing or sitting.

Use this node for ergonomics monitoring, activity recognition, or triggering rules based on body posture.

Input

Input Image

image required

The image frame to analyze. Connect this to a camera or upstream image output.

Model

string required

Pose estimation model to use.

Available values:

  • ONTFL_MOVENET_LIGHTNING_SINGLEPOSE — fastest; suited for real-time single-person detection.
  • ONTFL_MOVENET_THUNDER_SINGLEPOSE — more accurate single-person model.
  • ONTFL_MOVENET_LIGHTNING_MULTIPOSE — supports multiple people simultaneously.
  • ONTF2_CENTERNET — CenterNet-based model.

Choose a model based on the number of people in frame and your speed/accuracy requirements.

Confidence Threshold

number required

Minimum confidence score to keep a detected pose. See Confidence Threshold for tuning guidance.

Default: 0.1
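To illustrate what this threshold does, here is a minimal Python sketch of filtering detections by confidence. The dictionary layout (a `score` field alongside `bbox`) is an illustrative assumption, not the node's internal format.

```python
# Illustrative only: field names ("score", "bbox") are assumed, not Nabrio's
# internal representation.
CONFIDENCE_THRESHOLD = 0.1  # node default

detections = [
    {"bbox": [10, 20, 50, 120], "score": 0.82},
    {"bbox": [200, 40, 60, 130], "score": 0.05},  # below threshold, dropped
]

# Keep only poses whose confidence meets the threshold.
kept = [d for d in detections if d["score"] >= CONFIDENCE_THRESHOLD]
print(len(kept))  # 1
```

Raising the threshold trades missed detections for fewer false positives; lowering it does the reverse.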

NMS Threshold

number required

Non-Maximum Suppression threshold for overlapping pose detections. See NMS Threshold for tuning guidance.

Default: 0.5
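Non-Maximum Suppression keeps the highest-scoring detection among heavily overlapping ones. The sketch below shows the standard greedy NMS algorithm over `[x, y, width, height]` boxes to illustrate what the threshold controls; it is not the node's actual implementation.

```python
def iou(a, b):
    # Intersection-over-union of two [x, y, width, height] boxes.
    ax1, ay1, aw, ah = a
    bx1, by1, bw, bh = b
    ix = max(0, min(ax1 + aw, bx1 + bw) - max(ax1, bx1))
    iy = max(0, min(ay1 + ah, by1 + bh) - max(ay1, by1))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def nms(detections, threshold=0.5):
    # Greedy NMS: take detections best-first, drop any box that overlaps
    # an already-kept box by more than the threshold.
    ordered = sorted(detections, key=lambda d: d["score"], reverse=True)
    kept = []
    for d in ordered:
        if all(iou(d["bbox"], k["bbox"]) < threshold for k in kept):
            kept.append(d)
    return kept
```

A lower threshold suppresses overlapping detections more aggressively, which helps when one person is reported twice but can merge two people standing close together.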

Analyze Pose

boolean required

When enabled, the node evaluates each detected pose and assigns a high-level state label (for example standing, sitting, lying) based on keypoint geometry. The state is included in each result object.
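The node's actual classification logic is internal. As an illustration of how keypoint geometry can distinguish these states, a simple heuristic might compare torso orientation and thigh extension. The keypoint names `leftHip` and `leftKnee` are assumed here by analogy with the names shown on this page.

```python
def classify_pose(kp):
    # Illustrative heuristic only, not the node's algorithm.
    # kp maps keypoint name -> (x, y) in image coordinates (y grows downward).
    torso = abs(kp["leftHip"][1] - kp["leftShoulder"][1])   # vertical torso span
    width = abs(kp["leftHip"][0] - kp["leftShoulder"][0])   # horizontal torso span
    if width > torso:
        # Torso is more horizontal than vertical.
        return "lying"
    thigh = abs(kp["leftKnee"][1] - kp["leftHip"][1])       # vertical thigh span
    # A short vertical thigh span suggests bent legs.
    return "sitting" if thigh < 0.5 * torso else "standing"
```

Real classifiers would use more joints and confidence weighting, but the principle is the same: the state label is derived purely from the relative positions of detected keypoints.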

Overlay Results

boolean required

Whether to draw skeleton lines and joint points on the output frame. See Overlay Results.

Draw Label Text

boolean required

When enabled, draws the name of each keypoint (for example leftShoulder, rightKnee) on the overlay.

Output

Overlay Image

image

Output frame from the node. If overlays are enabled, detected skeletons and joint labels are drawn on this frame.

Detected Count

integer

Number of persons or poses detected in the current frame.

Detected Objects

array

Array of pose result objects. Each object contains:

  • bbox array: Bounding box [x, y, width, height] around the detected person.
  • keypoints array: Array of joint positions, each with x, y, and confidence.
  • poseLabel string: Analyzed pose state label (for example standing). Only present when Analyze Pose is enabled.
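Downstream logic can iterate over this array. For example, a rule that counts standing people might look like the following sketch; the sample values are invented for illustration, and `poseLabel` is only present when Analyze Pose is enabled, hence the `.get()`.

```python
# Sample Detected Objects payload (invented values, structure as documented).
results = [
    {"bbox": [12, 8, 90, 210],
     "keypoints": [{"x": 40, "y": 30, "confidence": 0.91}],
     "poseLabel": "standing"},
    {"bbox": [150, 10, 80, 200],
     "keypoints": [{"x": 180, "y": 35, "confidence": 0.88}],
     "poseLabel": "sitting"},
]

# Count poses labelled "standing"; .get() tolerates a missing poseLabel.
standing = sum(1 for r in results if r.get("poseLabel") == "standing")
print(standing)  # 1
```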
