Nabrio Help

OCR

Slot Usage: 3

Overview

[Image: OCR example]

The OCR node detects text regions in an input frame and recognizes the characters within those regions.

Use this node to extract printed or handwritten text from documents, labels, signs, or displays. The node runs a two-stage pipeline: a detection model first locates text bounding boxes, then a recognition model reads the characters inside each box.
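The two-stage flow can be sketched as follows. Here `detect` and `recognize` are hypothetical stand-ins for the configurable detection and recognition models, not Nabrio APIs:

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class TextResult:
    bbox: Box
    text: str
    confidence: float

def run_ocr_pipeline(frame, detect, recognize) -> List[TextResult]:
    # Stage 1: the detection model locates text bounding boxes.
    boxes = detect(frame)
    # Stage 2: the recognition model reads the characters inside each box.
    return [TextResult(box, *recognize(frame, box)) for box in boxes]
```

With Detection Model set to NONE, `detect` would simply return a single box covering the whole frame, so the recognition model reads the entire image.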

Input

Input Image

image required

The image frame to process. Connect this to a camera or upstream image output.

Detection Model

string required

Model used to locate text regions in the frame.

Values:

  • DB (default) — Differentiable Binarization; accurate and suitable for most printed text scenarios.
  • EAST — fast scene-text detector; works well for horizontal text in natural scenes.
  • TESSERACT — uses Tesseract for both detection and recognition in one step; set Recognition Model to TESSERACT when using this.
  • NONE — skips detection and passes the entire frame to the recognition model directly.

Recognition Model

string required

Model used to read characters from each detected text region.

Values:

  • CRNN (default) — Convolutional Recurrent Neural Network; accurate for printed text.
  • TESSERACT — Tesseract OCR engine; use together with Detection Model: TESSERACT for an all-Tesseract pipeline.
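The Tesseract pairing rule above can be expressed as a simple check. This is a hypothetical helper illustrating the documented constraint, not part of the node's API:

```python
def check_model_pair(detection_model: str, recognition_model: str) -> None:
    # Documented constraint: when Detection Model is TESSERACT
    # (Tesseract handles detection and recognition in one step),
    # Recognition Model must be TESSERACT as well.
    if detection_model == "TESSERACT" and recognition_model != "TESSERACT":
        raise ValueError(
            "Set Recognition Model to TESSERACT when Detection Model is TESSERACT"
        )
```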

Overlay Results

boolean required advanced

Whether to draw text region boxes and recognized text on the output frame. See Overlay Results.

Draw Lines / Text / Confidence

boolean

Fine-grained controls for what is included in the overlay when Overlay Results is enabled: region outlines, recognized text strings, and/or confidence scores.

Detection Tuning

number array

Additional detection parameters that appear based on the selected Detection Model:

  • defConfThresh — default confidence threshold for text region acceptance.
  • boxWRange and boxHRange — [min, max] pixel size filters; boxes outside the range are discarded.
  • DB-specific: binThresh (binarization threshold), polyThresh (polygon contour threshold), unclipRatio (controls text region expansion).
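A sketch of how the generic tuning parameters interact. The parameter names mirror the fields above, but the filtering logic is an assumption about typical behavior, not the node's exact implementation:

```python
from typing import List, Tuple

Detection = Tuple[Tuple[int, int, int, int], float]  # ((x, y, w, h), confidence)

def filter_detections(
    detections: List[Detection],
    def_conf_thresh: float,
    box_w_range: Tuple[int, int],
    box_h_range: Tuple[int, int],
) -> List[Detection]:
    """Keep boxes that meet the confidence threshold (defConfThresh)
    and whose width/height fall inside boxWRange / boxHRange."""
    w_min, w_max = box_w_range
    h_min, h_max = box_h_range
    return [
        ((x, y, w, h), conf)
        for (x, y, w, h), conf in detections
        if conf >= def_conf_thresh
        and w_min <= w <= w_max
        and h_min <= h <= h_max
    ]
```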

Output

Overlay Image

image

Output frame from the node. If overlays are enabled, text regions and recognized strings are annotated on this frame.

Detected Texts

array

Array of recognized text objects. Each object contains:

  • bbox or quadrilateral — the bounding region of the detected text.
  • text — the recognized string.
  • confidence — recognition confidence score.
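A downstream script might consume Detected Texts like this. The dict keys follow the fields listed above, and the confidence cutoff of 0.5 is an arbitrary example value:

```python
from typing import Dict, List

def extract_text(detected_texts: List[Dict], min_confidence: float = 0.5) -> str:
    """Join recognized strings above a confidence cutoff,
    ordered top-to-bottom by bounding-box y coordinate."""
    kept = [d for d in detected_texts if d["confidence"] >= min_confidence]
    kept.sort(key=lambda d: d["bbox"][1])  # bbox = (x, y, w, h)
    return " ".join(d["text"] for d in kept)
```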
