Pipeline Stream Element

A Pipeline is an order-dependent list of elements or operations. Pipeline elements take their configurable parameters as ‘:’-delimited arguments. In the lists below, f refers to a floating-point value, s to a string, e to an enum, l to a list and i to an integer. The following types of pipeline elements are supported:

Inference

  • <s-hw_mode>:<s-model_name>:<s-mode>:<f-score_threshold>:<f-iou_threshold> : invokes a deep learning model. The following configurable params are supported:

    • s-hw_mode: valid hardware mode {cpu:igpu:gpu:fpga}.

    • s-model_name: valid model names as listed in Megh’s model zoo directory. Please refer to the model section.

    • s-mode: execution mode, either ‘s’ (latency mode) or ‘a’ (throughput mode).

    • f-score_threshold: the score threshold {value between 0-1}.

    • f-iou_threshold: the IOU threshold {value between 0-1}.
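
    Example (the threshold values are illustrative; the model name is taken from the model list below):

      cpu:yolo-v3-tiny:s:0.5:0.5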

Analytics

The following analytics use cases are currently supported. All of them follow the object detection inference element in the pipeline and operate on its bounding boxes. Multiple use cases can be chained back to back; a combined example follows this list.

  • ppe-compliance : flags any ‘person’ bounding box that doesn’t have an associated ‘hardhat’ and ‘vest’ bounding box.

  • social-distancing:<f-threshold> : applies the social distancing algorithm to the list of detections. The following configurable params are supported:

    • f-threshold: object nearness threshold {value between 0-1}.
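
    Example (the threshold value is illustrative):

      social-distancing:0.5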

  • intrusion-detection:<s-zone_config_path>:<s-region_to_check>:<f-ratio> : applies intrusion detection in a marked zone for the list of detections. The following configurable params are supported:

    • s-zone_config_path: the relative path to the JSON file marking the violation zones. Please refer to the tools section for steps to generate this file.

    • s-region_to_check: the region of the bounding box that should overlap for the intrusion to be registered {top:bottom:left:right}

    • f-ratio: the ratio of the bounding box that must fall within the marked zone for an intrusion to be detected {value between 0-1}
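
    Example (the file path and values are illustrative):

      intrusion-detection:zones.json:bottom:0.3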

  • people-tracking:<i-max_age>:<i-hist_len>:<i-sampling_frequency> : applies tracking on bounding boxes detected. The following configurable params are supported:

    • i-max_age: the number of consecutive iterations for which the tracker needs to be maintained for any object after it has stopped being detected.

    • i-hist_len: the max length (in number of points) for which the tracking history can be retained. The larger this value the longer the trace for any detected box.

    • i-sampling_frequency: the frequency at which the centroid coordinates of the tracker are appended to the history list
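
    Example (the values are illustrative):

      people-tracking:10:50:1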

  • BEV:<i-ROI>:<i-dimensions> : applies Bird’s Eye View transform on bounding boxes detected. The following configurable params are supported:

    • i-ROI: the Region of Interest where the BEV transform will be applied. This has to be a quadrilateral with at least two opposite sides that are parallel in real life (a list of 8 coordinates: x1,y1,x2,y2,…,y4)

    • i-dimensions: the dimensions of the image (a list of 2 integers)
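
    Example (the coordinates and dimensions are illustrative; the bracketed list notation is an assumption, mirroring the form shown for l-inside_direction below):

      BEV:[100,200,500,200,550,600,80,600]:[1280,720]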

  • line-crossing:<s-line_crossing_json_path>:<l-inside_direction>:<i-min_hist>:<s-region_to_check>:<f-ratio> : detects violations of a marked line used as a boundary. This use case also reports an up-count and a down-count for every boundary defined, based on the direction in which detected people cross the boundary. The following configurable params are supported:

    • s-line_crossing_json_path: the relative path to the JSON file marking the violation lines. Please refer to the tools section for steps to generate this file.

    • l-inside_direction: a list marking the orientation of the region of interest for each violation line. Valid values are {-1:1}, chained in a list, e.g. [1,-1,1], with each value mapping to a specific violation line defined in the JSON file. This value tells the algorithm which side of the line is to be considered occupied

    • i-min_hist: the minimum length of the tracking history (in number of points) for which intersection with the violation_line is checked.

    • s-region_to_check: the region of the bounding box that should overlap for the violation to be registered {top:bottom:left:right}

    • f-ratio: the ratio of the bounding box that must fall within the marked region for a violation to be detected {value between 0-1}

    Note:

    • The functionality of the line-crossing stage depends on inputs from the people-tracking stage. Please make sure that the people-tracking stage always precedes the line-crossing stage while building your pipeline.

    • It is recommended to draw the violation_line at the bottom of the entrance/exit to get more accurate results.
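
    Example (the file path and values are illustrative):

      line-crossing:lines.json:[1,-1]:5:bottom:0.3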

  • fire-detection:<i-maxROF>:<i-dim1>:<i-dim2> : detects and returns a bounding rectangle around the fire. The following configurable params are supported:

    • i-maxROF: The number of fire regions to be detected per frame {default:1}

    • i-dim1: the width to which the input frame is reduced {default:640}

    • i-dim2: the height to which the input frame is reduced {default:480}

    Note:

    • This algorithm is limited to frame resize values of 640x480. Please do not specify any higher resolutions.

    • Please make sure that the input frames are larger than 640x480 in dimension. This is again a limitation of the current algorithm.
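
    Example (using the default values):

      fire-detection:1:640:480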

  • collision-detection : detects collisions and near-miss scenarios between vehicles. Returns two boolean flags per detection (one for collision, one for near-miss), along with aggregated collision and near-miss counts.

    Note:

    • This algorithm takes in detections of the following object classes - ‘person’, ‘car’ and ‘bicycle’

    • The algorithm only detects the occurrence of collision between vehicles, it does not perform any prediction.

    • This algorithm uses change in acceleration of vehicles as a primary threshold for collision, hence it will not work in scenarios where collisions occur without acceleration.

    • A near-miss indicates that the vehicle’s change in acceleration is above a certain threshold. This condition is checked for all vehicles, independent of other vehicles.

  • smoke-fire-detection : counts every object detected by the fire-smoke model per frame.
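
Putting these together, a pipeline that chains a person-detection inference element with people-tracking and line-crossing could be expressed as the following ordered list of elements, one per line (the model choice, file path and parameter values are illustrative):

    cpu:person-detection-0200:s:0.5:0.5
    people-tracking:10:50:1
    line-crossing:lines.json:[1]:5:bottom:0.3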

Other

  • passthrough : No OP

  • image-encoder:<s/l-item_name(s)> : Encode the input image as JPEG

    • s/l-item_name(s) : Optional – flag particular items with a label or analytics key; you can pass in either a string or a list of strings.
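
    Example (the item name is illustrative):

      image-encoder:person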

  • draw-pose : Annotates the image with any pose data that is present in the meta data.

  • image-resize:<f-scale> : resize the image by a configurable scale factor.

    • f-scale: configurable floating point scale (>0 and <1)
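
    Example (scaling the image by a factor of 0.5):

      image-resize:0.5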

  • blur-item:<s-item_name>:<e-circle/box> : blur particular items with a shape.

    • s-item_name: a particular label in the output image

    • e-circle/box: configurable area to be blurred {circle:box}
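
    Example (the item name is illustrative):

      blur-item:person:circle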

  • flag-item:<s/l-item_name(s)>:<s/l-flag_labels> : flag particular items with a label.

    • s/l-item_name(s) : either *, a string, or a list of strings for the particular labels you want to flag.

    • s/l-flag_labels : either a single string that applies to all the listed items, or a list that performs one-to-one flagging with the item list.
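
    Example (flagging every ‘person’ item with an illustrative label):

      flag-item:person:violation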

  • filter-item:<s/l-item_name(s)> : filter particular items with a label.

    • s/l-item_name(s) : either *, a string, or a list of strings.
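
    Example (the item name is illustrative):

      filter-item:person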

Models

This is a list of the currently supported models; please use these in place of <s-model_name> in inference stages.

  • CPU/iGPU :

    • efficientdet-d0

    • efficientdet-d1

    • efficientdet-d2

    • efficientdet-d3

    • efficientdet-d4

    • efficientdet-d5

    • efficientdet-d6

    • efficientdet-d7

    • face-detection-0200

    • human-pose-estimation-0002

    • person-detection-0200

    • person-detection-0201

    • person-detection-0202

    • person-vehicle-bike-detection-crossroad-0078

    • ppe-detection

    • yolo-v2-tiny

    • yolo-v3-tiny

    • yolo-v3-tiny-fire-smoke

    • yolo-v3-tiny-mask