Custom Element
Nimble supports several built-in pipeline elements and also offers Megh-provided pipeline elements to implement its suite of professional use cases. However, the primary design philosophy is to provide an "Open Analytics" framework: a platform that is open, flexible, and customizable. Therefore, Nimble provides a direct and easy way to build, integrate, and run custom pipeline elements, enabling customers to perform their own unique processing on pipeline metadata.
This is a complete guide for learning how to develop custom pipeline elements.
Element Manifest
Every pipeline element must define an element manifest - a YAML document that defines the high-level metadata for the element. This allows Nimble to scan for elements and populate the element catalog without actually loading any code modules into memory.
Schema
This section is a partial overview of the properties commonly found in an element manifest. For a complete specification, download the official JSON Schema for the manifest format: Element Manifest Schema
name
(required)
The unique registered name of the pipeline element. This is the name used when specifying elements for a pipeline.
title
Human-readable title (display name) of the pipeline element.
If omitted or empty, the name of the pipeline element is used as a fallback.
description
Human-readable description of the pipeline element.
type
The type of code module that implements the element. Currently, the only type of code module supported is "python" (the default), but Nimble may support alternative programming languages in the future.
entrypoint
(required)
The file path to the code module containing the entry point for the element. Relative paths are resolved from the directory of the manifest file.
Technically, a single code module may contain multiple elements, so the registered name is used to identify the correct element implementation within the code module.
params_schema
A JSON Schema (Draft-07) that defines the parameters that the element accepts during initialization. When args are specified for an element at runtime (see Element Initialization), the params_schema is used to validate those arguments before they are used for element initialization. If this property is omitted, the element does not accept any initialization parameters.
Wondering why it's called a "params" schema when it validates the "args" of an element?
In short, an argument is the actual value provided at runtime, whereas a parameter is the named placeholder or variable for that argument. The parameter schema defines the names and types of the initialization parameters for an element.
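To illustrate the relationship, here is a minimal sketch of that validation step using the jsonschema package. This is illustrative only, not Nimble's actual implementation, and the schema and args shown are hypothetical:

import jsonschema

# A hypothetical params_schema, as it would appear in an element manifest.
params_schema = {
    "type": "object",
    "properties": {
        "quality": {"type": "string", "enum": ["low", "medium", "high"]},
    },
}

# The args provided for the element at runtime.
args = {"quality": "high"}

# Raises jsonschema.ValidationError if the args do not satisfy the schema,
# so invalid arguments are rejected before element initialization.
jsonschema.validate(instance=args, schema=params_schema)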
config_schema
A JSON Schema (Draft-07) that defines the JSON that the element accepts as configuration. If this property is omitted, the element does not support accepting a configuration. See Element Configuration.
This schema only validates the mapping values of the config property, not the mapping itself:
{
    "name": "element-name",
    "args": {
        "key": "value"
    },
    "config": {
        "": {
            "key": "value"
        },
        "0": {
            "key": "value0"
        },
        "1,2": {
            "key": "value12"
        }
    }
}
input_schema
A JSON Schema (Draft-07) that defines the JSON metadata that the element requires as input. If this property is omitted, the element places no requirements on the input metadata.
output_schema
A JSON Schema (Draft-07) that defines the JSON metadata that the element outputs. If this property is omitted, the element places no requirements on the output metadata.
If the output_schema is identical to the input_schema, the string "input" can be used to avoid duplicating the schema:
input_schema:
  type: object
  properties:
    key:
      type: string
      enum: [value]
  required:
    - key
output_schema: input
Example
Line-crossing element manifest:
name: line-crossing
title: Line Crossing
description: Evaluates bbox intersection with a defined line as a boundary.
arch:
  - amd64
entrypoint: LineCrossing
params_schema:
  type: object
  properties:
    directions:
      description: List marking orientations of the region of interest.
      type: array
      items:
        type: integer
        enum: [-1, 1]
      minItems: 1
    history:
      description: The minimum length of tracking history (in number of points) for which intersection with the line is checked.
      type: integer
      minimum: 0
      maximum: 50
      default: 10
    region:
      description: The region of the bounding box that should overlap for the line crossing to be registered.
      type: string
      enum: [left, top, right, bottom, none]
      default: bottom
    ratio:
      description: The ratio of the bbox to be considered for line crossing to be detected.
      type: number
      minimum: 0.0
      maximum: 1.0
      default: 0.2
  required:
    - directions
config_schema:
  type: object
  properties:
    lines:
      type: array
      items:
        type: object
        properties:
          name:
            type: string
          coords:
            type: array
            minItems: 2
            maxItems: 2
            items:
              type: object
              properties:
                x:
                  type: number
                  minimum: 0.0
                  maximum: 1.0
                y:
                  type: number
                  minimum: 0.0
                  maximum: 1.0
              required: [x, y]
          id:
            type: string
            default: ""
        required: [name, coords]
input_schema:
  type: object
  # Rest of the schema omitted for brevity
output_schema:
  type: object
  # Rest of the schema omitted for brevity
Nimble Element SDK
Nimble exposes a class-based SDK for creating custom pipeline elements. The SDK also provides some utility functions for common operations.
This section assumes you have an element manifest defined as:
name: custom
description: Perform some custom processing of pipeline metadata at a certain quality
entrypoint: CustomElement
params_schema:
  type: object
  properties:
    quality:
      type: string
      enum: [low, medium, high]
      default: medium
config_schema:
  type: object
  properties:
    threshold:
      type: number
      minimum: 0.0
      maximum: 1.0
      default: 0.5
As of now, Nimble only has a Python SDK, but support for other programming languages may be added in the future.
Python SDK
Derive from Element
Import the Element type from the Nimble Element SDK:
from nimble.pipeline.Element import Element
Create your custom element type and initialize the class variable name to the registered name as it appears in the manifest:
class MyCustomElement(Element):
    name = "custom"

    def __init__(self, quality: str = "medium"):
        super().__init__()
        self.quality = quality
The name of the class type for your custom element does not matter; the entrypoint in the manifest specifies the file name of the code module (without the file extension), and the registered name is used to locate the element implementation within it.
The __init__ constructor must accept ONLY keyword arguments (kwargs) according to the params_schema defined in the element's manifest. Otherwise, the init_args_to_kwargs static class method may be overridden to modify or transform the args dictionary before it is passed to __init__.
class MyCustomElement(Element):
    name = "custom"

    def __init__(self, quality_level: str = "medium"):
        super().__init__()
        self.quality_level = quality_level

    @staticmethod
    def init_args_to_kwargs(**args) -> dict:
        return {
            "quality_level": args.get("quality", "medium")
        }
Implement Process Function
The process method is invoked by the Nimble execution engine for each pipeline metadata frame.
...
from nimble.pipeline.PipelineMetadata import PipelineMetadata

class MyCustomElement(Element):
    ...

    def process(self, meta: PipelineMetadata) -> PipelineMetadata:
        # Custom code goes here
        return meta
The method receives a single PipelineMetadata argument, meta, which is a data object with the following properties:
from numpy import ndarray

class PipelineMetadata:
    source_id: int              # read-only
    pipeline_id: int            # read-only
    channel_id: int             # read-only
    timestamp: float | None     # read-only
    start_time: float           # read-only
    latency: float              # read-only (computed from `start_time`)
    original_image_width: int   # read-only
    original_image_height: int  # read-only
    image_width: int            # read-only (computed from `image`)
    image_height: int           # read-only (computed from `image`)
    image: ndarray | None
    jpeg: bytes | None
    json: dict
    auxiliary: dict
The json dict is read/write and is meant to be manipulated by pipeline elements. Inference data will be populated for each inference type by a previous infer element:
{
    "detections": [...],
    "poses": [...],
    "classifications": [...],
    "embeddings": [...]
}
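For illustration, a process implementation might consume those inference results and record its own output in the same dict. This is a minimal sketch: the exact structure of each detection entry is defined by the upstream infer element, and the top-level "custom" key is just an example name.

...

class MyCustomElement(Element):
    ...

    def process(self, meta: PipelineMetadata) -> PipelineMetadata:
        # Inference results populated by an upstream `infer` element.
        detections = meta.json.get("detections", [])

        # Record this element's own output in the read/write `json` dict
        # (the "custom" key is an illustrative choice, not a convention).
        meta.json["custom"] = {
            "quality": self.quality,
            "num_detections": len(detections),
        }
        return meta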
If the process method needs to access the runtime configuration, the base Element class provides a get_config method:
...

class MyCustomElement(Element):
    ...

    def process(self, meta: PipelineMetadata) -> PipelineMetadata:
        per_channel_config = self.get_config(meta)
        # Custom code goes here
        return meta
The get_config method accepts a PipelineMetadata argument because the runtime configuration can be different for different channels. See Element Configuration.
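Continuing the example manifest above, here is a sketch of how the per-channel threshold from the config_schema might be applied. It assumes get_config returns the configuration values as a plain dict (empty or None when unconfigured), and the "confidence" detection attribute is hypothetical:

...

class MyCustomElement(Element):
    ...

    def process(self, meta: PipelineMetadata) -> PipelineMetadata:
        # Per-channel configuration validated against `config_schema`;
        # fall back to the schema default when no config was provided.
        config = self.get_config(meta) or {}
        threshold = config.get("threshold", 0.5)

        # Illustrative use of the threshold: count detections whose
        # (assumed) "confidence" attribute meets the configured value.
        detections = meta.json.get("detections", [])
        meta.json["custom"] = {
            "num_confident": sum(
                1 for d in detections if d.get("confidence", 0.0) >= threshold
            ),
        }
        return meta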
Expose Static Metadata (optional)
By default, the runtime configuration is private to the pipeline element. However, the pipeline element can override the get_static_metadata method to expose the runtime configuration (or any other JSON-compatible data) to the channel's static metadata endpoint.
...
from typing import Optional

from nimble.api.formatters.Formattable import Formattable, IdentityFormattable

class MyCustomElement(Element):
    ...

    def get_static_metadata(self, channel_id: int) -> Optional[Formattable]:
        config = self.get_config(channel_id)
        return IdentityFormattable(config) if config else None
Nimble's Formatters feature is deprecated, so this method's return type will change to a dict when that feature is removed.
Discovery
When Nimble starts up, it scans the file system for element manifests in specific directory locations. One of those directory locations is the built-in internal elements directory. The other directory location is configurable by the user through environment variables:
NIMBLE_DATA_DIR: Overrides the base directory for runtime data (models, elements, etc.). Default: the current working directory.
NIMBLE_ELEMENT_DIR: Overrides the directory for element data. Default: ${NIMBLE_DATA_DIR}/elements
Nimble will scan the directory specified by NIMBLE_ELEMENT_DIR recursively for element manifests, perform some compatibility filtering, and populate the element catalog.
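For example, a custom element could be organized like this under the element directory. The file and directory names below are illustrative only; the manifest simply needs to be discoverable by the recursive scan:

${NIMBLE_ELEMENT_DIR}/
  custom/
    element.yaml        # the element manifest (file name illustrative)
    CustomElement.py    # the code module referenced by `entrypoint`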
Invocation
Once a custom pipeline element has been discovered by Nimble, the element can be used in a pipeline in the same way as any other built-in or Megh-provided element.
See the Elements documentation in the API section for how to use elements to create a pipeline.
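As a sketch, the custom element from the SDK walkthrough could be specified using the same name/args/config structure shown earlier; the surrounding pipeline definition is covered in the Elements documentation:

{
    "name": "custom",
    "args": {
        "quality": "high"
    },
    "config": {
        "": {
            "threshold": 0.75
        }
    }
}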