Add Custom Model

Follow these steps to add a custom MobileNetSSD or TinyYolo model to the Nimble framework:

  1. Generate the OpenVINO IR model if it does not already exist.

  2. Place the model and its related files under the ./deploy/models dir, using the directory structure shown:

    .
    ├── CPU
    │   ├── mobilenet_ssd_based_custom_model
    │   │   ├── FP32
    │   │   │   ├── mobilenet_ssd_based_custom_model.bin
    │   │   │   └── mobilenet_ssd_based_custom_model.xml
    │   │   └── labels.txt
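For example, the layout above can be created with a few shell commands. This is a sketch: the model name reuses the hypothetical mobilenet_ssd_based_custom_model from the tree above, and the source locations of the IR files are illustrative.

```shell
# Create the expected layout for the custom model; the model name and
# the source locations of the IR files are illustrative.
MODEL=mobilenet_ssd_based_custom_model
mkdir -p "./deploy/models/CPU/${MODEL}/FP32"

# Copy the OpenVINO IR files and the labels file into place:
# cp "${MODEL}.bin" "${MODEL}.xml" "./deploy/models/CPU/${MODEL}/FP32/"
# cp labels.txt "./deploy/models/CPU/${MODEL}/"
```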
  3. Take one of the following approaches to add pre- and post-processing support for the new model:

    • If the model being added is similar to one of the models with already built-in support, then:

      • Find the modules in ./deploy/models/derived which can be used.
        Example: ./deploy/models/derived/PersonDetection.py can support MobileNetSSD models.
      • Modify the models member variable in the chosen derived class implementation to include the newly added model.
        For example:
        models = ["person-detection-0200", "person-detection-0201", "person-detection-0202", "mobilenet_ssd_based_custom_model"]
    • Alternatively, add a Python file implementing the pre- and post-processing stages of the model.

      A sample is shown here:

      ```python
      from nimble.models.Detector import Detector

      class mobilenet_ssd_based_custom_model(Detector):
          inference_type = "detection"
          models = ["mobilenet_ssd_based_custom_model"]

          @staticmethod
          def preprocess(image):
              return image

          @staticmethod
          def postprocess(data, params):
              return data["detection_out"]
      ```

* The support class should derive from the ``Detector`` class, as shown in the sample.
* The support class should declare and initialize the member variable ``inference_type`` to one of the following values:
  * ``detection``: supports object-detection-based models
  * ``pose``: supports pose-estimation-based models
* The support class should declare and initialize the member variable ``models``, set to a string literal or a list of model names (matching the dir names under the ``./deploy/models`` dir) that this class supports.
* The support class should override the ``preprocess`` method with the correct logic to process the input for this custom model. The output of the ``preprocess`` method should be an image in the form of a numpy array with dimensions ``[HxWxC]``, where
  * ``H`` - image height, or pixel rows
  * ``W`` - image width, or pixel columns
  * ``C`` - color channels, usually 3 in the BGR color order
* The support class should override the ``postprocess`` method with the correct logic to process the output for this custom model. The output of the ``postprocess`` method should be a list of detections, with each detection having the format ``[image_id, label, conf, x_min, y_min, x_max, y_max]``, where
  * ``image_id`` - ID of the image in the batch; since the batch size defaults to 1, this is always 0
  * ``label`` - predicted class ID (for example: 0 - person)
  * ``conf`` - confidence of the predicted class
  * ``(x_min, y_min)`` - coordinates of the top-left bounding box corner
  * ``(x_max, y_max)`` - coordinates of the bottom-right bounding box corner
* The support class should be placed under the ``./deploy/models/derived`` dir.
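Putting the rules above together, here is a slightly fuller sketch of a support class. The inline ``Detector`` stub (so the sketch runs standalone), the ``detection_out`` output key, and the 0.5 confidence threshold are assumptions for illustration; a real file would import ``Detector`` from ``nimble.models.Detector`` and use the model's actual output blob name.

```python
import numpy as np

# Stand-in for nimble.models.Detector.Detector so this sketch runs on
# its own; in a real support file, import Detector from the framework.
class Detector:
    pass

class mobilenet_ssd_based_custom_model(Detector):
    inference_type = "detection"
    models = ["mobilenet_ssd_based_custom_model"]

    @staticmethod
    def preprocess(image):
        # Return an HxWxC BGR numpy array; resize or normalize here if
        # the model expects a fixed input size.
        return image

    @staticmethod
    def postprocess(data, params):
        # SSD-style detection output is typically shaped [1, 1, N, 7];
        # flatten it into a list of detections in the required format
        # [image_id, label, conf, x_min, y_min, x_max, y_max], keeping
        # only detections above an (assumed) 0.5 confidence threshold.
        detections = np.asarray(data["detection_out"]).reshape(-1, 7)
        return [d.tolist() for d in detections if d[2] > 0.5]
```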
  4. Follow the instructions in the pipeline stream element: inference section, and refer to the sample configurations, to set up a configuration invoking the newly added model.
  5. Update the ./deploy/config.yaml file with the new configuration.
  6. Follow the instructions to bring up the nimble container.
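As a purely illustrative sketch (every key below is hypothetical; consult the sample configurations for the actual schema), the configuration might reference the new model along these lines:

```yaml
# Hypothetical fragment: the real keys come from the sample configurations.
inference:
  device: CPU
  model: mobilenet_ssd_based_custom_model  # dir name under ./deploy/models/CPU
  precision: FP32
```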