Facial Recognition Example
This demo is intended as an example of deploying facial recognition using the Nimble SDK.
You can find it under tools/facial_recognition.
First you will need to make sure that your license manager is configured, and that you have the latest docker images:
docker-compose pull
Next, build the loader docker image:
docker-compose build
To run the example:
docker-compose up
Since this is a headless example you will need to copy the resulting file from the Nimble container.
First, wait for the video to be processed; when you see this output in the Nimble logs, the file has finished processing:
nimble_1 | [XXXX-XX-XX XX:XX:XX,XXX :: INFO : SinkWorker ] FileSink Ended -- Released
Now copy the video file to your local directory:
docker cp $(docker ps -aqf "name=facial_recognition_nimble"):/home/nimble/tmp-0.mp4 .
Breakdown
The demo above is a pre-canned example based on an already existing embedding to identity mapping (data/embeddings.odgt) and a sample video (data/sample.mp4).
This breakdown aims to describe the steps necessary to:
- Extract an embedding dataset from a video source
- Create the embedding to identity mapping file: embeddings.odgt
- Load the embeddings.odgt into the Redis in-memory database for use by Nimble.
Extracting the Embedding Dataset
note
If you already have an embedding to identity mapping but in a different format you can skip to here.
This section shows how Nimble can be used to create an embedding dataset.
Nimble needs to be configured with the embedding sink along with the embedding creation pipeline; the configuration YAML (extract.yaml) below is an example of this:
- pipeline:
    - gpu:yolov5s-face:a:0.3:0.3
    - demux
    - face-alignment
    - gpu:arcface
    - mux
  sinks:
    - address: embeddings
      filter: []
      type: embedding
  sources:
    - address: videos/sample.mp4
      rate: 30
      type: file
First there is the pipeline: it localises the faces, performs alignment based on the landmarks, and creates the embeddings.
The embedding sink creates a folder embeddings which contains the embedding dataset.
Finally, the desired video sources are described.
For a detailed description on the individual elements please refer to the element descriptions in the Nimble SDK documentation.
While the above pipeline is running, you can extract the embedding dataset in a separate terminal:
docker cp $(docker ps -aqf "name=facial_recognition_nimble"):/home/nimble/embeddings .
This will result in a directory structure similar to below:
embeddings/
└── 0
├── 0
│ ├── 0.emb
│ ├── 1.emb
│ ├── 2.emb
│ ├── 3.emb
│ ├── 4.emb
│ └── img.jpg
├── 1
│ ├── 0.emb
│ ├── 1.emb
│ ├── 2.emb
│ ├── 3.emb
│ ├── 4.emb
│ └── img.jpg
...
The first directory level is the channel ID; in this example it is 0 because we only have a single source.
The second directory level is for the individual frames, ordered from 0 to N where N is the number of frames captured so far.
Finally, each frame folder contains the frame itself as a .jpg as well as a .emb file for each detection.
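If you want to inspect the extracted dataset programmatically, a minimal sketch such as the one below can walk the directory tree. It assumes each .emb file holds a single flat float32 vector; this is an illustrative guess, not a documented guarantee, so adjust the parsing to match the actual on-disk format.
import os
import numpy as np

def load_embedding_dataset(root="embeddings"):
    # Walk embeddings/<channel>/<frame>/ and yield (channel, frame, face, vector).
    for channel in sorted(os.listdir(root)):
        channel_dir = os.path.join(root, channel)
        if not os.path.isdir(channel_dir):
            continue
        for frame in sorted(os.listdir(channel_dir), key=int):
            frame_dir = os.path.join(channel_dir, frame)
            for name in sorted(os.listdir(frame_dir)):
                if name.endswith(".emb"):
                    # Assumption: each .emb holds a single flat float32 vector.
                    vector = np.fromfile(os.path.join(frame_dir, name), dtype=np.float32)
                    yield channel, frame, os.path.splitext(name)[0], vector

for channel, frame, face, vec in load_embedding_dataset():
    print(channel, frame, face, vec.shape)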
Optionally the above pipeline can be modified to include the identification element:
- pipeline:
    - gpu:yolov5s-face:a:0.3:0.3
    - demux
    - face-alignment
    - gpu:arcface
    - mux
    - arcface-identifier:redis
...
This will result in additional candidate identification files (.cid) being created in the frame folders; these are not necessary, but they aid the embedding to identity mapping process if they are available.
Creating the Embedding to Identity Mapping
The goal of this section is to create the embeddings.odgt.
A .odgt is a file format that contains a valid JSON string on each line.
Each embedding within the embeddings.odgt should be represented as the following JSON:
{"id": <IDENTIFICATION>, "embedding": <EMBEDDING_VECTOR>}
In this case <IDENTIFICATION> should be a UTF-8 string and <EMBEDDING_VECTOR> a list of floating point values.
If an <IDENTIFICATION> has multiple <EMBEDDING_VECTOR>s then each should be listed as a separate line item; nested lists of <EMBEDDING_VECTOR>s are currently not supported by the loader utility.
Therefore, an embeddings.odgt should look like this:
{"id": "a", "embedding": [0.1, ..., 0.4]}
{"id": "b", "embedding": [0.2, ..., 0.3]}
{"id": "c", "embedding": [0.3, ..., 0.2]}
{"id": "d", "embedding": [0.4, ..., 0.1]}
{"id": "a", "embedding": [0.5, ..., 0.4]}
note
If you are bringing in your own dataset and creating the embeddings.odgt yourself, you can now skip to here.
As part of this demo, a script utils/assign_embeddings.py has been provided to turn the embeddings folder created previously into an embeddings.odgt.
First, make sure the dependencies are installed:
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r utils/requirements.txt
Now run:
python utils/assign_embeddings.py --input_dir embeddings --output_file embeddings.odgt
The script pops up a window showing the first frame; click back on the terminal and start filling it out. The terminal should look similar to this:
No image found in embeddings, skipping...
No image found in embeddings\0, skipping...
Processing embeddings\0\0...
Name for face 0 (default: unknown): a
Candidates for face 1 found: a (0.66)
Name for face 1 (default: a): b
Candidates for face 2 found: a (0.60) b (0.81)
Name for face 2 (default: a): c
Candidates for face 3 found: b (0.63) c (0.70) a (0.73)
Name for face 3 (default: b): d
Candidates for face 4 found: b (0.43) c (0.62) a (0.66)
Name for face 4 (default: b):
The script goes through each channel and frame asking you to assign an identity to each face within the frame.
The faces are labeled 0, 1, 2, ... for reference with the command line tool.
As each face is assigned, the script will start to offer candidates from similar faces that you have already labeled.
The candidate with the smallest distance (the number next to the label) is offered as the default; to accept the default just press Enter with no text.
Additionally, any .cid files are also considered as candidates by the tool and are presented in the same fashion.
The script updates the embeddings.odgt after each input, so you can stop labeling at any time and use the embeddings.odgt.
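For illustration, the candidate ranking can be reproduced with a few lines of NumPy. The sketch below uses cosine distance as the similarity measure; that metric is an assumption made for the example, not a statement about how assign_embeddings.py is implemented.
import numpy as np

def rank_candidates(query, labelled):
    # Rank labelled embeddings by cosine distance to the query; smallest first.
    def cosine_distance(a, b):
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = [(identity, cosine_distance(query, emb)) for identity, emb in labelled.items()]
    return sorted(scored, key=lambda pair: pair[1])

# The first entry is what the tool would offer as the default.
labelled = {"a": np.array([0.1, 0.9]), "b": np.array([0.8, 0.2])}
print(rank_candidates(np.array([0.2, 0.95]), labelled))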
Loading the embeddings.odgt into Redis
This final section focuses on loading the embeddings.odgt into the Redis database.
When the demo is brought up via docker-compose, a lightweight container called loader is started and populates the Redis database.
...
  loader:
    environment:
      - REDIS_DB_HOST=redis
      - REDIS_DB_PORT=6379
      - NEARPY_PROJECTIONS=8
    build: utils/.
    image: embeddings_loader
    volumes:
      - ./data/embeddings.odgt:/embeddings.odgt
    depends_on:
      - "redis"
...
To use your own embeddings.odgt simply replace data/embeddings.odgt or update the volumes section.
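Conceptually, the loader reads each line of the .odgt and stores the vector in a Redis-backed approximate nearest neighbour index. The sketch below shows how this could be done with NearPy and redis-py; it is an illustration of the idea rather than the actual loader implementation, and the embedding dimension and hash name are assumptions.
import json
import redis
from nearpy import Engine
from nearpy.hashes import RandomBinaryProjections
from nearpy.storage import RedisStorage

# Assumed values; the real loader reads REDIS_DB_HOST, REDIS_DB_PORT and
# NEARPY_PROJECTIONS from its environment.
DIMENSION = 512      # assumed embedding size for the arcface element
PROJECTIONS = 8

storage = RedisStorage(redis.Redis(host="redis", port=6379))
engine = Engine(DIMENSION,
                lshashes=[RandomBinaryProjections("rbp", PROJECTIONS)],
                storage=storage)

with open("embeddings.odgt") as f:
    for line in f:
        record = json.loads(line)
        engine.store_vector(record["embedding"], data=record["id"])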
tip
Before you run the system it is strongly recommended that you optimise the number of projections needed for your embeddings.odgt.
This can be done by running utils/optimise_database_projections.py:
python utils/optimise_database_projections.py --data embeddings.odgt
This will report the optimal number of projections for your embedding dataset; for example, running the script for the demo embeddings.odgt:
$ python utils/optimise_database_projections.py --data data/embeddings.odgt
Best Projection: 8, Average Candidates: 10.720472440944881
With this you can update the docker-compose.yaml with the new number:
...
  loader:
    environment:
      - REDIS_DB_HOST=redis
      - REDIS_DB_PORT=6379
      - NEARPY_PROJECTIONS=<BEST_PROJECTION>
...
An incorrect number of projections can result in poor performance and accuracy for the approximate nearest neighbour (ANN) algorithm. Please refer to the element descriptions and NearPy for more details on ANN.
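As a rough intuition for what the projection count controls: fewer projections put more vectors into each hash bucket (more candidates to score, so slower lookups but a lower chance of missing a match), while more projections shrink the buckets. The snippet below is a hedged sketch that reuses the engine from the loader example above to show what a query returns; the query vector is a placeholder.
# Continuing from the engine built above: query the index with a new embedding.
# neighbours() returns (vector, data, distance) tuples for the candidate bucket.
query = [0.1] * DIMENSION          # placeholder query vector
for vector, identity, distance in engine.neighbours(query):
    print(identity, distance)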