deepstream-test5-app + Apache Kafka + Node-RED

deepstream-test5-app

My environment

  • Ubuntu 20.04.3 LTS (amd64)
  • GeForce GTX 1070 Ti
    • Driver Version: 510.47.03
    • CUDA Version: 11.6
  • Docker version 20.10.12, build e91ed57
  • docker-compose version 1.29.2, build 5becea4c
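
The nvds service in the compose file below reserves one GPU, so besides the driver listed above the host also needs the NVIDIA Container Toolkit. As a quick sanity check (a sketch, not part of the original steps), the same DeepStream image used later can be asked to run nvidia-smi:

    $ docker run --rm --gpus all nvcr.io/nvidia/deepstream:6.0-devel nvidia-smi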

Running under Docker

  1. Create the deepstream-test5-c-kafka-nodered directory

    $ cd /path/to/anywhere
    $ mkdir deepstream-test5-c-kafka-nodered
  2. Download the project files to the deepstream-test5-c-kafka-nodered directory (docker-compose.yml and test5_config_file_src_infer_kafka_nodered.txt, both listed at the end of this page)

  3. Start the docker containers

    $ cd /path/to/deepstream-test5-c-kafka-nodered
    $ mkdir -p node-red/data
    $ docker-compose up -d
    Creating network "deepstream-test5-c-kafka-nodered_backend" with the default driver
    Creating network "deepstream-test5-c-kafka-nodered_frontend" with the default driver
    Creating zookeeper ... done
    Creating broker    ... done
    Creating nvds      ... done
    Creating node-red  ... done
    $ docker-compose ps
    Name                 Command                       State                            Ports                  
    -------------------------------------------------------------------------------------------------------------
    broker      /etc/confluent/docker/run        Up                      0.0.0.0:9092->9092/tcp,:::9092->9092/tcp
    node-red    npm --no-update-notifier - ...   Up (health: starting)   0.0.0.0:1880->1880/tcp,:::1880->1880/tcp
    nvds        bash                             Up                                                              
    zookeeper   /etc/confluent/docker/run        Up
  4. Create a topic to store deepstream-test5-app events

    $ docker exec -it broker /bin/bash
    [appuser@broker ~]$ kafka-topics --create --topic nvds-test5 --replication-factor 1 --partitions 1 --bootstrap-server broker:29092
    Created topic nvds-test5.
    [appuser@broker ~]$ kafka-topics --describe --topic nvds-test5 --bootstrap-server broker:29092
    Topic: nvds-test5       TopicId: n4js4K6dQT-YcapbIbrIlQ PartitionCount: 1       ReplicationFactor: 1    Configs: 
            Topic: nvds-test5       Partition: 0    Leader: 1       Replicas: 1     Isr: 1
    [appuser@broker ~]$ kafka-topics --list --bootstrap-server broker:29092
    nvds-test5
  5. Set up Node-RED and create a Node-RED flow. See sections 2-2 and 2-3 in the linked article (Japanese). Use the following Kafka connection settings:

    Item               Value
    -----------------  ----------
    Kafka Broker Name  broker
    Port               29092
    Topic Name         nvds-test5
  6. Run deepstream-test5-app.

    $ xhost +
    $ docker exec -it nvds /bin/bash
    root@nvds:/opt/nvidia/deepstream/deepstream-6.0# cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs
    root@nvds:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs# cp /data/test5_config_file_src_infer_kafka_nodered.txt .
    root@nvds:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs# deepstream-test5-app -c test5_config_file_src_infer_kafka_nodered.txt
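
If everything is wired up, deepstream-test5-app publishes detection events to the nvds-test5 topic while it plays the sample stream. As an optional check (not part of the original steps), the console consumer bundled in the Confluent image can tail the topic from inside the broker container:

    $ docker exec -it broker /bin/bash
    [appuser@broker ~]$ kafka-console-consumer --topic nvds-test5 --bootstrap-server broker:29092 --from-beginning
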
docker-compose.yml

version: "3.9"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    hostname: zookeeper
    container_name: zookeeper
    networks:
      - backend
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:7.0.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - frontend
      - backend
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
  node-red:
    image: nodered/node-red:2.2.0-14
    hostname: node-red
    container_name: node-red
    depends_on:
      - nvds
    ports:
      - "1880:1880"
    networks:
      - frontend
      - backend
    environment:
      - TZ=Asia/Tokyo
    volumes:
      - ${PWD}/node-red/data:/data
    tty: true
  nvds:
    image: nvcr.io/nvidia/deepstream:6.0-devel
    hostname: nvds
    container_name: nvds
    depends_on:
      - broker
    networks:
      - frontend
      - backend
    environment:
      - TZ=Asia/Tokyo
      - DISPLAY=${DISPLAY}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - /var/run/docker.sock:/var/run/docker.sock
      - ${PWD}:/data
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [ gpu ]
    tty: true
networks:
  frontend:
  backend:
    internal: true
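
The broker advertises two listeners: PLAINTEXT://broker:29092 for containers on the compose networks (this is the address used by deepstream-test5-app and by the Node-RED flow) and PLAINTEXT_HOST://localhost:9092 for clients on the host through the published port. For example, assuming the Apache Kafka command-line tools are installed on the host (they are not part of this setup), the same topic can also be read from outside the containers:

    # Assumes a local installation of the Apache Kafka CLI tools (not included in this project).
    $ kafka-console-consumer.sh --topic nvds-test5 --bootstrap-server localhost:9092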

test5_config_file_src_infer_kafka_nodered.txt

################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264
num-sources=1
gpu-id=0
cudadec-memtype=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=broker;29092;nvds-test5
topic=nvds-test5
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt
[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=deepstream-test5_output.mp4
source-id=0
# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>
# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
[primary-gie]
enable=1
gpu-id=0
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/
[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1
[tests]
file-loop=0
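
In [sink1], msg-broker-conn-str uses the host;port;topic form expected by libnvds_kafka_proto.so and points at the broker container on its internal listener port 29092, with the topic also given separately via topic=. If you need to retarget it at another broker, a hypothetical one-liner like the following (my-broker is a placeholder, not part of this setup) can be run inside the nvds container after copying the config in step 6:

    # Hypothetical: rewrite the sink1 connection string to point at a different broker.
    $ sed -i 's/^msg-broker-conn-str=.*/msg-broker-conn-str=my-broker;9092;nvds-test5/' test5_config_file_src_infer_kafka_nodered.txt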