- Overview
- Getting Started
- AI and Analytics
- Video Settings
- Advanced Settings
- Uninstall, Factory Reset and Troubleshooting
- Contact and Additional Information
- Demo License and Terms
- Additional Resources
Brinq Smart Camera Software provides everything needed to deploy advanced, real-time video analysis, powered by the Synaptics SL1680 processor. Ideal for a wide range of applications—including smart safety/security cameras, smart home monitoring, and industrial inspection systems—this software delivers cutting-edge functionality without the need for additional cloud connectivity.
The software transforms the SL1680 into a comprehensive smart camera system with support for live video streaming, AI-driven analytics, video storage, playback, real-time event detection, and statistics. The solution is designed for easy deployment and management, with a user-friendly dashboard that simplifies configuration and provides access to powerful analysis tools. The software includes:
- Flexible Video Sources: Support for USB cameras, MIPI cameras, RTSP streams, and video files.
- On-Board AI Analytics: Includes person tracking, re-identification, boundary crossing, occupancy, loitering, and motionless behavior detection.
- Intuitive Dashboard: Easily configure and manage your camera system with a comprehensive dashboard, featuring tools for real-time monitoring, analytics setup, and video playback.
- Test Videos: Includes built-in demo videos for easy evaluation and system setup.
- Customization: Tailor the software to your specific needs with Arcturus support and custom solutions.
The Brinq Smart Camera Software leverages the powerful Synaptics SL1680 processor to bring AI-driven insights directly to your camera system—without cloud reliance—ensuring faster response times, improved privacy, and more efficient video processing.
A video demo from Embedded World 2025 illustrates key features and operation.
- Live view
- Playback view
- Statistics view
- Edge AI analytics (tracking/reidentification, boundary crossing, occupancy, loitering and motionless behavior detection).
- Configurable Regions of Interest (RoI)
- Configurable analytics schedules
- Event timeline
- Searchable event database
- Video event clips
- Configurable video sources
- Circular local video storage
- Standalone operation / management (local USB/HDMI)
- Web based operation / management (PC/network connection)
- Comprehensive dashboard
The demo consists of:
- Hardware – Synaptics SL1680 Astra platform running Arcturus software
- Analytics – Arcturus machine vision software running edge AI models, algorithms and vision processing.
- Smart Camera Software – Arcturus full-stack system with front-end visualization, user experience, configuration; back-end databases, video storage and event handling.
NOTE: A method to connect to the SL1680 terminal is required to load the demo. Once loaded, the demo can be accessed by directly connecting a USB keyboard/mouse and HDMI monitor, or over a network connection using a PC with Chrome web browser.
NOTE: We highly recommend this demo be run using wired Ethernet connection between SL1680 Astra development kit and PC. Using Wi-Fi may cause synchronization, latency or jitter issues.
- Your device must be running astra-synaptics OS v2.0.0 OOBE which includes docker and docker-compose support. The image can be found here.
- If you are running an older version (e.g. v1.7.0), the instructions for flashing the image onto your board can be found here. Instructions may vary depending on the OS version you are currently running; make sure you refer to the instructions that correspond to your Linux OS release version. You can identify the OS version by using the Linux console on the device and entering the following command:
cat /etc/astra_version
- For versions other than v1.7.0 refer to the Synaptics ASTRA SDK -> Astra Yocto Linux User Guide -> "Updating the Firmware" section of the version you are currently running. Documentation for each release is available by selecting the version from the dropdown box in the left column footer of the Synaptics ASTRA SDK website.
Using the Linux console on the device, enter the following commands.
curl https://gist.githubusercontent.com/johnlewczuk/f369ad2dbe5d229dbf025beded0a7b30/raw/install_brinq_sl1680-any.sh -o install_brinq_sl1680.sh
sh install_brinq_sl1680.sh 2.0-alpha
NOTE: The device will require internet access.
- Your device must be running the Grinn AstraSOM-1680 base OS version with docker and docker-compose support. The recommended image can be found here.
- If you are running an older version (e.g. v1.4.0), the instructions for flashing the image onto your board can be found here. Instructions may vary depending on the OS version you are currently running; make sure you refer to the instructions that correspond to your Linux OS release version. You can identify the OS version by using the Linux console on the device and entering the following command:
cat /etc/astra_version
- For versions other than v1.4.0 use the link https://synaptics-astra.github.io/doc/v/VERSION/linux/index.html#updating-the-firmware where VERSION is the OS version you are currently running.
Using the Linux console on the device, enter the following commands.
curl -fsSL https://gist.githubusercontent.com/johnlewczuk/d3ff5b65fb646c0c1b0e035114c384ac/raw/ -o install_brinq_sl1680_grinn.sh
sh install_brinq_sl1680_grinn.sh
NOTE: The device will require internet access.
Once you have installed the software by following the installation instructions, the device will boot and complete the demo setup. You can then access the Brinq Smart Camera Software Demo dashboard.
The Smart Camera software dashboard can be accessed in stand-alone mode using a local USB keyboard / mouse and HDMI monitor connected directly to the SL1680 -or- in streaming mode, where a remote PC with a Chrome browser connects to the .local or IP address using a local network connection. Both modes are supported concurrently.
- Stand-alone mode requires the connection of a USB keyboard/mouse and HDMI monitor directly to the SL1680 hardware. It provides a simpler method of demonstration by minimizing the dependency on an external PC. However, it uses significantly more resources, as the full visualization, Chromium browser, and WebRTC player/video are provided by the SL1680 in addition to all system and ML processing.
- Streaming mode allows a remote PC with a Chrome browser to connect to the device over a local network by using the SL1680’s IP address or .local address. Streaming mode tends to provide a better user experience, with a snappier interface and smoother video streaming than stand-alone mode, as the SL1680 is not required to run a local Chromium browser. Performance will vary depending on the external network connection.

The web dashboard can be accessed using the device IP address, e.g. http://ip-address-of-device, or the device's .local address, e.g. sl1680.local
NOTE: For the purpose of this demo the connection is insecure; ensure the browser connects using http.
NOTE: Logging into the web dashboard for the first time will take several seconds.
Live View is used for real-time monitoring and supervision tasks. The multichannel live view provides visualization of all four video feeds with overlays for detection (bounding boxes, class and confidence) along with tracking ID (as configured). Live View can also overlay special regions of interests and other data (such as count/occupancy) as configured in the visualization settings.
The visualization settings icon located in the lower left corner allows for the selection of different overlays and elements to be presented in Live View.
Analytics are used to detect events in the live video stream. The left column in the Live View displays the list of analytics currently running. New events are displayed as a red badge on each of the analytics; this badge counts the number of new events that have occurred since the last time events were acknowledged. Clicking on any of the analytics displays a drop-down list of its events, and clicking an event opens the Playback View and jumps to the event in the video timeline. Events are stored in a relational database, aligned with video and presented for analysis in the event timeline or event viewer in the Playback View. Clicking on the event badge will open Playback View and present the annotated timeline, recorded video, controls and other tools. For further system integration, an API can be used to access real-time event notifications.
The analytic substream provides a method to visualize an analytic event in real-time. It does this by digitally zooming and cropping the video frame to focus attention on the specific event as it's detected. The output is then presented as a separate video substream on a dedicated port or can be enabled for viewing in the Live mode panel as a picture-in-picture feature. The substream is triggered when the analytic has identified an event and ends when the event concludes.
To use the Analytic Substream, use the Settings menu, General -> Substream configuration option to select the analytic the substream will use (by default it will use Loitering).
To view the Analytic Substream as a picture-in-picture (PiP) on the live view, select it from the visualization menu. The PiP presentation can be scaled or dragged around the live video window.
To view the analytics substream as a dedicated substream, use my-device.local:8889/ss, for example:
http://sl1680.local:8889/ss
Playback View is used to review events, analyze video footage and perform compliance tasks. Playback View is accessed by clicking on the PLAYBACK tab or clicking on an event.
Playback View presents a video event timeline with annotations generated by the AI analytics. Events can be single / atomic or start and stop in the case of an event that lasts for a period of time. Clicking on a specific event in the timeline makes it possible to review the video footage associated with the event.
Video playback controls are provided including playback speed selection. Video can be scrubbed by dragging it left and right of the playback marker on the timeline. Timeline zooming is possible using the mouse scroll wheel function. A filter is provided to visualize specific analytics and a bookmark tool can be used to set location references. Right-clicking an event on the timeline will open the event clips dialog box to save an event clip.
The Event and Clips drawer provides a list view of events with search / filter options. The drawer can be opened by clicking on the drawer open arrow in the top-right of the playback view.
ProTip: Click on the Right Arrow above the incident notifications to expand the Incident Notification view and display a Picture-in-Picture (PiP) live view.
Right-clicking an event provides options to save it as an event clip. Event clips can be reviewed by opening the Event and Clips drawer, selecting the clips tab and managing, playing or exporting the video clip.
The database of events can be filtered or searched by opening the Events and Clips drawer in the top right of the playback view.
Video is stored locally using on-board storage and compressed as an H.264 video stream. The amount of stored video will depend on the storage available. A circular storage buffer is used to manage storage by overwriting the oldest data once storage is full.
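The overwrite-oldest policy described above can be sketched in a few lines. This is an illustrative Python sketch only, not the Brinq implementation; the `CircularVideoStore` class and the per-segment layout are assumptions made for the example.

```python
from collections import deque
import os

class CircularVideoStore:
    """Hypothetical circular recording buffer: once the storage budget
    is exceeded, the oldest video segments are deleted first."""

    def __init__(self, directory, max_bytes):
        self.directory = directory
        self.max_bytes = max_bytes
        self.segments = deque()   # (path, size) in arrival order
        self.used = 0

    def add_segment(self, path, size):
        self.segments.append((path, size))
        self.used += size
        # Overwrite (delete) the oldest segments once storage is full
        while self.used > self.max_bytes and len(self.segments) > 1:
            old_path, old_size = self.segments.popleft()
            self.used -= old_size
            if os.path.exists(old_path):
                os.remove(old_path)
```

The `deque` keeps segments in arrival order, so eviction is O(1) per segment and the newest recordings always survive.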
Video Event Clips make it possible to save, review and export video events for future reference or compliance. To add a clip, right-click on the event in the Playback View timeline to open the Clips dialog box. Event clips feature a configurable duration to specify the video pre-roll (video prior to the event) and post-roll (video duration after the event occurrence), up to a maximum of 10 seconds.
Video clips can be viewed by opening the Events and Clips drawer on the top right of the playback view.

The Statistics view provides a method to visualize events and system performance over time. Time/ date range and refresh rate can be configured by using the Grafana tool bar settings in the top-right corner. Statistics data is provided for both device performance and analytics events.
Configuration / Status is used to enable analytics, setup features and view the status of software components. It can be accessed by using the gear icon on the top right corner of the dashboard.
- General settings allow for the configuration of device specific parameters such as Device Name and Location. It also provides status information about the operation of the device and a method to configure the position of the bounding box labels in the visualization.
- Video settings allow for the creation and selection of videos sources. Test videos and examples are provided.
- Analytics configuration supports establishing Regions of Interest (RoI) by drawing polygons in areas where detection will occur or be masked, creating schedule-based automations and any feature configuration specific to the analytic.
The software includes AI enabled vision analytics for object detection, tracking/reidentification, boundary crossing, occupancy, loitering and motionless behavior detection. These analytics provide powerful tools to help identify events that occur in live video.
An additional getting started tutorial and a video demo have been created to illustrate features and operation using the test source video; these should be reviewed first.
This is demo software and is limited in its features and capabilities. Additional analytics for people, vehicles, packages and inspection are available from Arcturus along with development services for specialized analytics and support. Refer to the contact section to reach out to us.
The demo uses a light-weight Yolov8 model that performs efficiently on the SL1680 NPU. The model is trained on the open source COCO dataset limited to people class only to maximize accuracy. The model is designed to achieve real-time performance with accuracy and generalization sufficient for most applications. Additional models are available from Arcturus including a proprietary people detection model trained on a custom curated dataset sourced from multiple validated public sources. This model improves accuracy and generally performs better with no additional performance sacrifice - contact Arcturus for additional information.
Refer also to the Advanced section for additional settings.
Core to many of the analytics is a tracking and reidentification capability. This capability acts as a fundamental primitive that underpins many analytics and complements detection by assigning a unique identity to each person as they enter the field of view. As a person moves around the field of view a motion prediction tracking model is used to reassign the same identity to each bounding box, making it possible to reidentify the same person frame-over-frame. Motion prediction is light-weight and a highly effective method of tracking and reidentification; however, it is dependent on continuous detections. If detections are lost (beyond a tolerance period) either due to occlusion, inference or the person exiting the field of view, a new ID will be assigned to the person when they re-appear.
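The frame-over-frame ID reassignment and the tolerance period can be illustrated with a deliberately simplified sketch. The demo uses a motion-prediction tracking model; the version below substitutes a plain IoU-overlap association, and the `SimpleTracker` class and its parameters are hypothetical, but it shows how a matched detection keeps its identity, and how a lost track ages out so a reappearing person receives a new ID.

```python
class SimpleTracker:
    """Illustrative IoU-based tracker (not the demo's motion-prediction
    model). Matched detections keep their track ID frame-over-frame;
    unmatched tracks are dropped after a tolerance period of misses."""

    def __init__(self, iou_threshold=0.3, max_misses=15):
        self.next_id = 1
        self.tracks = {}              # track_id -> (bbox, consecutive misses)
        self.iou_threshold = iou_threshold
        self.max_misses = max_misses  # tolerance period, in frames

    @staticmethod
    def iou(a, b):
        # Intersection-over-union of two (x1, y1, x2, y2) boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        if inter == 0:
            return 0.0
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    def update(self, detections):
        """Greedily associate detections to tracks; returns one ID per box."""
        ids = []
        remaining = dict(self.tracks)
        for det in detections:
            best_tid, best_iou = None, self.iou_threshold
            for tid, (box, _misses) in remaining.items():
                overlap = self.iou(det, box)
                if overlap > best_iou:
                    best_tid, best_iou = tid, overlap
            if best_tid is None:
                best_tid = self.next_id   # new person: new identity assigned
                self.next_id += 1
            else:
                remaining.pop(best_tid)   # matched: same identity reassigned
            self.tracks[best_tid] = (det, 0)
            ids.append(best_tid)
        for tid, (box, misses) in remaining.items():
            # Unmatched tracks age out once misses exceed the tolerance
            if misses + 1 > self.max_misses:
                self.tracks.pop(tid)
            else:
                self.tracks[tid] = (box, misses + 1)
        return ids
```

As described above, once a track exceeds `max_misses` consecutive lost frames it is discarded, which is why the same person re-entering the scene later receives a fresh ID under pure motion-based tracking.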
Additional tracking and reidentification methods are available from Arcturus which include the use of an additional visual appearance metric to reassociate identities based on appearance, should detections be lost. This makes it possible to assign the same identity to a person who has left and re-entered the field of view or who may have become occluded due to a person or object passing in front of them. This form of re-id is more computationally intensive, but provides improved tolerance to dynamic, busy scenes and the basis for other multicamera and long-term tracking methods.
Tracking visualization is presented as part of the bounding box notation, shown as the unique track ID and the (% confidence) of the object detection model.
Arcturus is a global leader in tracking and reidentification technology, as ranked by the Multiple Object Tracking Benchmark Challenge.
The Boundary Crossing analytic makes it possible to detect when a person has entered or exited a particular region or zone -much like the behavior of a trip wire. This analytic has many general purpose uses including:
- Securing areas, where people should not be
- Monitoring entrances and exits
- Traffic pattern analysis, e.g. determining people traffic per minute
- The boundary crossing analytic relies on the tracking and reidentification feature to assign each person in the field of view with a unique identity number and track them frame-over-frame.
- Boundary crossing uses foot-fall logic to determine an event trigger. This implementation is consistent with a "trip wire" use case example. Specifically, a boundary crossing event is triggered by the center of the base of the bounding box passing over the zone boundary threshold. Debounce on enter/leave events requires 15 successful frames of either entering or leaving before triggering the event and sending a notification.
- The analytic relies on configuring zones and schedules to define operation.
- Boundary crossing also feeds data to the Statistics view including the People Traffic per Minute
NOTE: In some cases where a person is partially occluded and close up, they may visually appear to be in a zone while the center point of the base of their bounding box still resides outside the zone, and thus no detection is tripped. Depending on the use case, the methodology can be changed to IoU or centroid. Contact Arcturus for support.
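The foot-fall trigger and the 15-frame debounce described above can be sketched as follows. This is an illustrative Python sketch, not the shipped code; the ray-casting containment test and the `BoundaryCrossing` class are assumptions made for the example.

```python
DEBOUNCE_FRAMES = 15  # consecutive frames required before an event fires


def point_in_polygon(pt, poly):
    """Ray-casting containment test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside


class BoundaryCrossing:
    def __init__(self, zone):
        self.zone = zone   # polygon vertices for the configured zone
        self.state = {}    # track_id -> (inside?, debounce streak)

    def update(self, track_id, bbox):
        """bbox is (x1, y1, x2, y2). The foot-fall point is the centre of
        the base of the box; returns 'enter', 'leave' or None."""
        foot = ((bbox[0] + bbox[2]) / 2, bbox[3])
        now_inside = point_in_polygon(foot, self.zone)
        prev_inside, streak = self.state.get(track_id, (now_inside, 0))
        if now_inside != prev_inside:
            streak += 1   # candidate crossing: count consecutive frames
            if streak >= DEBOUNCE_FRAMES:
                self.state[track_id] = (now_inside, 0)
                return "enter" if now_inside else "leave"
            self.state[track_id] = (prev_inside, streak)
        else:
            self.state[track_id] = (prev_inside, 0)   # streak broken
        return None
```

Note how a single noisy frame resets the streak, which is the point of the debounce: only 15 consecutive frames on the other side of the boundary trigger a notification.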
The Boundary Crossing analytic relies on configuring zones to define operation. To configure the boundary crossing analytic, open the settings panel by clicking on the gear icon in the top right corner of the web dashboard. Then navigate to the BOUNDARY CROSSING tab.

To configure a zone click on the ADD ZONE button to open the zone configuration panel.

To create a zone:
- Name the zone
- Select the color for the zone from the color palette or using a hex value
- Click on the ADD POLYGON icon to add a polygon to the image field
- Move the polygon points to the desired location
- Select desired event notifications (zone-enter or zone-leave)
- Click on the ADD ITEM button to create zone
ProTip:
- To edit a polygon, click on it
- To move a selected polygon, left click and drag
- To delete a selected polygon, press backspace
- To move a point of the selected polygon, left click and drag the point
- To add a point to a selected polygon, right click the desired location on the polygon frame
- To delete a point from the polygon, right click on the point
- To invert the selected region, check the Invert Regions check box
A tool is available to build automation schedules that define when incident notifications are sent to the dashboard.
To create a schedule:
- Turn on scheduling using the toggle
- Click on the calendar schedule to open the clock / calendar selector
- Provide a name for the schedule
- Click on the Start Time to define the schedule start time
- Click on the End Time to define the schedule end time
- Click on the days of the week to add them to the selected days
- Click on ADD ITEM to create the schedule
The following event notifications are provided:
Zone-Enter Event
- Is triggered when an object appears in a zone.
- Is triggered when the center point of the bottom of an object bounding box passes across the boundary of a configured zone.
Zone-Leave Event
- Is triggered when an object no longer appears in a zone
- Is triggered when the center point of the bottom of an object bounding box passes across the boundary of a configured zone.
The STATISTICS view contains several charts to help visualize event activity. Time periods of charts can be adjusted by using the time range selector in the top right corner of the statistics view. The following statistics relate to the Boundary Crossing analytic:
- Zone Incidents
- People Traffic Per Minute
- Incident Averages
The occupancy analytic uses detection to determine the number of people in a specific zone. In addition it provides events based on min/max occupancy capacity characteristics. This is useful to determine:
- Sudden appearance of a person (e.g. intrusion)
- Sudden appearance of a crowd (e.g. event or evacuation)
- Vacancy or abandonment (e.g. leaving a post or position)
- Queue depth (e.g. how many people are in a line or queue)
- Measurement of capacity (e.g. number of people in a space)
- The occupancy analytic relies on counting people in zones. It displays the output of each zone in real-time as an overlay in the top-left corner of the live video feed. This visualization appears only when occupancy is detected.
- By configuring a zone and defining the min/max occupancy characteristics, the Occupancy analytic will report events that exceed the minimum or maximum value specified.
- The analytic relies on configuring zones and schedules to define operation.
The following event notifications are provided:
Alarm when Above
- Triggers an event when occupancy exceeds the maximum capacity threshold in the configured zone.
Alarm when Below
- Triggers an event when occupancy falls below the minimum capacity threshold in the configured zone.
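A minimal sketch of the min/max alarm rules follows, assuming a rectangular zone for brevity (the demo uses polygon zones); the `OccupancyZone` class and the event strings are hypothetical names used only for illustration.

```python
class OccupancyZone:
    """Hypothetical sketch of the occupancy min/max alarm logic.
    A rectangle stands in for the demo's polygon zones."""

    def __init__(self, rect, min_occ=None, max_occ=None):
        self.rect = rect          # (x1, y1, x2, y2)
        self.min_occ = min_occ    # "Alarm when Below" threshold
        self.max_occ = max_occ    # "Alarm when Above" threshold

    def contains(self, pt):
        x, y = pt
        x1, y1, x2, y2 = self.rect
        return x1 <= x <= x2 and y1 <= y <= y2

    def update(self, foot_points):
        """foot_points: centre-of-base points of each person's box.
        Returns (count, alarm events) for this frame."""
        count = sum(1 for p in foot_points if self.contains(p))
        events = []
        if self.max_occ is not None and count > self.max_occ:
            events.append("alarm-when-above")
        if self.min_occ is not None and count < self.min_occ:
            events.append("alarm-when-below")
        return count, events
```

The per-frame count is what the live view overlays in the top-left corner of the feed; the alarms fire only when the count crosses the configured capacity bounds.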
The Occupancy analytic relies on configuring zones to define operation along with threshold values. To configure the occupancy analytic open the settings panel by clicking on the gear icon in the top right corner of the web dashboard. Then navigate to the OCCUPANCY tab.

To configure a zone click on the ADD ZONE button to open the zone configuration panel.

To create a zone:
- Name the zone
- Select the color for the zone from the color palette or using a hex value
- Click on the ADD POLYGON icon to add a polygon to the image field
- Move the polygon points to the desired location
- Select desired event notifications (Alarm when Above or Alarm when Below)
- Click on the ADD ITEM button to create zone
ProTip:
- To edit a polygon, click on it
- To move a selected polygon, left click and drag
- To delete a selected polygon, press backspace
- To move a point of the selected polygon, left click and drag the point
- To add a point to a selected polygon, right click the desired location on the polygon frame
- To delete a point from the polygon, right click on the point
- To invert the selected region, check the Invert Regions check box
A tool is available to build automation schedules that define when incident notifications are sent to the dashboard.
To create a schedule:
- Turn on scheduling using the toggle
- Click on the calendar schedule to open the clock / calendar selector
- Provide a name for the schedule
- Click on the Start Time to define the schedule start time
- Click on the End Time to define the schedule end time
- Click on the days of the week to add them to the selected days
- Click on ADD ITEM to create the schedule
The people counting feature provides a tally of the people in each zone. This feature displays the output of each zone in real-time as an overlay in the top-left corner of the live video feed. This visualization appears only when occupancy is detected.
The following event notifications are provided:
Occupancy Exceeds Event
- Is triggered when the total number of people in the field of view exceeds the threshold value
Occupancy Drops Below Event
- Is triggered when the total number of people in the field of view drops below the threshold value
The STATISTICS view contains several charts to help visualize event activity. Time periods of charts can be adjusted by using the time range selector in the top right corner of the statistics view. The following statistics relate to the Occupancy analytic:
- Incident Averages
The loitering analytic makes it possible to determine when the same person has remained in a zone for an extended period of time. This is a useful analytic for:
- Monitoring areas such as exit doors
- Monitoring areas prone to suspicious activity
The loitering analytic relies on the tracking and reidentification feature to assign each person in the field of view with a unique identity number and track them frame-over-frame. Once a person is detected, a timer is started to determine how long the person has remained in the field of view. A timer threshold value determines when the person is considered to be loitering and triggers an incident notification. The timer threshold value is configurable using the analytics settings. The analytic relies on configuring zones and schedules to define operation.
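The per-person timer described above can be sketched as follows. The `LoiteringDetector` class and its method names are hypothetical; timestamps are supplied in ms to match the threshold setting, and the sketch ignores zones and schedules for brevity.

```python
class LoiteringDetector:
    """Hypothetical sketch of the loitering timer: a per-track stopwatch
    that fires once the same identity has stayed past the threshold."""

    def __init__(self, threshold_ms):
        self.threshold_ms = threshold_ms
        self.first_seen = {}   # track_id -> timestamp (ms) of first sighting
        self.alarmed = set()   # track_ids already flagged as loitering

    def update(self, track_id, now_ms, in_zone):
        if not in_zone:
            # Track left the zone (or was lost): reset its timer and
            # close out the loitering period if one was open.
            self.first_seen.pop(track_id, None)
            if track_id in self.alarmed:
                self.alarmed.discard(track_id)
                return "loitering-end"
            return None
        start = self.first_seen.setdefault(track_id, now_ms)
        if now_ms - start >= self.threshold_ms and track_id not in self.alarmed:
            self.alarmed.add(track_id)
            return "loitering-detected"
        return None
```

Because the timer is keyed by track ID, the analytic depends directly on the tracking/reidentification layer: if the ID changes, the stopwatch restarts.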
The Loitering analytic relies on configuring zones to define operation along with a threshold value. These parameters are configured using the settings panel by clicking on the gear icon in the top right corner of the dashboard and selecting the LOITERING tab.

To configure a zone click on the ADD ZONE button to open the zone configuration panel.

To create a zone:
- Name the zone
- Select the color for the zone from the color palette or using a hex value
- Click on the ADD POLYGON icon to add a polygon to the image field
- Move the polygon points to the desired location
- Select desired event notifications (loitering-start or loitering-stop)
- Click on the ADD ITEM button to create zone
ProTip:
- To edit a polygon, click on it
- To move a selected polygon, left click and drag
- To delete a selected polygon, press backspace
- To move a point of the selected polygon, left click and drag the point
- To add a point to a selected polygon, right click the desired location on the polygon frame
- To delete a point from the polygon, right click on the point
- To invert the selected region, check the Invert Regions check box
A tool is available to build automation schedules that define when incident notifications are sent to the dashboard.
To create a schedule:
- Turn on scheduling using the toggle
- Click on the calendar schedule to open the clock / calendar selector
- Provide a name for the schedule
- Click on the Start Time to define the schedule start time
- Click on the End Time to define the schedule end time
- Click on the days of the week to add them to the selected days
- Click on ADD ITEM to create the schedule
- Loitering Threshold
- The loitering threshold setting is a timer in ms that determines the length of time a person must be tracked before they are considered to be loitering.
Unlike single-event incidents, a loitering event has a defined start and end period. This creates a traceable event over time that is more useful from an analysis perspective than a single event. It is visualized in the playback timeline as a continuous bar from the start to the end. The following event notifications are provided.
Loitering-detected Event
- Is triggered when a person remains inside a loitering zone for longer than the threshold value.
Loitering-ends Event
- Is triggered when a person exits a loitering zone or is no longer detected.
The STATISTICS view contains a chart to help visualize event activity. Time periods of charts can be adjusted by using the time range selector in the top right corner of the statistics view. The following statistics relate to the Loitering analytic:

The motionless analytic makes it possible to determine when a person who has been moving has stopped moving. This is a useful analytic for:
- Detecting if someone is trying to obfuscate themselves
- Detecting if someone has fallen asleep or is otherwise unconscious
The motionless analytic relies on the tracking and reidentification feature to assign each person in the field of view with a unique identity number and track them frame-over-frame. Once a person is detected, the motionless analytic stores the localization information. This initialized localization is calculated by creating a mean value derived from a buffer of multiple detections. Periodically, the current localization information is compared against the initialized localization value using Euclidean distance. A threshold value is used to determine the maximum amount of motion tolerated before an event is triggered.
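The comparison described above can be sketched as follows. This simplified Python sketch keeps only the buffered mean and the Euclidean distance check, omitting the sample-window direction vector, miss counting and alarm delays; the parameter names mirror the settings list below but the code is illustrative, not the shipped implementation.

```python
from collections import deque
import math

class MotionlessDetector:
    """Hypothetical sketch: the first few localizations are averaged
    into a reference point, and later positions are compared to it by
    Euclidean distance against a pixel threshold."""

    def __init__(self, buffer_size=10, max_distance=20.0):
        self.buffer_size = buffer_size      # cf. Maximum Buffer Size
        self.max_distance = max_distance    # cf. Maximum Euclidean Distance
        self.points = deque(maxlen=buffer_size)

    def reference(self):
        # Mean of the buffered detections = initialized localization
        xs = [p[0] for p in self.points]
        ys = [p[1] for p in self.points]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def update(self, point):
        """Returns True while the object counts as motionless."""
        if len(self.points) < self.buffer_size:
            self.points.append(point)   # still initialising the mean
            return False
        ref = self.reference()
        dist = math.hypot(point[0] - ref[0], point[1] - ref[1])
        return dist <= self.max_distance
```

In the real analytic a result of True would still have to persist for the Alarm Delay period before a Motionless Person event is raised.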
The motionless analytic relies on configuring zones to define operation along with threshold values. These parameters are configured using the settings panel by clicking on the gear icon in the top right corner of the dashboard and selecting the MOTIONLESS tab.

To configure a zone click on the ADD ZONE button to open the zone configuration panel.

To create a zone:
- Name the zone
- Select the color for the zone from the color palette or using a hex value
- Click on the ADD POLYGON icon to add a polygon to the image field
- Move the polygon points to the desired location
- Select desired event notifications (motionless-start, motionless-stop)
- Click on the ADD ITEM button to create zone
ProTip:
- To edit a polygon, click on it
- To move a selected polygon, left click and drag
- To delete a selected polygon, press backspace
- To move a point of the selected polygon, left click and drag the point
- To add a point to a selected polygon, right click the desired location on the polygon frame
- To delete a point from the polygon, right click on the point
- To invert the selected region, check the Invert Regions check box
A tool is available to build automation schedules that define when incident notifications are sent to the dashboard.
To create a schedule:
- Turn on scheduling using the toggle
- Click on the calendar schedule to open the clock / calendar selector
- Provide a name for the schedule
- Click on the Start Time to define the schedule start time
- Click on the End Time to define the schedule end time
- Click on the days of the week to add them to the selected days
- Click on ADD ITEM to create the schedule
- Initial Tracks
- Defines the number of initial tracks stored until the analytic begins.
- Maximum Buffer Size
- Maximum buffer size of stored points to calculate mean.
- Sample Size
- Sample windows of tracklets to derive direction vector.
- Alarm Delay (off)
- Minimum number of ms before a motionless event can be triggered again.
- Maximum Misses
- Maximum number of consecutive misses until an object is classified as moving.
- Maximum Euclidean Distance
- Maximum distance in pixels that an object can move and still be classified as motionless.
- Alarm Delay
- Minimum number of ms an object must remain motionless before triggering an event.
Motionless Person
- Is triggered when a person remains in a motionless state, inside a motionless zone, for longer than the threshold values.
End of Motionless Period
- Is triggered when a person stops being motionless or is no longer detected in a motionless zone.
The STATISTICS view contains a chart to help visualize event activity. Time periods of charts can be adjusted by using the time range selector in the top right corner of the statistics view. The following statistics relate to the Motionless analytic:
The VIDEO settings provide a method to change the input video source into the analytics system. Video sources can include local V4L2 camera sources (such as a MIPI-CSI or USB camera), an RTSP camera stream from a remote IP camera or a local video file using a compatible video format. The video source can be configured using the settings panel by clicking on the gear icon in the top right corner of the dashboard and selecting the VIDEO tab.
The video source settings table contains the following:
- Active - Current video source in use
- Name, Location, Source Type - Configurable parameters used to note the details of the video source
- Source - GStreamer pipeline detail required to specify the video data source, format, size and any required conversion options
NOTE: By default, zones are erased when a video source is changed. This occurs to avoid any unexpected events when reconfiguring the video source to a format, video or perspective that is significantly different from the prior source. To disable this, deselect the Erase Zones When Video Source is Changed option.
The following test sources are provided:
- A webcam test source validated with a Logi C920 USB (UVC compatible) webcam.
- A video file test source to help evaluate analytics. A video demo has been created to illustrate features and operation using this test video.
The Video Sources table displays a list of available video sources. To add a source, click on the (+) button, complete the form and submit by using the ADD ITEM button. Example sources are provided for USB webcam and RTSP source.
If you are unfamiliar with GStreamer pipelines, examples are provided for file, V4L2 (USB camera) and RTSP streams. Synaptics has provided comprehensive documentation on how to configure various camera inputs. Refer to the Astra Yocto Linux User Guide for additional GStreamer pipeline examples for MIPI, USB (UVC) and RTSP camera sources.
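For orientation, pipeline fragments for the three source types typically look like the following. These are illustrative only: the device nodes, URLs and file paths are placeholders, the element names are generic software GStreamer elements, and the SL1680 generally uses its own hardware-accelerated elements, so the exact pipelines expected by the Source field may differ; refer to the Astra Yocto Linux User Guide and the provided example sources for the authoritative versions.

```
# USB (UVC) camera via V4L2 -- device node /dev/video0 is an assumption
v4l2src device=/dev/video0 ! video/x-raw,width=640,height=360 ! videoconvert

# RTSP stream from a remote IP camera -- URL is a placeholder
rtspsrc location=rtsp://camera-address:554/stream ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert

# Local video file -- path is a placeholder under the upload directory
filesrc location=/src/streamproc/data/videos/sample.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert
```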
The Upload Local Video File tool uploads a video file to the device's local storage under the platform file:///src/streamproc/data/videos/ directory. To upload a file, click the UPLOAD LOCAL VIDEO FILE button and select the file you want to upload from your local machine. Selecting the file will open the Add Video Source dialog box, which contains the explicit path of the video on the device. Complete the Add Video Source dialog box information and submit by pressing ADD ITEM, then APPLY CHANGES. The software will automatically append additional parameters to the source information to maintain playback compatibility. A video file must be 640x360, encoded using H.264 or mJPEG, and saved as an .mp4 file.
Once a video is uploaded it will need to be selected as the Active Video Source. Uploaded test videos will playback in a loop by default.
A Minimum Confidence Threshold setting determines the minimum object detection confidence, as a percentage, required before an object is reported. A detection with lower confidence than the threshold will not display a bounding box and will not be identified or tracked by the system. This setting applies globally to all detections.
The Minimum Confidence Threshold can be configured using the settings panel by clicking on the gear icon in the top right corner of the dashboard and selecting the ADVANCED tab.
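The effect of the threshold can be sketched in one line: detections below it are dropped before any visualization or tracking happens. The function name, the 0.5 default and the detection dictionary shape below are assumptions for illustration.

```python
MIN_CONFIDENCE = 0.5  # example global threshold (50%), an assumed default

def filter_detections(detections, threshold=MIN_CONFIDENCE):
    """Drop detections below the minimum confidence threshold; only the
    survivors get bounding boxes and tracking. Illustrative sketch only."""
    return [d for d in detections if d["confidence"] >= threshold]
```

Raising the threshold reduces false positives at the cost of missing low-confidence (e.g. small or partially occluded) people.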
- Refer to the following link for additional uninstall, factory reset and troubleshooting instruction.
Copyright 2025 - Arcturus Networks Inc. All Rights Reserved.
Demo License and Terms
For licensing, support and customization, contact Arcturus:
- Toll free: (US/Canada): +1 866.733.8647 or +1 416.621.0125
- Contact Arcturus
- Website: www.arcturusnetworks.com
- We don't bite.