HassOS users must manage their configuration from the Home Assistant file editor by creating a file in the Home Assistant config directory called `frigate.yml`. For other installations, the default location for the config file is `/config/config.yml`. This can be overridden with the `CONFIG_FILE` environment variable. Camera-specific ffmpeg parameters are documented here.

NOTE: Environment variables that begin with `FRIGATE_` may be referenced in `{}`, e.g. `password: '{FRIGATE_MQTT_PASSWORD}'`.
The configuration accepts the following properties:

| property | type | description | default | required |
|---|---|---|---|---|
| cameras | object | see cameras | | ✅ |
| clips | object | see clips | {} | |
| database | object | see database | {} | |
| detect | object | see detect | {} | |
| detectors | object | see detectors | {"coral":{"device":"usb","type":"edgetpu"}} | ✅ |
| environment_vars | object | see environment_vars | {} | |
| ffmpeg | object | see ffmpeg | | |
| logger | object | see logger | | |
| model | object | see model | {} | |
| motion | object | see motion | {} | |
| mqtt | object | see mqtt | {} | ✅ |
| objects | object | see objects | {} | |
| record | object | see record | {} | |
| snapshots | object | see snapshots | {} | |
```yaml
detectors:
  coral:
    device: usb
    type: edgetpu

cameras:
  back:
    ffmpeg:
      inputs:
        - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
          roles:
            - detect
            - rtmp
    fps: 5
    height: 720
    width: 1280

mqtt:
  host: mqtt.server.com
```
Each of your cameras must be configured. The following is the minimum required to register a camera in Frigate.
```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live?channel=1
          roles:
            - detect
            - rtmp
        - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live?channel=0
          roles:
            - clips
            - record
    width: 1280
    height: 720
    fps: 5
```
The cameras config accepts the following properties:

| property | type | description | default | required |
|---|---|---|---|---|
| best_image_timeout | integer | Best image timeout | 60 | |
| clips | object | see clips | {} | |
| ffmpeg | object | see ffmpeg | | ✅ |
| fps | integer | | | |
| height | integer | | | ✅ |
| motion | object | see motion | {} | |
| mqtt | object | see mqtt | {} | |
| objects | object | see objects | {} | |
| record | object | see record | {} | |
| rtmp | object | see rtmp | {} | |
| snapshots | object | see snapshots | {} | |
| width | integer | | | ✅ |
| zones | object | see zones | {} | |
Overrides and camera-specific settings for saving clips. For more information about clips, see Clips
The individual camera config accepts the following properties
| property | type | description | default | required |
|---|---|---|---|---|
| enabled | boolean | Enables clips for the camera. This value can be set via MQTT and will be updated at startup based on the retained value. | true | ✅ |
| objects | Array<string> | Objects | ["person"] | |
| post_capture | integer | Post-capture time | 5 | |
| pre_capture | integer | Pre-capture time | 5 | |
| required_zones | Array<string> | Required zones | [] | |
| retain | object | see retain | {} | |
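Pulling these properties together, a camera-level clips config might look like the following sketch (the camera name `back` matches the examples above; the zone name `yard` is hypothetical):

```yaml
cameras:
  back:
    clips:
      enabled: true
      pre_capture: 5
      post_capture: 5
      objects:
        - person
      required_zones:
        - yard
```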
Camera override for retention settings (default: global values)
The retain config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |
```yaml
retain:
  default: 10
```

```yaml
retain:
  default: 3
  objects:
    person: 15
    dog: 1
```
Configure event retention differently for specific tracked objects:

| type | description |
|---|---|
| number | Number of days |
Up to 4 inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create clips from a higher resolution stream, or vice versa.
Configure FFMPEG process arguments.

The configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| global_args | string or Array<string> | Global arguments | "-hide_banner -loglevel warning" |
| hwaccel_args | string or Array<string> | Hardware acceleration args | |
| input_args | string or Array<string> | Input arguments | "-avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1" |
| output_args | object | see output_args | |
Arguments to apply by default to all processes
Hardware acceleration arguments. These are dependent on your system. For more information, see Hardware acceleration
FFMPEG arguments for the ffmpeg process for each video role
The output args config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| clips | string or Array<string> | Output args for clip streams | "-f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| detect | string or Array<string> | Output args for detect streams | "-f rawvideo -pix_fmt yuv420p" |
| record | string or Array<string> | Output args for record streams | "-f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| rtmp | string or Array<string> | Output args for rtmp streams | "-c copy -f flv" |
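As an illustration of overriding `hwaccel_args`, here is a VAAPI sketch. These exact arguments depend on your hardware and are shown only as an example — verify against the Hardware acceleration documentation before use:

```yaml
ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p
```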
Configure FFMPEG process arguments.

The camera ffmpeg config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| global_args | string or Array<string> | Global arguments | "-hide_banner -loglevel warning" |
| hwaccel_args | string or Array<string> | Hardware acceleration args | |
| input_args | string or Array<string> | Input arguments | "-avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1" |
| output_args | object | see output_args | |
Arguments to apply by default to all processes
Hardware acceleration arguments. These are dependent on your system. For more information, see Hardware acceleration
FFMPEG arguments for the ffmpeg process for each video role
The output args config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| clips | string or Array<string> | Output args for clip streams | "-f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| detect | string or Array<string> | Output args for detect streams | "-f rawvideo -pix_fmt yuv420p" |
| record | string or Array<string> | Output args for record streams | "-f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| rtmp | string or Array<string> | Output args for rtmp streams | "-c copy -f flv" |
Camera-level motion configuration.

The individual camera config accepts the following properties:

| property | type | description |
|---|---|---|
| mask | string or Array<string> | Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the video feed with Motion Boxes enabled to see what may be regularly detected as motion. For example, you may want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over-masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with Motion Boxes enabled again. |
```yaml
motion:
  mask: 0,900,1080,900,1080,1920,0,1920
```

```yaml
motion:
  mask:
    - 0,0,0,100,100,100,100,0
    - 900,80,900,150,820,150,820,80
```
The individual camera config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| bounding_box | boolean | Bounding box | true |
| crop | boolean | Crop | true |
| enabled | boolean | Enabled | true |
| height | integer | Height | 270 |
| required_zones | Array<string> | Required zones | [] |
| timestamp | boolean | Timestamp | true |
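Assuming the table above describes the camera-level `mqtt` snapshot section (inferred from the camera properties table, where `mqtt` sits between `motion` and `objects`), a sketch using the defaults would be:

```yaml
cameras:
  back:
    mqtt:
      enabled: true
      bounding_box: true
      crop: true
      height: 270
      timestamp: true
```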
The individual camera config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| filters | object | see filters | {} |
| mask | string | | |
| track | Array<string> | | ["person"] |
The configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| max_area | integer | Maximum width * height of the bounding box for the detected object | 24000000 |
| min_area | integer | Minimum width * height of the bounding box for the detected object | |
| min_score | number | Minimum score for the object to initiate tracking | 0.5 |
| threshold | number | Minimum decimal percentage for a tracked object's computed score to be considered a true positive | 0.7 |
```yaml
filters:
  dog:
    max_area: 20000
    threshold: 0.8
  person:
    min_area: 40000
```
The individual camera config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| enabled | boolean | | |
| retain_days | integer | | 30 |
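For example, a camera-level record override might look like this sketch (the 7-day retention value is illustrative):

```yaml
cameras:
  back:
    record:
      enabled: true
      retain_days: 7
```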
The individual camera config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| enabled | boolean | | true |
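For example, to disable the RTMP restream for a single camera:

```yaml
cameras:
  back:
    rtmp:
      enabled: false
```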
The snapshots config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| bounding_box | boolean | Bounding box | true |
| crop | boolean | Crop | true |
| enabled | boolean | Enabled | true |
| height | integer | Height | 270 |
| required_zones | Array<string> | Required zones | [] |
| timestamp | boolean | Timestamp | true |
The retain config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |
```yaml
retain:
  default: 10
```

```yaml
retain:
  default: 3
  objects:
    person: 15
    dog: 1
```
Configure event retention differently for specific tracked objects:

| type | description |
|---|---|
| number | Number of days |
Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.
During testing, draw_zones should be set in the config to draw the zone on the frames so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
To create a zone, follow the same steps above for a "Motion mask", but use the section of the web UI for creating a zone instead.
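A sketch of a zone definition is shown below. The zone name, coordinates, and filter values are illustrative — use points drawn from your own camera frame:

```yaml
cameras:
  back:
    zones:
      front_steps:
        coordinates: 545,1077,747,939,788,805
        filters:
          person:
            min_area: 5000
```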
Frigate can save video clips without any CPU overhead for encoding by simply copying the stream directly with FFmpeg. It leverages FFmpeg's segment functionality to maintain a cache of video for each camera. The cache files are written to disk at `/tmp/cache` and do not introduce memory overhead. When an object is being tracked, it will extend the cache to ensure it can assemble a clip when the event ends. Once the event ends, it again uses FFmpeg to assemble a clip by combining the video clips without any encoding by the CPU. Assembled clips are saved to `/media/frigate/clips`. Clips are retained according to the retention settings defined in the config for each object type.

These clips will not be playable in the web UI or in Home Assistant's media browser unless your camera sends video as h264.
The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| max_seconds | integer | Maximum length of time to retain video during long events. If an object is being tracked for longer than this amount of time, the cache will begin to expire and the resulting clip will be the last x seconds of the event. | 300 |
| retain | object | see retain | {} |
| tmpfs_cache_size | string | Size of the tmpfs mount to create for cache files, e.g. `mount -t tmpfs -o size={tmpfs_cache_size} tmpfs /tmp/cache`. Addon users must have Protection mode disabled for the addon when using this setting. Also, if you have mounted a tmpfs volume through docker, this value should not be set in your config. | |
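Combining these properties, a global clips config might look like this sketch (the tmpfs size shown is illustrative — size it for your cameras and cache needs):

```yaml
clips:
  max_seconds: 300
  tmpfs_cache_size: 256m
  retain:
    default: 10
```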
The retain config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |
```yaml
retain:
  default: 10
```

```yaml
retain:
  default: 3
  objects:
    person: 15
    dog: 1
```
Configure event retention differently for specific tracked objects:

| type | description |
|---|---|
| number | Number of days |
Event and clip information is managed in a sqlite database at `/media/frigate/clips/frigate.db`. If that database is deleted, clips will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.

If you are storing your clips on a network share (SMB, NFS, etc.), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary. This may need to be in a custom location if network storage is used for clips.
The frigate yaml configuration accepts the following properties:

| property | type | description |
|---|---|---|
| path | string | Database path |
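For example, to relocate the database onto local storage when clips live on a network share (the path shown is illustrative):

```yaml
database:
  path: /db/frigate.db
```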
Global object detection settings. These may also be defined at the camera level.
The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| max_disappeared | integer | Max frames for object gone | 25 |
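For example, to keep tracking an object for 50 frames of missed detections instead of the default 25:

```yaml
detect:
  max_disappeared: 50
```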
The default config will look for a USB Coral device. If you do not have a Coral, you will need to configure a CPU detector. If you have PCI or multiple Coral devices, you need to configure your detector devices in the config file. When using multiple detectors, they run in dedicated processes, but pull from a common queue of requested detections across all cameras.

Frigate supports `edgetpu` and `cpu` as detector types. The device value should be specified according to the documentation for the TensorFlow Lite Python API.

Note: There is no support for NVIDIA GPUs to perform object detection with TensorFlow. They can be used for ffmpeg decoding, but not object detection.
```yaml
detectors:
  coral:
    device: usb
    type: edgetpu
```

```yaml
detectors:
  coral1:
    device: usb:0
    type: edgetpu
  coral2:
    device: usb:1
    type: edgetpu
```
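If no Coral is available, a CPU detector can be configured instead, per the supported `cpu` type (the detector name `cpu1` is arbitrary):

```yaml
detectors:
  cpu1:
    type: cpu
```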
The detectors config accepts the following properties:

| property | type | description | default | required |
|---|---|---|---|---|
| device | string | Device name as accepted via the TensorFlow API | "usb" | |
| num_threads | number | num_threads value passed to the tflite.Interpreter. Note: this value is only used for CPU types | 3 | |
| type | one of 'edgetpu' or 'cpu' | Hardware detection device type | "edgetpu" | ✅ |
This section can be used to set environment variables for those unable to modify the environment of the container (e.g. within Hass.io).

IMPORTANT: You will need to restart Frigate for any changes to take effect.

| type | description |
|---|---|
| string | ENV var value |

```yaml
environment_vars:
  EXAMPLE_VAR: value
```
Configure FFMPEG process arguments.

The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| global_args | string or Array<string> | Global arguments | "-hide_banner -loglevel warning" |
| hwaccel_args | string or Array<string> | Hardware acceleration args | |
| input_args | string or Array<string> | Input arguments | "-avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1" |
| output_args | object | see output_args | |
Arguments to apply by default to all processes
Hardware acceleration arguments. These are dependent on your system. For more information, see Hardware acceleration
FFMPEG arguments for the ffmpeg process for each video role
The output args config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| clips | string or Array<string> | Output args for clip streams | "-f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| detect | string or Array<string> | Output args for detect streams | "-f rawvideo -pix_fmt yuv420p" |
| record | string or Array<string> | Output args for record streams | "-f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| rtmp | string or Array<string> | Output args for rtmp streams | "-c copy -f flv" |
Change the default log levels for troubleshooting purposes.

NOTE: All ffmpeg logs are sent at the `error` level by default.
The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| default | one of 'debug', 'info', 'warning', 'error', or 'critical' | | "info" |
| logs | object | see logs | |
```yaml
logger:
  default: warning
  logs:
    detector.<detector_name>: debug
    ffmpeg.<camera_name>.<role>: error
    frigate.app: debug
    frigate.edgetpu: info
    frigate.mqtt: debug
    watchdog.<camera_name>: debug
```
The frigate yaml configuration accepts the following properties:

| property | type | description | default | required |
|---|---|---|---|---|
| height | integer | | 320 | ✅ |
| width | integer | | 320 | ✅ |
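If you swap in a custom model with a different input resolution, the model section declares that resolution. A sketch using a hypothetical 416x416 model:

```yaml
model:
  width: 416
  height: 416
```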
Advanced configuration to change the sensitivity of motion detection.
The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| contour_area | integer | Minimum size in pixels in the resized motion image that counts as motion. Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller moving objects. | 100 |
| delta_alpha | number | Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames. Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion. Too low and a fast-moving person won't be detected as motion. | 0.2 |
| frame_alpha | number | Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background. Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster. Low values will cause things like moving shadows to be detected as motion for longer. | 0.2 |
| frame_height | integer | Height of the resized motion frame (default: 1/6th of the original frame height). This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion. | 100 |
| mask | string or Array<string> | | |
| threshold | integer | The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive. | 25 |
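As a sketch, tuning motion sensitivity might look like the following (the values shown are the documented defaults — adjust one at a time and observe the Motion Boxes debug view):

```yaml
motion:
  threshold: 25
  contour_area: 100
  delta_alpha: 0.2
  frame_alpha: 0.2
```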
The frigate yaml configuration accepts the following properties:

| property | type | description | default | required |
|---|---|---|---|---|
| client_id | string | MQTT client ID – must be unique if you are running multiple instances | "frigate" | |
| host | string | MQTT host name or IP address | | ✅ |
| password | string | | | |
| port | number | MQTT port | 1883 | |
| stats_interval | number | Interval in seconds for publishing Frigate internal stats to MQTT. Available at `<topic_prefix>/stats` | 60 | |
| topic_prefix | string | MQTT topic prefix – must be unique if you are running multiple instances | "frigate" | |
| user | string | | | |
```yaml
mqtt:
  host: 10.0.1.123
  password: '{FRIGATE_MQTT_PASSWORD}'
  user: '{FRIGATE_MQTT_USER}'
```
Track specific objects and apply filters to each.

The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| filters | object | see filters | {} |
| track | Array<string> | | ["person"] |
The configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| max_area | integer | Maximum width * height of the bounding box for the detected object | 24000000 |
| min_area | integer | Minimum width * height of the bounding box for the detected object | |
| min_score | number | Minimum score for the object to initiate tracking | 0.5 |
| threshold | number | Minimum decimal percentage for a tracked object's computed score to be considered a true positive | 0.7 |
```yaml
filters:
  dog:
    max_area: 20000
    threshold: 0.8
  person:
    min_area: 40000
```
24/7 recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM/DD/HH/<camera_name>/MM.SS.mp4`. These recordings are written directly from your camera stream without re-encoding and are available in Home Assistant's media browser. Each camera supports a configurable retention policy in the config.
The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| enabled | boolean | | |
| retain_days | integer | | 30 |
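For example, enabling 24/7 recording globally with the default 30-day retention:

```yaml
record:
  enabled: true
  retain_days: 30
```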
Frigate can save a snapshot image to `/media/frigate/clips` for each event, named as `<camera>-<id>.jpg`.
The frigate yaml configuration accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| enabled | boolean | | |
| retain | object | see retain | {} |
```yaml
snapshots:
  enabled: true
  retain:
    default: 5
    objects:
      person: 10
      dog: 1
      car: 5
```
The retain config accepts the following properties:

| property | type | description | default |
|---|---|---|---|
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |
```yaml
retain:
  default: 10
```

```yaml
retain:
  default: 3
  objects:
    person: 15
    dog: 1
```
Configure event retention differently for specific tracked objects:

| type | description |
|---|---|
| number | Number of days |