Auto-formatting JSONSchema to a single Markdown documentation file

@paularmstrong · Last active March 4, 2021

Frigate yaml configuration

HassOS users must manage their configuration from the Home Assistant file editor by creating a file in the Home Assistant config directory called frigate.yml. For other installations, the default location for the config file is /config/config.yml. This can be overridden with the CONFIG_FILE environment variable. Camera-specific ffmpeg parameters are documented here.

NOTE: Environment variables that begin with FRIGATE_ may be referenced in {}. e.g. password: '{FRIGATE_MQTT_PASSWORD}'

The configuration accepts the following properties:

| property | type | description | default | required |
| -------- | ---- | ----------- | ------- | -------- |
| cameras | object | see cameras | | ✓ |
| clips | object | see clips | `{}` | |
| database | object | see database | `{}` | |
| detect | object | see detect | `{}` | |
| detectors | object | see detectors | `{"coral":{"device":"usb","type":"edgetpu"}}` | ✓ |
| environment_vars | object | see environment_vars | `{}` | |
| ffmpeg | object | see ffmpeg | | |
| logger | object | see logger | | |
| model | object | see model | `{}` | |
| motion | object | see motion | `{}` | |
| mqtt | object | see mqtt | `{}` | ✓ |
| objects | object | see objects | `{}` | |
| record | object | see record | `{}` | |
| snapshots | object | see snapshots | `{}` | |

Examples

detectors:
  coral:
    device: usb
    type: edgetpu
cameras:
  back:
    ffmpeg:
      inputs:
        - path: >-
            rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
          roles:
            - detect
            - rtmp
    fps: 5
    height: 720
    width: 1280
mqtt:
  host: mqtt.server.com

Cameras

Each of your cameras must be configured. The following is the minimum required to register a camera in Frigate.

Examples

cameras:
  back:
    ffmpeg:
      inputs:
        - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live?channel=1
          roles:
            - detect
            - rtmp
        - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live?channel=0
          roles:
            - clips
            - record
    width: 1280
    height: 720
    fps: 5

Individual camera config

The cameras object accepts the following properties:

| property | type | description | default | required |
| -------- | ---- | ----------- | ------- | -------- |
| best_image_timeout | integer | Timeout (in seconds) for highest scoring image before allowing it to be replaced by a newer image | 60 | |
| clips | object | see clips | `{}` | |
| ffmpeg | object | see ffmpeg | | ✓ |
| fps | integer | Desired fps for the input with the `detect` role; recommended value of 5 (Frigate will attempt to autodetect if not specified) | | |
| height | integer | Height of the ffmpeg input used for the `detect` role | | ✓ |
| motion | object | see motion | `{}` | |
| mqtt | object | see mqtt | `{}` | |
| objects | object | see objects | `{}` | |
| record | object | see record | `{}` | |
| rtmp | object | see rtmp | `{}` | |
| snapshots | object | see snapshots | `{}` | |
| width | integer | Width of the ffmpeg input used for the `detect` role | | ✓ |
| zones | object | see zones | `{}` | |

Clips

Overrides and camera-specific settings for saving clips. For more information about clips, see Clips

The camera-level clips config accepts the following properties:

| property | type | description | default | required |
| -------- | ---- | ----------- | ------- | -------- |
| enabled | boolean | Enables clips for the camera. This value can be set via MQTT and will be updated on startup based on the retained value | true | ✓ |
| objects | Array<string> | Objects to save clips for | `["person"]` | |
| post_capture | integer | Number of seconds after the event to include in the clips | 5 | |
| pre_capture | integer | Number of seconds before the event to include in the clips | 5 | |
| required_zones | Array<string> | Restrict clips to objects that entered any of the listed zones | `[]` | |
| retain | object | see retain | `{}` | |

Event retention

Camera override for retention settings (default: global values)

The retain object accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |

Examples

retain:
  default: 10

retain:
  default: 3
  objects:
    person: 15
    dog: '1'

Object-specific retention

Configure event retention differently for specific tracked objects

| type | description |
| ---- | ----------- |
| number | Number of days |

Camera ffmpeg inputs

Up to 4 inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create clips from a higher resolution stream, or vice versa.
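
For example, mirroring the examples embedded in the schema below, a camera can use a low-resolution sub stream for detection and a high-quality stream for clips and recordings (the stream URLs are illustrative):

ffmpeg:
  inputs:
    - path: rtsp://10.0.1.12:554/sub_stream
      roles:
        - detect
        - rtmp
    - path: rtsp://10.0.1.12:554/hq_stream
      roles:
        - clips
        - record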

FFMPEG arguments

Configure FFMPEG process arguments

The configuration accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| global_args | string or Array<string> | Global arguments | "-hide_banner -loglevel warning" |
| hwaccel_args | string or Array<string> | Hardware acceleration args | |
| input_args | string or Array<string> | Input arguments | "-avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1" |
| output_args | object | see output_args | |

Global arguments

Arguments to apply by default to all processes

Hardware acceleration args

Hardware acceleration arguments. These are dependent on your system. For more information, see Hardware acceleration

Input arguments

Output arguments

FFMPEG arguments for the ffmpeg process for each video role

The output_args object accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| clips | string or Array<string> | Output args for clip streams | "-f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| detect | string or Array<string> | Output args for detect streams | "-f rawvideo -pix_fmt yuv420p" |
| record | string or Array<string> | Output args for record streams | "-f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| rtmp | string or Array<string> | Output args for rtmp streams | "-c copy -f flv" |

In addition to inputs, the camera-level ffmpeg config accepts the same FFMPEG argument properties documented above (global_args, hwaccel_args, input_args, and output_args).

Motion

Camera level motion configuration

The camera-level motion config accepts the following properties:

| property | type | description |
| -------- | ---- | ----------- |
| mask | string or Array<string> | Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the video feed with Motion Boxes enabled to see what may be regularly detected as motion. For example, you may want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over-masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with Motion Boxes enabled again. |

Examples

motion:
  mask: 0,900,1080,900,1080,1920,0,1920

motion:
  mask:
    - 0,0,0,100,100,100,100,0
    - 900,80,900,150,820,150,820,80

MQTT

The camera-level mqtt config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| bounding_box | boolean | Bounding box | true |
| crop | boolean | Crop | true |
| enabled | boolean | Enabled | true |
| height | integer | Height | 270 |
| required_zones | Array<string> | Required zones | `[]` |
| timestamp | boolean | Timestamp | true |
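
A minimal sketch of a camera-level mqtt override; the camera name, zone name, and values are hypothetical:

cameras:
  back:
    mqtt:
      crop: true
      height: 480            # illustrative; the default is 270
      required_zones:
        - yard               # hypothetical zone name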

Objects

The camera-level objects config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| filters | object | see filters | `{}` |
| mask | string | e.g. `0,0,1000,0,1000,200,0,200` | |
| track | Array<string> | Objects to track | `["person"]` |

Object filters

The configuration accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| max_area | integer | Maximum `width * height` of the bounding box for the detected object | 24000000 |
| min_area | integer | Minimum `width * height` of the bounding box for the detected object | 0 |
| min_score | number | Minimum score for the object to initiate tracking | 0.5 |
| threshold | number | Minimum decimal percentage for tracked object's computed score to be considered a true positive | 0.7 |

Examples

filters:
  dog:
    max_area: 20000
    threshold: 0.8
  person:
    min_area: 40000

Recording

The camera-level record config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| enabled | boolean | | false |
| retain_days | integer | | 30 |

RTMP re-stream

The camera-level rtmp config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| enabled | boolean | | true |

Snapshots

The snapshots config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| bounding_box | boolean | Draw bounding box on the snapshots | true |
| crop | boolean | Crop the snapshot | true |
| enabled | boolean | Enable writing jpg snapshot to `/media/frigate/clips`. This value can be set via MQTT and will be updated on startup based on the retained value | true |
| height | integer | Height to resize the snapshot to | 270 |
| required_zones | Array<string> | Restrict snapshots to objects that entered any of the listed zones | `[]` |
| timestamp | boolean | Print a timestamp on the snapshots | true |

Event retention

The retain object accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |

Examples

retain:
  default: 10

retain:
  default: 3
  objects:
    person: 15
    dog: '1'

Object-specific retention

Configure event retention differently for specific tracked objects

| type | description |
| ---- | ----------- |
| number | Number of days |

Zones

Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.

During testing, draw_zones should be set in the config to draw the zone on the frames so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.

To create a zone, follow the same steps above for a "Motion mask", but use the section of the web UI for creating a zone instead.
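
A sketch of a zone definition based on the schema (coordinates use the same x,y point-list format as masks; the camera name, zone name, coordinates, and filter value are hypothetical):

cameras:
  back:
    zones:
      yard:
        coordinates: 0,900,1080,900,1080,1920,0,1920
        filters:
          person:
            min_area: 5000   # illustrative per-zone object filter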

Clips

Frigate can save video clips without any CPU overhead for encoding by simply copying the stream directly with FFmpeg. It leverages FFmpeg's segment functionality to maintain a cache of video for each camera. The cache files are written to disk at /tmp/cache and do not introduce memory overhead. When an object is being tracked, it will extend the cache to ensure it can assemble a clip when the event ends. Once the event ends, it again uses FFmpeg to assemble a clip by combining the video clips without any encoding by the CPU. Assembled clips are saved to /media/frigate/clips. Clips are retained according to the retention settings defined on the config for each object type.

These clips will not be playable in the web UI or in HomeAssistant's media browser unless your camera sends video as h264.

The clips config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| max_seconds | integer | Maximum length of time to retain video during long events. If an object is being tracked for longer than this amount of time, the cache will begin to expire and the resulting clip will be the last x seconds of the event. | 300 |
| retain | object | see retain | `{}` |
| tmpfs_cache_size | string | Size of tmpfs mount to create for cache files, e.g. `mount -t tmpfs -o size={tmpfs_cache_size} tmpfs /tmp/cache` | |

NOTE: Addon users must have Protection mode disabled for the addon when using tmpfs_cache_size. Also, if you have mounted a tmpfs volume through docker, this value should not be set in your config.
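
Putting these properties together, a sketch of a global clips config (the 256m value comes from the schema's own examples; omit tmpfs_cache_size if you already mount a tmpfs volume through docker):

clips:
  max_seconds: 300           # default
  tmpfs_cache_size: 256m
  retain:
    default: 10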

Event retention

The retain object accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |

Examples

retain:
  default: 10

retain:
  default: 3
  objects:
    person: 15
    dog: '1'

Object-specific retention

Configure event retention differently for specific tracked objects

| type | description |
| ---- | ----------- |
| number | Number of days |

Database

Event and clip information is managed in a sqlite database at /media/frigate/clips/frigate.db. If that database is deleted, clips will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within HomeAssistant.

If you are storing your clips on a network share (SMB, NFS, etc), you may get a database is locked error message on startup. You can customize the location of the database in the config if necessary.

This may need to be in a custom location if network storage is used for clips.

The database config accepts the following properties:

| property | type | description |
| -------- | ---- | ----------- |
| path | string | Absolute path to the location of your Frigate database |
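
A minimal sketch pointing the database at a custom location, e.g. when clips live on network storage (the path shown is hypothetical):

database:
  path: /config/frigate.db   # default is /media/frigate/clips/frigate.db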

Detect

Global object detection settings. These may also be defined at the camera level.

The detect config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| max_disappeared | integer | Number of frames without a detection before Frigate considers an object to be gone (default: 5x the frame rate) | 25 |

Detectors

The default config will look for a USB Coral device. If you do not have a Coral, you will need to configure a CPU detector. If you have PCI or multiple Coral devices, you need to configure your detector devices in the config file. When using multiple detectors, they run in dedicated processes, but pull from a common queue of requested detections across all cameras.

Frigate supports edgetpu and cpu as detector types. The device value should be specified according to the documentation for the TensorFlow Lite Python API.

Note: There is no support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection.

Examples

detectors:
  coral:
    device: usb
    type: edgetpu
detectors:
  coral1:
    device: usb:0
    type: edgetpu
  coral2:
    device: usb:1
    type: edgetpu

Detector properties

The detectors config accepts the following properties:

| property | type | description | default | required |
| -------- | ---- | ----------- | ------- | -------- |
| device | string | Device name as accepted via the TensorFlow API | "usb" | |
| num_threads | number | num_threads value passed to the tflite.Interpreter (note: this value is only used for cpu types) | 3 | |
| type | 'edgetpu' or 'cpu' | Hardware detection device type | "edgetpu" | ✓ |
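
If no Coral is available, a CPU detector can be configured instead; a minimal sketch (the detector name cpu1 is arbitrary):

detectors:
  cpu1:
    type: cpu
    num_threads: 3           # only used for cpu type; default is 3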

Environment variables

This section can be used to set environment variables for those unable to modify the environment of the container (i.e. within Hass.io).

IMPORTANT: You will need to restart Frigate for any changes to take effect

| type | description |
| ---- | ----------- |
| string | ENV var value |

Examples

environment_vars:
  EXAMPLE_VAR: value

FFMPEG arguments

Configure FFMPEG process arguments

The top-level ffmpeg config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| global_args | string or Array<string> | Global arguments | "-hide_banner -loglevel warning" |
| hwaccel_args | string or Array<string> | Hardware acceleration args | |
| input_args | string or Array<string> | Input arguments | "-avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1" |
| output_args | object | see output_args | |

Global arguments

Arguments to apply by default to all processes

Hardware acceleration args

Hardware acceleration arguments. These are dependent on your system. For more information, see Hardware acceleration

Input arguments

Output arguments

FFMPEG arguments for the ffmpeg process for each video role

The output_args object accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| clips | string or Array<string> | Output args for clip streams | "-f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| detect | string or Array<string> | Output args for detect streams | "-f rawvideo -pix_fmt yuv420p" |
| record | string or Array<string> | Output args for record streams | "-f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an" |
| rtmp | string or Array<string> | Output args for rtmp streams | "-c copy -f flv" |
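
As a sketch of overriding these defaults globally, the following sets hwaccel_args using a common pattern for Intel GPUs via VAAPI; hardware acceleration arguments are system-dependent, so verify them against the Hardware acceleration documentation for your platform:

ffmpeg:
  hwaccel_args:              # VAAPI example; adjust device path for your system
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p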

Logger

Change the default log levels for troubleshooting purposes.

NOTE: all ffmpeg logs are sent as error level by default.

The logger config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| default | one of 'debug', 'info', 'warning', 'error', or 'critical' | | "info" |
| logs | object | see logs | |

Examples

logger:
  default: warning
  logs:
    detector.<detector_name>: debug
    ffmpeg.<camera_name>.<role>: error
    frigate.app: debug
    frigate.edgetpu: info
    frigate.mqtt: debug
    watchdog.<camera_name>: debug

Model

The model config accepts the following properties:

| property | type | description | default | required |
| -------- | ---- | ----------- | ------- | -------- |
| height | integer | | 320 | ✓ |
| width | integer | | 320 | ✓ |
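
Both dimensions are required whenever a model section is present; a minimal sketch restating the defaults:

model:
  width: 320
  height: 320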

Motion

Advanced configuration to change the sensitivity of motion detection.

The motion config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| contour_area | integer | Minimum size in pixels in the resized motion image that counts as motion. Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller moving objects. | 100 |
| delta_alpha | number | Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames. Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion. Too low and a fast moving person won't be detected as motion. | 0.2 |
| frame_alpha | number | Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background. Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster. Low values will cause things like moving shadows to be detected as motion for longer. | 0.2 |
| frame_height | integer | Height of the resized motion frame (default: 1/6th of the original frame height). This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion. | 100 |
| mask | string or Array<string> | | |
| threshold | integer | The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. Increasing this value will make motion detection less sensitive and decreasing it will make it more sensitive. | 25 |
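
A sketch of tuning motion sensitivity with the properties above; the values are illustrative nudges from the defaults:

motion:
  threshold: 30        # default 25; higher makes motion detection less sensitive
  contour_area: 150    # default 100; higher ignores smaller areas of motion
  frame_height: 100    # default is roughly 1/6th of the original frame height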

MQTT

The mqtt config accepts the following properties:

| property | type | description | default | required |
| -------- | ---- | ----------- | ------- | -------- |
| client_id | string | MQTT client ID – must be unique if you are running multiple instances | "frigate" | |
| host | string | MQTT host name or IP address | | ✓ |
| password | string | | | |
| port | number | MQTT port | 1883 | |
| stats_interval | number | Interval in seconds for publishing Frigate internal stats to MQTT. Available at `#/<topic_prefix>/stats` | 60 | |
| topic_prefix | string | MQTT topic prefix – must be unique if you are running multiple instances | "frigate" | |
| user | string | | | |

Examples

mqtt:
  host: 10.0.1.123
  password: '{FRIGATE_MQTT_PASSWORD}'
  user: '{FRIGATE_MQTT_USER}'

Objects

Track specific objects and apply filters to each

The objects config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| filters | object | see filters | `{}` |
| track | Array<string> | Objects to track | `["person"]` |
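
track defaults to ["person"]; a minimal sketch tracking an additional object type (car is illustrative):

objects:
  track:
    - person
    - car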

Filters

The configuration accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| max_area | integer | Maximum `width * height` of the bounding box for the detected object | 24000000 |
| min_area | integer | Minimum `width * height` of the bounding box for the detected object | 0 |
| min_score | number | Minimum score for the object to initiate tracking | 0.5 |
| threshold | number | Minimum decimal percentage for tracked object's computed score to be considered a true positive | 0.7 |

Examples

filters:
  dog:
    max_area: 20000
    threshold: 0.8
  person:
    min_area: 40000

24/7 Recordings

24/7 recordings can be enabled and are stored at /media/frigate/recordings. The folder structure for the recordings is YYYY-MM/DD/HH/<camera_name>/MM.SS.mp4. These recordings are written directly from your camera stream without re-encoding and are available in HomeAssistant's media browser. Each camera supports a configurable retention policy in the config.

The record config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| enabled | boolean | | false |
| retain_days | integer | | 30 |
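
A sketch enabling 24/7 recordings with a shorter, illustrative retention:

record:
  enabled: true
  retain_days: 7       # default is 30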

Snapshots

Frigate can save a snapshot image to /media/frigate/clips for each event named as <camera>-<id>.jpg.

The snapshots config accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| enabled | boolean | | true |
| retain | object | see retain | `{}` |

Examples

snapshots:
  enabled: true
  retain:
    default: 5
    objects:
      person: 10
      dog: 1
      car: 5

Event retention

The retain object accepts the following properties:

| property | type | description | default |
| -------- | ---- | ----------- | ------- |
| default | number | Number of days to keep events in the database | 10 |
| objects | object | see objects | |

Examples

retain:
  default: 10

retain:
  default: 3
  objects:
    person: 15
    dog: '1'

Object-specific retention

Configure event retention differently for specific tracked objects

| type | description |
| ---- | ----------- |
| number | Number of days |

JSONSchema source

{
"$id": "https://blakeblackshear.github.io/frigate",
"$schema": "http://json-schema.org/draft-07/schema#",
"additionalProperties": false,
"definitions": {
"detect": {
"$id": "#detect",
"properties": {
"max_disappeared": {
"default": 25,
"title": "Max frames for object gone",
"description": "Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)",
"type": "integer"
}
},
"type": "object"
},
"ffmpeg": {
"$id": "#ffmpeg",
"description": "Configure FFMPEG process arguments",
"properties": {
"global_args": {
"$ref": "#/definitions/string-or-array",
"default": "-hide_banner -loglevel warning",
"description": "Arguments to apply by default to all processes",
"title": "Global arguments"
},
"hwaccel_args": {
"$ref": "#/definitions/string-or-array",
"default": "",
"description": "Hardware acceleration arguments. These are dependent on your system. For more information, see [Hardware acceleration](https://blakeblackshear.github.io/frigate/)",
"title": "Hardware acceleration args"
},
"input_args": {
"$ref": "#/definitions/string-or-array",
"default": "-avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1",
"title": "Input arguments"
},
"output_args": {
"description": "FFMPEG arguments for the ffmpeg process for each video role",
"properties": {
"clips": {
"description": "Output args for clip streams",
"$ref": "#/definitions/string-or-array",
"default": "-f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an"
},
"detect": {
"description": "Output args for detect streams",
"$ref": "#/definitions/string-or-array",
"default": "-f rawvideo -pix_fmt yuv420p"
},
"record": {
"description": "Output args for record streams",
"$ref": "#/definitions/string-or-array",
"default": "-f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an"
},
"rtmp": {
"description": "Output args for rtmp streams",
"$ref": "#/definitions/string-or-array",
"default": "-c copy -f flv"
}
},
"title": "Output argumements",
"type": "object"
}
},
"title": "FFMPEG arguments",
"type": "object"
},
"filters": {
"$id": "#filters",
"additionalProperties": {
"default": {
"max_area": 24000000,
"min_area": 0,
"min_score": 0.5,
"threshold": 0.7
},
"properties": {
"max_area": {
"default": 24000000,
"description": "maximum `width * height` of the bounding box for the detected object",
"type": "integer"
},
"min_area": {
"default": 0,
"description": "minimum `width * height` of the bounding box for the detected object",
"type": "integer"
},
"min_score": {
"default": 0.5,
"description": "minimum score for the object to initiate tracking",
"type": "number"
},
"threshold": {
"default": 0.7,
"description": "minimum decimal percentage for tracked object's computed score to be considered a true positive",
"type": "number"
}
},
"examples": [
{
"filters": {
"dog": {
"max_area": 20000,
"threshold": 0.8
},
"person": {
"min_area": 40000
}
}
}
],
"title": "Filters",
"type": "object"
},
"default": {},
"properties": {},
"type": "object"
},
"log-level": {
"enum": ["debug", "info", "warning", "error", "critical"],
"type": "string"
},
"mask": {
"$ref": "#/definitions/string-or-array"
},
"motion": {
"$id": "#motion",
"title": "Motion detection",
"description": "Advanced configuration to change the sensitivity of motion detection.",
"properties": {
"contour_area": {
"default": 100,
"description": "Minimum size in pixels in the resized motion image that counts as motion. Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller moving objects.",
"type": "integer"
},
"delta_alpha": {
"default": 0.2,
"description": "Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames. Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion. Too low and a fast moving person wont be detected as motion.",
"type": "number"
},
"frame_alpha": {
"default": 0.2,
"description": "Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background. Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster. Low values will cause things like moving shadows to be detected as motion for longer. [More info](https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/).",
"type": "number"
},
"frame_height": {
"default": 100,
"description": "Height of the resized motion frame (default: 1/6th of the original frame height). This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion.",
"type": "integer"
},
"mask": {
"$ref": "#/definitions/mask"
},
"threshold": {
"default": 25,
"description": "The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion.\nIncreasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.",
"maximum": 255,
"minimum": 0,
"type": "integer"
}
},
"title": "Motion",
"type": "object"
},
"objects": {
"$id": "#objects",
"additionalProperties": false,
"default": {},
"title": "Object tracking & filters",
"description": "Track specific objects and apply filters to each",
"properties": {
"filters": {
"$ref": "#/definitions/filters",
"default": {}
},
"track": {
"default": ["person"],
"items": {
"type": "string"
},
"type": "array"
}
},
"type": "object"
},
"retain": {
"$id": "#retain",
"examples": [
{
"retain": {
"default": 10
}
},
{
"retain": {
"default": 3,
"objects": { "person": 15, "dog": "1" }
}
}
],
"title": "Event retention",
"properties": {
"default": {
"default": 10,
"type": "number",
"description": "Number of days to keep events in the database"
},
"objects": {
"additionalProperties": {
"type": "number",
"title": "Number of days"
},
"properties": {},
"type": "object",
"description": "Configure event retention differently for specific tracked objects",
"title": "Object-specific retention"
}
},
"type": "object"
},
"snapshots": {
"additionalProperties": false,
"properties": {
"bounding_box": {
"default": true,
"title": "Bounding box",
"description": "Draw bounding box on the snapshots",
"type": "boolean"
},
"crop": {
"title": "Crop",
"description": "Crop the snapshot",
"default": true,
"type": "boolean"
},
"enabled": {
"title": "Enabled",
"description": "Enable writing jpg snapshot to `/media/frigate/clips`. This value can be set via MQTT and will be updated in startup based on retained value",
"default": true,
"type": "boolean"
},
"height": {
"title": "Height",
"description": "Height to resize the snapshot to",
"default": 270,
"type": "integer"
},
"required_zones": {
"default": [],
"items": {
"type": "string"
},
"title": "Required zones",
"description": "Restrict snapshots to objects that entered any of the listed zones",
"type": "array",
"uniqueItems": true
},
"timestamp": {
"title": "Timestamp",
"description": "Print a timestamp on the snapshots",
"default": true,
"type": "boolean"
}
},
"title": "Snapshots",
"$id": "#snapshots",
"type": "object"
},
"string-or-array": {
"$id": "#string-or-array",
"oneOf": [
{
"type": "string",
"title": "Single value"
},
{
"items": {
"type": "string",
"title": "String"
},
"title": "Multiple values",
"type": "array"
}
]
}
},
"description": "HassOS users must manage their configuration from the Home Assistant file editor by creating a file in the Home Assistant config directory called `frigate.yml`. For other installations, the default location for the config file is `/config/config.yml`. This can be overridden with the `CONFIG_FILE` environment variable. Camera specific ffmpeg parameters are documented here.\n> NOTE: Environment variables that begin with `FRIGATE_` may be referenced in `{}`. eg `password: '{FRIGATE_MQTT_PASSWORD}`",
"examples": [
{
"detectors": {
"coral": {
"device": "usb",
"type": "edgetpu"
}
},
"cameras": {
"back": {
"ffmpeg": {
"inputs": [
{
"path": "rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2",
"roles": ["detect", "rtmp"]
}
]
},
"fps": 5,
"height": 720,
"width": 1280
}
},
"mqtt": {
"host": "mqtt.server.com"
}
}
],
"properties": {
"cameras": {
"additionalProperties": {
"properties": {
"best_image_timeout": {
"default": 60,
"type": "integer",
"title": "Best image timeout",
"description": "Timeout (in seconds) for highest scoring image before allowing it to be replaced by a newer image."
},
"clips": {
"additionalProperties": false,
"default": {},
"properties": {
"enabled": {
"description": "enables clips for the camera. This value can be set via MQTT and will be updated in startup based on retained value",
"default": true,
"type": "boolean"
},
"objects": {
"description": "Objects to save clips for. (default: all tracked objects)",
"default": ["person"],
"items": {
"type": "string",
"title": "Object name"
},
"type": "array",
"title": "Objects",
"uniqueItems": true
},
"post_capture": {
"description": "Number of seconds after the event to include in the clips",
"default": 5,
"title": "Post capture time",
"type": "integer"
},
"pre_capture": {
"description": "Number of seconds before the event to include in the clips",
"default": 5,
"title": "Pre capture time",
"type": "integer"
},
"required_zones": {
"description": "Restrict clips to objects that entered any of the listed zones (default: no required zones)",
"default": [],
"items": {
"type": "string",
"title": "Zone name"
},
"type": "array",
"title": "Required zones",
"uniqueItems": true
},
"retain": {
"description": "Camera override for retention settings (default: global values)",
"$ref": "#/definitions/retain",
"default": {}
}
},
"required": ["enabled"],
"title": "Clips",
"description": "Overrides and camera-specific settings for saving clips. For more information about clips, see [Clips](#properties-clips)",
"type": "object"
},
"ffmpeg": {
"description": "Up to 4 inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create clips from a higher resolution stream, or vice versa.",
"allOf": [
{
"properties": {
"inputs": {
"type": "array",
"items": {
"type": "object",
"allOf": [
{
"properties": {
"path": {
"type": "string",
"title": "RTSP/RTMP feed URI",
"description": "The full URI to the stream"
},
"roles": {
"items": {
"enum": ["detect", "rtmp", "record", "clips"],
"type": "string",
"title": "Role"
},
"description": "List of roles that Frigate should use on this stream",
"title": "Roles",
"type": "array",
"uniqueItems": true,
"description": "Each role can only be assigned to one input per camera."
}
},
"required": ["path", "roles"],
"type": "object"
},
{
"$ref": "#/definitions/ffmpeg"
}
]
},
"type": "array"
}
},
"required": ["inputs"],
"type": "object",
"examples": [
{
"ffmpeg": {
"inputs": [
{ "path": "rtsp://10.0.1.12:554/live", "roles": ["detect", "rtmp", "clips", "record"] }
]
}
},
{
"ffmpeg": {
"inputs": [
{ "path": "rtsp://10.0.1.12:554/sub_stream", "roles": ["detect", "rtmp"] },
{ "path": "rtsp://10.0.1.12:554/hq_stream", "roles": ["clips", "record"] }
]
}
}
]
},
{
"$ref": "#/definitions/ffmpeg"
}
],
"type": "object",
"title": "Camera ffmpeg inputs"
},
"fps": {
"descripton": "Desired fps for your camera for the input with the detect role. Recommended value of 5. Ideally, try to reduce the FPS output from the camera. Frigate will attempt to autodetect if not specified.",
"type": "integer"
},
"height": {
"descripton": "Height of the ffmpeg input used for the `detect` role.",
"type": "integer"
},
"motion": {
"additionalProperties": false,
"default": {},
"title": "Motion",
"description": "Camera level motion configuration",
"examples": [
{
"motion": {
"mask": "0,900,1080,900,1080,1920,0,1920"
}
},
{
"motion": {
"mask": ["0,0,0,100,100,100,100,0", "900,80,900,150,820,150,820,80"]
}
}
],
"properties": {
"mask": {
"$ref": "#/definitions/mask",
"description": "Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the video feed with _Motion Boxes_ enabled to see what may be regularly detected as motion. For example, you want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with _Motion Boxes_ enabled again."
}
},
"title": "Motion",
"type": "object"
},
"mqtt": {
"default": {},
"$ref": "#/definitions/snapshots",
"title": "MQTT",
"type": "object"
},
"objects": {
"additionalProperties": false,
"default": {},
"properties": {
"filters": {
"additionalProperties": {
"additionalProperties": false,
"properties": {
"filters": {
"$ref": "#/definitions/filters"
},
"mask": {
"description": "Object filter masks are used to filter out false positives for a given object type. These should be used to filter any areas where it is not possible for an object of that type to be. The bottom center of the detected object's bounding box is evaluated against the mask. If it is in a masked area, it is assumed to be a false positive. For example, you may want to mask out rooftops, walls, the sky, treetops for people. For cars, masking locations other than the street or your driveway will tell frigate that anything in your yard is a false positive.",
"$ref": "#/definitions/mask"
}
},
"type": "object"
},
"default": {},
"properties": {},
"title": "Object filters",
"type": "object"
},
"mask": {
"examples": ["0,0,1000,0,1000,200,0,200"],
"type": "string"
},
"track": {
"default": ["person"],
"items": {
"type": "string"
},
"type": "array"
}
},
"title": "Objects",
"type": "object"
},
"record": {
"additionalProperties": false,
"default": {},
"properties": {
"enabled": {
"default": false,
"type": "boolean"
},
"retain_days": {
"default": 30,
"type": "integer"
}
},
"title": "Recording",
"type": "object"
},
"rtmp": {
"additionalProperties": false,
"default": {},
"properties": {
"enabled": {
"default": true,
"type": "boolean"
}
},
"title": "RTMP re-stream",
"type": "object"
},
"snapshots": {
"additionalProperties": false,
"default": {},
"allOf": [{ "$ref": "#/definitions/snapshots" }, { "$ref": "#/definitions/retain" }],
"title": "Snapshots",
"type": "object"
},
"width": {
"descripton": "Width of the ffmpeg input used for the `detect` role.",
"type": "integer"
},
"zones": {
"additionalProperties": {
"additionalProperties": {
"coordinates": {
"$ref": "#/definitions/string-or-array"
},
"filters": {
"$ref": "#/definitions/filters",
"default": {}
}
},
"default": {},
"required": ["coordinates"],
"type": "object"
},
"default": {},
"description": "Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.\n\nDuring testing, draw_zones should be set in the config to draw the zone on the frames so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.\n\nTo create a zone, follow the same steps above for a \"Motion mask\", but use the section of the web UI for creating a zone instead.",
"properties": {},
"title": "Zones",
"type": "object"
}
},
"required": ["ffmpeg", "width", "height"],
"title": "Individual camera config",
"type": "object"
},
"description": "Each of your cameras must be configured. The following is the minimum required to register a camera in Frigate.",
"examples": [
{
"cameras": {
"back": {
"ffmpeg": {
"inputs": [
{
"path": "rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live?channel=1",
"roles": ["detect", "rtmp"]
},
{
"path": "rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live?channel=0",
"roles": ["clips", "record"]
}
]
},
"width": 1280,
"height": 720,
"fps": 5
}
}
}
],
"properties": {},
"title": "Cameras",
"type": "object"
},
"clips": {
"additionalProperties": false,
"default": {},
"description": "Frigate can save video clips without any CPU overhead for encoding by simply copying the stream directly with FFmpeg. It leverages FFmpeg's segment functionality to maintain a cache of video for each camera. The cache files are written to disk at `/tmp/cache` and do not introduce memory overhead. When an object is being tracked, it will extend the cache to ensure it can assemble a clip when the event ends. Once the event ends, it again uses FFmpeg to assemble a clip by combining the video clips without any encoding by the CPU. Assembled clips are are saved to `/media/frigate/clips`. Clips are retained according to the retention settings defined on the config for each object type.\n\nThese clips will not be playable in the web UI or in HomeAssistant's media browser unless your camera sends video as h264.",
"properties": {
"max_seconds": {
"default": 300,
"description": "Maximum length of time to retain video during long events. If an object is being tracked for longer than this amount of time, the cache will begin to expire and the resulting clip will be the last x seconds of the event.",
"type": "integer"
},
"retain": {
"$ref": "#/definitions/retain",
"default": {}
},
"tmpfs_cache_size": {
"description": "Size of tmpfs mount to create for cache files, eg `mount -t tmpfs -o size={tmpfs_cache_size} tmpfs /tmp/cache`.\n\n> Addon users must have Protection mode disabled for the addon when using this setting. \n\n> Also, if you have mounted a tmpfs volume through docker, this value should not be set in your config.",
"examples": ["256m", "512m"],
"type": "string"
}
},
"title": "Clips",
"type": "object"
},
"database": {
"additionalProperties": false,
"default": {},
"description": "Event and clip information is managed in a sqlite database at `/media/frigate/clips/frigate.db`. If that database is deleted, clips will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within HomeAssistant.\n\nIf you are storing your clips on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.\n\nThis may need to be in a custom location if network storage is used for clips.",
"properties": {
"path": {
"title": "Database path",
"description": "Absolute path to the location of your Frigate database",
"type": "string"
}
},
"title": "Database",
"type": "object"
},
"detect": {
"$ref": "#/definitions/detect",
"default": {},
"description": "Global object detection settings. These may also be defined at the camera level.",
"title": "Detect"
},
"detectors": {
"additionalProperties": {
"properties": {
"device": {
"default": "usb",
"description": "Device name as accepted via the [TensorFlow API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api)",
"type": "string"
},
"num_threads": {
"default": 3,
"description": "num_threads value passed to the tflite.Interpreter. _Note: this value is only used for CPU types",
"type": "number"
},
"type": {
"default": "edgetpu",
"enum": ["edgetpu", "cpu"],
"description": "Hardware detection device type",
"type": "string"
}
},
"required": ["type"],
"title": "Detector properties",
"type": "object"
},
"default": {
"coral": {
"device": "usb",
"type": "edgetpu"
}
},
"description": "The default config will look for a USB Coral device. If you do not have a Coral, you will need to configure a CPU detector. If you have PCI or multiple Coral devices, you need to configure your detector devices in the config file. When using multiple detectors, they run in dedicated processes, but pull from a common queue of requested detections across all cameras.\n\nFrigate supports edgetpu and cpu as detector types. The device value should be specified according to the Documentation for the TensorFlow Lite Python API.\n\n> Note: There is no support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection.",
"examples": [
{
"detectors": {
"coral": {
"device": "usb",
"type": "edgetpu"
}
}
},
{
"detectors": {
"coral1": {
"device": "usb:0",
"type": "edgetpu"
},
"coral2": {
"device": "usb:0",
"type": "edgetpu"
}
}
}
],
"properties": {},
"title": "Detectors",
"type": "object"
},
"environment_vars": {
"additionalProperties": {
"type": "string",
"title": "ENV var value"
},
"default": {},
"description": "This section can be used to set environment variables for those unable to modify the environment of the container (ie. within Hass.io).\n> IMPORTANT: You will need to restart Frigate for any changes to take effect",
"examples": [
{
"environment_vars": {
"EXAMPLE_VAR": "value"
}
}
],
"properties": {},
"title": "Environment variables",
"type": "object"
},
"ffmpeg": {
"$ref": "#/definitions/ffmpeg",
"description": "Arguments to apply as a default to all FFMPEG processes",
"title": "ffmpeg"
},
"logger": {
"additionalProperties": false,
"description": "Change the default log levels for troubleshooting purposes.\n\n> NOTE: all ffmpeg logs are sent as `error` level by default.",
"examples": [
{
"logger": {
"default": "warning",
"logs": {
"detector.<detector_name>": "debug",
"ffmpeg.<camera_name>.<role>": "error",
"frigate.app": "debug",
"frigate.edgetpu": "info",
"frigate.mqtt": "debug",
"watchdog.<camera_name>": "debug"
}
}
}
],
"properties": {
"default": {
"$ref": "#/definitions/log-level",
"default": "info"
},
"logs": {
"additionalProperties": {
"$ref": "#/definitions/log-level"
},
"properties": {},
"type": "object"
}
},
"title": "Logger",
"type": "object"
},
"model": {
"additionalProperties": false,
"default": {},
"properties": {
"height": {
"default": 320,
"type": "integer"
},
"width": {
"default": 320,
"type": "integer"
}
},
"required": ["width", "height"],
"title": "Model",
"type": "object"
},
"motion": {
"$ref": "#/definitions/motion",
"default": {},
"title": "Motion"
},
"mqtt": {
"additionalProperties": false,
"default": {},
"examples": [
{
"mqtt": {
"host": "10.0.1.123",
"password": "{FRIGATE_MQTT_PASSWORD}",
"user": "{FRIGATE_MQTT_USER}"
}
}
],
"properties": {
"client_id": {
"default": "frigate",
"description": "MQTT client ID – must be unique if you are running multiple instances",
"type": "string"
},
"host": {
"description": "MQTT host name or IP address",
"type": "string"
},
"password": {
"type": "string"
},
"port": {
"default": 1883,
"description": "MQTT port",
"type": "number"
},
"stats_interval": {
"default": 60,
"description": "Interval in seconds for publishing Frigate internal stats to MQTT. Available at `#/<topic_prefix>/stats`",
"type": "number"
},
"topic_prefix": {
"default": "frigate",
"description": "MQTT topic prefix – must be unique if you are running multiple instances",
"type": "string"
},
"user": {
"type": "string"
}
},
"required": ["host"],
"title": "MQTT",
"type": "object"
},
"objects": {
"$ref": "#/definitions/objects",
"title": "Objects",
"default": {}
},
"record": {
"additionalProperties": false,
"default": {},
"description": "24/7 recordings can be enabled and are stored at /media/frigate/recordings. The folder structure for the recordings is YYYY-MM/DD/HH/<camera_name>/MM.SS.mp4. These recordings are written directly from your camera stream without re-encoding and are available in HomeAssistant's media browser. Each camera supports a configurable retention policy in the config.",
"properties": {
"enabled": {
"default": false,
"type": "boolean"
},
"retain_days": {
"default": 30,
"type": "integer"
}
},
"title": "24/7 Recordings",
"type": "object"
},
"snapshots": {
"additionalProperties": false,
"default": {},
"description": "Frigate can save a snapshot image to `/media/frigate/clips` for each event named as `<camera>-<id>.jpg`.",
"examples": [
{
"snapshots": {
"enabled": true,
"retain": {
"default": 5,
"objects": {
"person": 10,
"dog": 1,
"car": 5
}
}
}
}
],
"properties": {
"enabled": {
"defualt": true,
"type": "boolean"
},
"retain": {
"$ref": "#/definitions/retain",
"default": {}
}
},
"title": "Snapshots",
"type": "object"
}
},
"required": ["mqtt", "detectors", "cameras"],
"title": "Frigate yaml configuration",
"type": "object"
}