Generative AI
Generative AI can be used to automatically generate descriptions based on the thumbnails of your tracked objects. This helps with Semantic Search in Frigate by providing detailed text descriptions to use as the basis of a search query.
Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the Explore view in the Frigate UI by clicking on a tracked object's thumbnail.
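Because Semantic Search is a prerequisite, it must be enabled at the top level of your config before Generative AI will work. A minimal sketch of that option, matching the Semantic Search documentation:

semantic_search:
  enabled: True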
Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
If the provider you choose requires an API key, you may either paste it directly into your configuration, or store it in an environment variable prefixed with FRIGATE_.
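As a sketch of the environment-variable route, assuming you run Frigate with Docker Compose (the key value shown is a placeholder to replace with your own):

services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      - FRIGATE_GEMINI_API_KEY=your-api-key-here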
genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash

cameras:
  front_camera: ...
  indoor_camera:
    genai: # <- disable GenAI for your indoor camera
      enabled: False
Ollama
Ollama allows you to self-host large language models and keep everything running locally. It provides a nice API over llama.cpp. It is highly recommended to host this server on a machine with an Nvidia graphics card or an Apple silicon Mac for best performance. Most 7B-parameter 4-bit vision models will fit inside 8 GB of VRAM. There is also a docker container available.
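As a sketch of the container route, the Compose service below follows Ollama's published Docker setup for an Nvidia GPU; it assumes the NVIDIA Container Toolkit is installed on the host (omit the deploy block to run on CPU):

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  ollama: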
Supported Models
You must use a vision capable model with Frigate. Current model variants can be found in their model library. At the time of writing, this includes llava, llava-llama3, llava-phi3, and moondream.
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
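Ollama downloads model weights on first use, so you may want to pull your chosen model ahead of time rather than stalling Frigate's first description request. For example, for llava:

ollama pull llava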
Configuration
genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434
  model: llava
Google Gemini
Google Gemini has a free tier allowing 15 queries per minute to the API, which is more than sufficient for standard Frigate usage.
Supported Models
You must use a vision capable model with Frigate. Current model variants can be found in their documentation. At the time of writing, this includes gemini-1.5-pro and gemini-1.5-flash.
Get API Key
To start using Gemini, you must first get an API key from Google AI Studio.
- Accept the Terms of Service
- Click "Get API Key" from the right hand navigation
- Click "Create API key in new project"
- Copy the API key for use in your config
Configuration
genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash
OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
Supported Models
You must use a vision capable model with Frigate. Current model variants can be found in their documentation. At the time of writing, this includes gpt-4o and gpt-4-turbo.
Get API Key
To start using OpenAI, you must first create an API key and configure billing.
Configuration
genai:
  enabled: True
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background.
Prompts can use variable replacements like {label}, {sub_label}, and {camera} to substitute information from the tracked object as part of the prompt.
You are also able to define custom prompts in your configuration.
genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434
  model: llava
  prompt: "Describe the {label} in these images from the {camera} security camera."
  object_prompts:
    person: "Describe the main person in these images (gender, age, clothing, activity, etc). Do not include where the activity is occurring (sidewalk, concrete, driveway, etc)."
    car: "Label the primary vehicle in these images with just the name of the company if it is a delivery vehicle, or the color make and model."
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
cameras:
  front_door:
    genai:
      prompt: "Describe the {label} in these images from the {camera} security camera at the front door of a house, aimed outward toward the street."
      object_prompts:
        person: "Describe the main person in these images (gender, age, clothing, activity, etc). Do not include where the activity is occurring (sidewalk, concrete, driveway, etc). If delivering a package, include the company the package is from."
        cat: "Describe the cat in these images (color, size, tail). Indicate whether or not the cat is by the flower pots. If the cat is chasing a mouse, make up a name for the mouse."
Experiment with prompts
Many providers also have a public-facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and experiment in the playground until the descriptions are to your liking, then update the prompt in Frigate (see the Ollama API sketch after this list).
- OpenAI - ChatGPT
- Gemini - Google AI Studio
- Ollama - Open WebUI
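If you run Ollama, you can also exercise a prompt directly against its HTTP API before committing it to your config. A minimal sketch, assuming Ollama is listening on localhost:11434 with llava pulled; the base64 image placeholder must be filled in with your own encoded thumbnail (for example via base64 -w 0 thumbnail.jpg on Linux):

curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "Describe the person in these images from the front_door security camera.",
  "images": ["<base64-encoded thumbnail>"],
  "stream": false
}'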