What factors should you consider for optimal video analytics performance?

Timo Sachse

AI continues to be hailed as the technology that will augment and improve human performance in many sectors, and the video surveillance industry is no exception. AI-based analytics is increasingly being used to quickly process large amounts of data and trigger actions. These functions help support a security team monitoring large and changing scenes, such as motorways or perimeters, by identifying objects of interest and flagging those which require action.

In theory, this sounds ideal and highly beneficial, but when the technology is deployed there are a number of factors which must be examined to ensure high quality results. These include camera hardware, video quality, illumination level, as well as camera configuration, position, and direction.

Does the camera environment and position support its function?

Image quality is often equated with a camera's high resolution and light sensitivity, but other factors are just as influential for the actual usability of an image or a video. For example, the best quality video stream from the most expensive surveillance camera can be useless if the scene is not sufficiently lit at night, if the camera has been redirected, or if the system connection is broken.

The placement of the camera should be carefully considered before deployment. For video analytics to perform as expected, the camera needs to be positioned to enable a clear view, without obstacles, of the intended scene. Image usability may also depend on the use case. Video that looks good to a human eye may not have the optimal quality for the performance of a video analytics application. In fact, many image processing methods – such as noise reduction methods – that are commonly used to enhance video appearance for human viewing, are less optimal when using video analytics.

Modern cameras often come with integrated IR illumination which enables them to work in complete darkness. This is positive as it may enable cameras to be placed on sites with difficult lighting conditions and reduce the need for installing additional illumination. However, if heavy rain or snowfall is expected on a site, it is highly recommended not to rely on light coming from the camera, or from a location very close to the camera, due to problems with reflections.

Is the camera at the right distance from the scene?

It is difficult to determine a maximum detection distance of an AI-based analytics application — an exact datasheet value in meters or feet can never be the whole truth. Image quality, scene characteristics, weather conditions, and object properties such as color and brightness have a significant impact on the detection distance.

This also depends on the speed of the objects to be detected. To achieve accurate results, a video analytics application needs to “see” the object during a sufficiently long period of time. How long that period needs to be depends on the processing performance (framerate) of the platform: the lower the processing performance, the longer the object needs to be visible in order to be detected. If the camera’s shutter time is not well matched with the object speed, motion blur in the image may also lower the detection accuracy.

Fast objects are more easily missed when they pass close to the camera. A running person located far from the camera, for example, might be well detected, while a person running very close to the camera at the same speed may be in and out of the field of view so quickly that no alarm is triggered.
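The effect above can be illustrated with a back-of-envelope calculation. The sketch below assumes an illustrative 60° horizontal field of view and a runner at 5 m/s; the specific numbers are examples, not figures from any particular camera.

```python
import math

def time_in_view(distance_m: float, speed_mps: float,
                 hfov_deg: float = 60.0) -> float:
    """Seconds an object crossing the scene stays in the horizontal field of view."""
    # Width of the visible scene at the given distance from the camera.
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return scene_width_m / speed_mps

# A person running at 5 m/s, far from and close to the camera:
far = time_in_view(distance_m=30, speed_mps=5)   # roughly 6.9 s in view
near = time_in_view(distance_m=3, speed_mps=5)   # roughly 0.7 s in view
print(f"far: {far:.2f} s, near: {near:.2f} s")
```

If the analytics platform needs the object visible for, say, half a second at its processing framerate, the distant runner is comfortably within reach while the close one barely clears the threshold.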

In analytics based on movement detection, objects moving directly towards the camera, or away from it, present another challenge. Detection will be especially difficult for slow-moving objects, which cause only very small changes in the image compared to movement across the scene.

How are the alarms and recording set up?

Object analytics perform optimally only when their listed preconditions are met. In other cases, they might miss important events. If it’s not absolutely certain that all conditions will be met at all times, it is recommended to take a conservative approach and set up the system so that a specific object classification is not the only alarm trigger. This will cause more false alarms, but also reduce the risk of missing something important.

There is an obvious need for a reliable object classification to filter out unwanted alarms. But the recording solution should be set up to rely on other factors in addition to the object classification. In the case of a missed real alarm, this setup allows you to assess, from the recording, the reason for missing the alarm and then to improve the overall installation and configuration.
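The conservative setup described above can be sketched as a simple trigger policy. This is a hypothetical illustration, not the API of any specific VMS; the event fields and class names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    motion_detected: bool             # basic pixel-change motion detection
    classified_object: Optional[str]  # e.g. "person", "vehicle", or None

def should_record(event: Event) -> bool:
    # Conservative: record on any motion, not only on a confirmed class,
    # so a missed classification can still be reviewed afterwards.
    return event.motion_detected or event.classified_object is not None

def should_alarm(event: Event) -> bool:
    # Alarms stay filtered by classification to keep false alarms low.
    return event.classified_object in {"person", "vehicle"}

evt = Event(motion_detected=True, classified_object=None)
print(should_record(evt), should_alarm(evt))  # True False
```

The key design choice is that recording and alarming use different conditions: recording is broad so evidence survives a missed classification, while alarms stay narrow to avoid fatiguing operators.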

How well is the solution maintained?

It’s essential for the surveillance installation to be regularly maintained. Physical inspections are recommended, not only viewing the video through the Video Management Software (VMS) interface, in order to discover and remove anything that might be blocking the field of view. This is important also in standard, recording-only installations, but is even more critical when using analytics.

In the context of basic video motion detection, a typical obstacle such as a spider’s web that sways in the wind could increase the number of false alarms, resulting in a higher storage consumption than necessary. With object analytics, the web would basically create an exclude zone in the detection area. Its threads would obscure objects and greatly reduce the chance of detection and classification.

Dirt on the front glass or bubble of the camera is unlikely to cause problems during daytime. But in low-light conditions, light that hits a dirty bubble from the side, for example from the headlights of a car, can cause unexpected reflections that may decrease detection accuracy.

Scene-related maintenance is equally important as camera maintenance. A simple before-and-after image comparison will reveal potential problems. What did the scene look like when the camera was deployed and what does it look like today? Is there a need to adjust the detection zone? Should the camera’s field of view be adjusted, or should the camera be moved to a different location?
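A before-and-after comparison can be as simple as measuring the pixel difference between a reference frame captured at deployment and a current frame. The sketch below uses NumPy on synthetic grayscale frames; the threshold value is an arbitrary assumption and would need tuning per site.

```python
import numpy as np

def scene_changed(reference: np.ndarray, current: np.ndarray,
                  threshold: float = 15.0) -> bool:
    """Flag a scene for review when the mean absolute pixel difference
    against the deployment-time reference image exceeds a threshold."""
    # Cast to a wider signed type so subtraction of uint8 values can't wrap.
    diff = np.abs(reference.astype(np.int16) - current.astype(np.int16))
    return float(diff.mean()) > threshold

# Synthetic 8-bit grayscale frames for illustration:
reference = np.full((120, 160), 100, dtype=np.uint8)
unchanged = reference.copy()
obstructed = reference.copy()
obstructed[:, :80] = 20  # e.g. half the view blocked by new foliage

print(scene_changed(reference, unchanged))   # False
print(scene_changed(reference, obstructed))  # True
```

In practice the comparison would run on frames captured under similar lighting, and a flagged change would prompt a review of the detection zone, field of view, or camera placement rather than an automatic adjustment.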

A solution which consistently performs optimally

Investing in video analytics will yield many security benefits if implemented correctly and assessed regularly. As there are a number of factors which can affect performance, security personnel must remember that these solutions don’t fall under the category of ‘set up and forget’. Instead, an approach which utilises continuous evaluation will be needed to ensure the end results meet the business’s objectives and provide a good ROI.

To find out more, download the whitepaper ‘AI in video analytics: Considerations for analytics based on machine learning and deep learning’.