Artificial intelligence on the edge: the opportunities for video surveillance

In the world of video surveillance, one of the primary benefits of edge computing will be the ability to undertake advanced analytics using artificial intelligence (AI) and deep learning within cameras themselves.

Firstly, what do we mean by “on the edge”?

The number of devices on the edge of our security networks is growing, and they are playing an ever more critical role in our safety and security. Edge computing means building more capability into the connected device itself, so that information processing power sits as close to the source as possible.

For a video surveillance network, this means more actions can be carried out on the cameras themselves. The role of AI, machine learning and deep learning in video surveillance is growing, so we’re able to ‘teach’ our cameras to be far more intuitive about what they are filming and analyzing in real time. For example, is the vehicle in the scene a car, a bus, or a truck? Is that a human or an animal by the building? Are those shadows or an object in the road?

These insights will reduce the amount of human input required to analyze data and make decisions. Ultimately, this should speed up response times – potentially saving lives – and provide valuable insights that can shape the future of our buildings, cities and transportation systems.

How can we transform video surveillance on the edge?

Currently, most analysis carried out on the edge of a surveillance network simply shows that something or someone is moving. It then takes further processing by video management systems (VMS) on centralized servers – and ultimately a human operator – to interpret exactly what that object is and whether it presents any threat or security risk.

To understand whether an object is a vehicle, a human, an animal or indeed pretty much anything, we can ‘train’ a camera system to detect and classify the object. In principle, this approach extends to an almost unlimited number of object classes and contexts.

Standard analytics would simply pick up that a vehicle has triggered an alert. With an intelligent deep learning layer on top of that, you can go into further detail: What type of vehicle is it? Is it in an area where it could cause problems, or is it on the hard shoulder and out of immediate danger? Is it a bus that has broken down and is likely to endanger people as they disembark?
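
As a rough illustration of how that layering might look, the sketch below shows a two-stage pipeline that could run on the camera itself: a lightweight motion trigger examines every frame, and the deep-learning classifier is only invoked when something moves. The `Detection` fields, class names and zones here are hypothetical placeholders for illustration, not an actual camera API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Detection:
    label: str         # e.g. "car", "bus", "truck", "person" (illustrative classes)
    confidence: float  # classifier confidence in [0, 1]
    zone: str          # e.g. "live_lane" or "hard_shoulder"

def analyze_frame(frame,
                  motion_detected: Callable[[object], bool],
                  classify: Callable[[object], Detection]) -> Optional[dict]:
    """Two-stage edge analytics: a cheap motion trigger runs on every frame,
    and the deep-learning classifier is invoked only when something moves."""
    if not motion_detected(frame):
        return None  # the "standard analytics" outcome: nothing moving, no alert

    detection = classify(frame)  # on-camera deep-learning inference
    # The classification layer turns "something moved" into a richer alert.
    return {
        "object": detection.label,
        "zone": detection.zone,
        "confidence": detection.confidence,
    }
```

In practice both stages would run on the camera’s own processing hardware, with the classifier trained on the object classes relevant to that deployment; the point of the sketch is simply how the extra detail is layered on top of a basic motion alert.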

The benefits of analytics on the edge

The greater accuracy of edge analytics – and the ability to distinguish between multiple classes of object – immediately reduces the rate of false positives. With that comes a related reduction in the time and resources needed to investigate them. More proactively, edge analytics can enable a more appropriate and timely response.

For example, running AI analytics on the edge could identify objects on a motorway and alert drivers. But the ability deep learning brings to distinguish between a human and a vehicle can help determine the severity of the warning issued to drivers. If cameras detected a person in danger on the road, they could automatically activate signage to slow traffic and alert the emergency services.
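
A minimal sketch of that decision logic, assuming hypothetical class names and action identifiers rather than any real signage or dispatch API, might look like this:

```python
def plan_response(object_class: str, in_live_lane: bool) -> dict:
    """Map what the camera believes it is seeing, and where, to a
    warning severity and a set of follow-up actions."""
    if object_class == "person" and in_live_lane:
        # A person in a live lane is the highest-severity case:
        # slow the traffic and call for help immediately.
        return {"severity": "critical",
                "actions": ["activate_slow_signage", "notify_emergency_services"]}
    if object_class in ("car", "bus", "truck") and in_live_lane:
        return {"severity": "high", "actions": ["activate_hazard_signage"]}
    # Anything on the hard shoulder is out of immediate danger.
    return {"severity": "low", "actions": ["log_event"]}

print(plan_response("person", in_live_lane=True))
```

The specific rules are not the point; what matters is that the severity of the response can be decided on the camera, in the moment, from the richer classification.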

Over time, the data behind these analytics could reveal trends that would be of use not just for traffic management and planning but also for other agencies, such as those with an interest in wildlife behavior and conservation. Being able to differentiate the type of traffic – pedestrians, cyclists, motorists, commercial vehicles – provides valuable trend insights that help civil engineers plan the smart cities of the future.

Turning raw data into actionable analytics insight

Another key benefit of edge analytics is that the analysis takes place on the highest-quality video footage, as close to the source as possible. In a traditional model – where analytics takes place on a server – video is often compressed before being transferred, meaning the analysis is undertaken on degraded-quality video.

In addition, when analytics is centralized on a server, every camera added to the solution means more data transferred across the network and, ultimately, more servers needed to handle the analytics. Deploying powerful analytics at the edge means that only the most relevant information is sent across the network, reducing the burden on bandwidth and storage.
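
As a simple sketch of that trade-off – assuming illustrative field names rather than any real VMS event schema – the camera might send a compact event record only when its on-board analytics finds something relevant, rather than streaming every frame:

```python
import json

def build_event(detection: dict, camera_id: str, timestamp: str) -> bytes:
    """Serialize only the metadata an operator or VMS actually needs."""
    event = {
        "camera": camera_id,
        "time": timestamp,
        "object": detection["label"],        # e.g. "truck"
        "zone": detection["zone"],           # e.g. "hard_shoulder"
        "confidence": detection["confidence"],
    }
    return json.dumps(event).encode("utf-8")

payload = build_event(
    {"label": "truck", "zone": "hard_shoulder", "confidence": 0.91},
    camera_id="cam-42",
    timestamp="2024-05-01T08:30:00Z",
)
# A few hundred bytes per event, versus a continuous stream of raw video.
print(len(payload), "bytes")
```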
