A new era for video analytics
‘Analytics’ seems to be a word on everybody’s lips in the security and surveillance sector at the moment, Petra. But before we get into why that is, could you give us a brief history of Axis and analytics?
Of course. Everybody knows Axis as a company which designs and manufactures the highest quality surveillance cameras, as we have done since we created the first network surveillance camera in 1996. With our focus on addressing our customers’ needs and use cases for surveillance, it wasn’t long after we created that first network camera that we saw the opportunity to add analytics to the camera. It was clear that an ability to analyze video information from the camera and create some sort of action or alert based on the result would immediately make our customers’ security and surveillance operations more efficient.
In 2000 we added video motion detection to our cameras, alerting operators and triggering video recording when movement was detected in an otherwise static scene. A few years later, the introduction of the AXIS 242S video encoder was revolutionary in allowing analytics to be applied directly to the camera’s video feed and creating the opportunity for partner application development. In 2009 we established AXIS Camera Application Platform (ACAP), which provided more structure and the chance for a broader set of partners to develop analytics applications for our cameras.
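The principle behind that early video motion detection can be illustrated with a minimal sketch: compare consecutive frames and raise an alert when enough pixels change. This is a generic frame-differencing example for illustration only, not Axis's actual implementation; the function and threshold names are hypothetical.

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, area_thresh=0.01):
    """Flag motion when enough pixels change between consecutive frames.

    Frames are 2-D uint8 grayscale arrays. pixel_thresh is the minimum
    per-pixel intensity change that counts as "changed"; area_thresh is
    the fraction of the frame that must change to raise an alert.
    """
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    changed_fraction = (diff > pixel_thresh).mean()
    return changed_fraction > area_thresh

# Synthetic example: a static scene, then an object enters.
static = np.zeros((120, 160), dtype=np.uint8)
moved = static.copy()
moved[40:80, 60:100] = 200  # a bright object appears in the scene

print(motion_detected(static, static))  # False: nothing changed
print(motion_detected(static, moved))   # True: change exceeds the threshold
```

A real product would add noise filtering and exclusion zones, but the core idea is the same: detect a significant change against an otherwise static scene and trigger an alert or recording.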
Today, the processing power of cameras, which now increasingly enables machine and deep learning, has transformed analytics capabilities. The foundation for effective video analytics is high-quality cameras with high-performance processing capabilities. And for us that starts with the advantages of designing our own chip, ARTPEC.
Why is designing your own chip so important for analytics?
Put simply, we can design a chip that’s 100% optimized for network video, and along with a number of other benefits, that means a hardware platform that is ideally suited for video analytics.
Since the first ARTPEC chip was released in 1999, it has formed the basis for many innovations and enhancements Axis has delivered to the industry, such as Axis Lightfinder, Axis Wide Dynamic Range (WDR), and Axis Zipstream. Essentially, ARTPEC provides the platform for delivering the highest-quality video, which is crucial for high-quality analytics.
Through designing our own chip, we have created surveillance cameras which are incredibly powerful processing devices, and which allow us to take full advantage of having increasingly intelligent analytics placed on the ‘edge’ of the network, within the cameras.
Analytics within the camera itself, close to the capture of the video, brings several benefits. It enables faster alerts, decisions, and responses, with no transmission delays. It consumes less bandwidth, storage, and server infrastructure, so it scales better. Because the data transmitted can be limited, it also helps keep sensitive data safe. And running analytics on uncompressed video at the edge means that no information is lost in compression.
Analytics at the edge means greater efficiency and effectiveness in the system. Alongside this, modern development tools are enabling hybrid architectures, making the best use of edge, cloud, and on-premises server environments. This means that edge devices can do the lion's share of the analytics, and the results can be combined with data from other sources and analyzed further on servers in the cloud or on-premises. Distributing processing throughout the system lowers costs and enables a better user experience with greater customer value.
While Axis has provided analytics for many years, is the growth in the area changing the nature of Axis as an organization?
I don’t think it’s changing our nature, and it certainly doesn’t change our vision of innovating for a smarter, safer world. The constant enhancements in our camera platform – with the ARTPEC chip at the foundation – enable us to develop more and improved embedded native analytics, as well as allowing our partners to create more applications. Again, this supports our goal of addressing customer needs and use cases, in security, safety and operational efficiency.
Our development engineers are constantly looking to expand the analytics capabilities of our platform. This relates to both enhancing and expanding our analytics at the edge, and to the greatly improved ability to search and analyze video after it has been captured. Both areas will see big steps forward in the near future. Much of this comes from the generation of more metadata at the edge alongside the video itself.
Ah, yes, ‘metadata’ – another term we’ve been hearing a lot recently. Can you tell us what it is and why it is so useful?
In simple terms, metadata is data about other data. In video surveillance, metadata describes information about what is being viewed in the video. For instance, the classification of objects in the scene – including vehicles and people – and the attributes associated with those objects, such as colors of vehicles and clothing or the direction of travel.
This can be incredibly valuable in searching through vast amounts of video, potentially allowing operators to search using queries such as “find me all video in the business district containing a red car between 18.00 and 22.00 on Wednesday 25th March”. Looking forward, it will also be central to spotting patterns and trends which will be valuable in organizational planning.
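A query like that reduces to filtering structured metadata records rather than scanning raw video. The sketch below shows the idea with an illustrative record format; the field names and values are hypothetical, not an Axis or VMS schema.

```python
from datetime import datetime

# Hypothetical metadata records, as a video management system might store
# them alongside the footage. Field names are illustrative only.
detections = [
    {"camera": "district-03", "object": "car", "color": "red",
     "time": datetime(2020, 3, 25, 19, 40)},
    {"camera": "district-03", "object": "person", "color": "blue",
     "time": datetime(2020, 3, 25, 19, 41)},
    {"camera": "district-07", "object": "car", "color": "red",
     "time": datetime(2020, 3, 25, 9, 15)},
]

def search(records, object_type, color, start, end):
    """Return detections matching object class, attribute, and time window."""
    return [r for r in records
            if r["object"] == object_type
            and r["color"] == color
            and start <= r["time"] <= end]

hits = search(detections, "car", "red",
              datetime(2020, 3, 25, 18, 0), datetime(2020, 3, 25, 22, 0))
print(len(hits))  # 1: only the red car seen at 19.40 falls in the window
```

Because the metadata is tiny compared with the video it describes, such searches run in seconds over footage that would take days to review manually.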
The potential for business intelligence is almost infinite, and we anticipate that our partners will explore this area and develop a significant number of applications that will lead to improvements in efficiency for our customers.
What do you see as the forthcoming developments that will move analytics forward?
I see innovation being driven by two factors, which are very much linked: further improvements in processing capabilities at the edge and with it the opportunity to create new analytics applications; and a focus on usability to ensure that the benefits of analytics are being fully realized.
We already have a number of cameras that benefit from deep learning-based analytics, or ‘artificial intelligence’ to use the broader term. With the next iteration of our ARTPEC chip, hardware-accelerated support for deep learning will find its way into far more of our cameras. As a result, cameras will be able to handle much higher throughputs in terms of image processing, compression and analytics, with great power efficiency.
While customer surveillance solutions will commonly feature a hybrid mix of environments – edge, and servers on-premises and in the cloud – high-performance deep-learning analytics at the edge will become pervasive. Of course, we’re still nowhere near human-like levels of intelligence and we must caution that analytics still comes with limitations – but the more accurate detections and richer data outputs of deep learning-based analytics will become an even more valuable tool for human operators. In short, safety and security will be enhanced and operational efficiency improved.
That said, analytics will only deliver value if it is effectively implemented and used – and making the work of customers and system integrators as straightforward as possible is always our aim. Central to this is making analytics as quick and easy to configure and use as possible, and to provide open interfaces and supporting tools.
While Axis itself will obviously look to make best use of the analytics platform that our cameras provide, it is our global network of software development partners who will be the true multipliers of analytics innovation. Indeed, innovation in ACAP will make the Axis camera platform available to a greater number of developers than ever before. The potential for computer vision applications based on the Axis platform is almost endless, and we’re excited to see what the imagination of our partner community will create.