AXIS Object Analytics

Solution overview

About the application

AXIS Object Analytics detects, classifies, and counts moving objects, specifically humans or vehicles. You can set up scenarios with different conditions for detection, such as objects that move or stay longer than a set time within a predefined area or that cross a defined line. When objects are detected or counted, Axis network devices or third-party software can perform different actions, such as record video, play an audio message, or alert security staff.

Considerations

For best results, the camera must be correctly mounted. There are also requirements on the scene, the image, and the objects. The considerations in this chapter are generic. For product-specific considerations, see the user manual for your product at help.axis.com.

This image illustrates a correctly mounted camera.

  1. Mounting height
  2. Tilt
  3. Detection area
  4. Minimum detection distance
  5. Maximum detection distance

Mounting position

If you mount the camera so that it looks straight down from above, it's difficult for the application to classify objects.

Tilt

The camera must be sufficiently oriented towards the ground so that the center of the image is below the horizon. Mount the camera so that the minimum detection distance is longer than half of the camera’s mounting height (minimum detection distance > camera mounting height / 2).
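As a worked example of this rule (the mounting height is an assumed figure, not a product requirement):

  # The rule above: minimum detection distance > mounting height / 2.
  mounting_height_m = 6.0                                   # assumed example value
  required_min_detection_distance_m = mounting_height_m / 2
  print(required_min_detection_distance_m)                  # 3.0 -> objects closer than 3 m may not be detected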

Detection area

An object’s point of detection must be inside the detection area. For a human, the point of detection is at the feet; for a vehicle, it’s at the center.

Maximum detection distance

The maximum detection distance depends on:

  • Camera type and model

  • Camera lens. A higher focal range allows for a longer detection distance.

  • Weather. For example, heavy rain or snow can affect the detection distance and accuracy.

  • Light. Detection accuracy and range can be affected by insufficient illumination.

  • Camera load

We recommend that you use AXIS Site Designer to determine the maximum detection distance for different camera models at your site.

Roll

The camera’s roll angle must be nearly equal to zero, which means that the image should be level with the horizon.

Field of view

The camera’s field of view must be fixed.

Vibrations

The application tolerates small camera vibrations, but you get the best performance when the camera is not subject to vibrations.

Object size

For a human to be detected, the minimum height is 4% of the total image height. For a vehicle, the minimum height is 3% of the total image height. However, this requires perfect image conditions and no obstructions to the view. To minimize the risk of missed detections, we recommend a height of at least 8% for humans and 6% for vehicles.
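To translate these percentages into pixels, multiply them by the vertical resolution of the stream. A minimal sketch, assuming a 1080-pixel-high image:

  # Minimum object heights from the percentages above (assumed 1080 px image height).
  image_height_px = 1080

  min_human_px = 0.04 * image_height_px            # 43.2 px, absolute minimum
  min_vehicle_px = 0.03 * image_height_px          # 32.4 px, absolute minimum
  recommended_human_px = 0.08 * image_height_px    # 86.4 px, recommended
  recommended_vehicle_px = 0.06 * image_height_px  # 64.8 px, recommended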

Object visibility

Detection accuracy can be affected:

  • if objects are only partially visible due to, for example, foliage. It’s particularly important that characteristic features, such as legs or wheels, are visible.

  • when the scene is crowded with objects that frequently overlap each other, for example in traffic congestion or in a parking lot.

Contrast

  • There needs to be a certain level of contrast between objects and the background. Fog, direct light shining on the camera, or an overly noisy image can cause contrast issues. You can increase the level of illumination and adjust the image settings to improve the level of contrast.
  • When you use a day-and-night camera with artificial lighting, we recommend at least 50 lux in the entire detection area.

  • When you use built-in IR lighting, the maximum detection distance depends on the camera and the environment.

Expected movement of objects in the scene

Objects that approach the camera in a straight line need to move for a longer time before they get detected compared to objects that move perpendicular to the camera’s field of view.

Human pose

Humans need to move in a somewhat upright position.

Object motion

Objects need to move within the scene for at least 2 seconds.

Recommended image settings

Before you start to use the application, we recommend that you turn on Forensic WDR and barrel distortion correction, if they are available for your camera.

Barrel distortion is a lens effect where straight lines appear increasingly bent closer to the edges of the frame.
Conditions where detections can be delayed or missed

Note

These conditions are not relevant for radar-video fusion cameras.

  • Fog

  • Direct light shining on the camera

  • Inadequate light

  • Overly noisy image

Situations that can trigger false alarms

  • Partially hidden people or vehicles. For example, a small van that appears from behind a wall can look like a person, since the vehicle is tall and narrow.

  • Insects on the camera lens. Note that day-and-night cameras with infrared spots attract insects and spiders.

  • A combination of car headlights and heavy rain.

  • Human-size animals.

  • Strong light causing shadows.

Get started

  1. Log in to the device interface as an administrator and go to Apps > AXIS Object Analytics.

  2. Start the application and click Open.

  3. In the welcome screen, click Step-by-step to follow the recommended setup procedure.

  4. In Considerations, read through the information.

  5. Click + New scenario.

  6. Select what you want your scenario to do:

    • Object in area: Detects objects that move inside a defined area.

    • Line crossing: Detects objects that cross a defined line.

    • Time in area: Detects objects that stay in an area too long.

    • Crossline counting: Counts objects that cross a defined line.

    • Occupancy in area: Estimates the number of objects within a defined area at any given time.

    To learn more about the different scenarios, see Area scenarios and Line crossing scenarios.

  7. Select the type of object you want the application to detect.

    Read more about Classification of objects.

  8. For PTZ cameras, you can choose to restrict detection to a specific preset position. Select it from the list.

  9. Configure your scenario.

    To find out how to adjust the default line or include area, see Adjust virtual line or area.

  10. Verify your settings and click Finish.

You have now created a scenario. To rename or modify it, click Open.

To create more scenarios, click + New scenario.

For detailed instructions, see:

  • Create a scenario: object in area

  • Create a scenario: time in area

  • Create a scenario: occupancy in area

  • Create a scenario: line crossing

  • Create a scenario: crossline counting

Adjust virtual line or area

  • To reshape a virtual line or area, click and drag one of the anchor points.

  • To move a virtual line or area, click and drag.

  • To remove a corner, right-click the corner.

Virtual line

  • To reset the virtual line to its default size, click Scene > Reset line.

  • To change the direction that objects should move to be detected, click Scene > Change trigger direction. The red arrows next to the line show the current direction. Actions trigger when objects cross the line in the direction of the arrows.

Area

  • To reset the include area to its default size, click Scene > Reset area.

  • To create an area inside the include area where you don’t want objects to be detected, click Scene > Add exclude area.

Configure the application

Modify a scenario

To modify a scenario, click Scenarios and click Open in the scenario card.

  • To rename the scenario, click the edit icon.

  • To change what type of objects to detect, click Triggering objects.

  • Note

    If you select Any motion, the application doesn’t classify objects. Instead, the application detects any object that moves in the scene. It can, for example, be animals, swaying foliage, flags, or shadows. To ignore small objects or objects that only appear for a short time, you can use filters. For more information, see Filters.

  • In an object in area scenario: To allow objects to stay inside the include area for a certain time before the application sends an event, click Triggering objects and turn on Time in area. Set the allowed time.

    • You can use the advanced setting Keep the rule active as long as the object is tracked when you create a rule in the device’s web interface, and the rule has an action with the option "...while the rule is active". This will make the rule stay active as long as the object is tracked and within the include area, and not only for the duration of the alarm. For an example of how to set this up, see Record video when a human stays too long in an area.

  • To adjust the virtual line or area, click Scene.

  • In a crossline counting scenario:

    • To reset counts on a daily basis, click Crossline counting and turn on Reset counts at midnight.

    • To reset counts once, click Crossline counting and click Reset counts.

    • Note

      The application stores counting data for 35 days, regardless of your type of storage.

  • In an occupancy in area scenario:

    • To trigger actions based on occupancy levels in the area of interest, set up an Occupancy threshold.

    • To trigger actions only when the occupancy threshold has been met without interruption for a set time, set the number of seconds in Trigger action after set time (see the sketch below).
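Conceptually, Trigger action after set time behaves like a condition that must hold continuously before the event fires; if the count drops below the threshold, the timer resets. A minimal sketch of that logic in Python (the get_count callable and all names are assumptions for illustration, not the application's API):

  import time

  def wait_for_sustained_occupancy(get_count, threshold=50, hold_seconds=60):
      """Return once get_count() has exceeded threshold continuously for hold_seconds."""
      exceeded_since = None
      while True:
          if get_count() > threshold:
              if exceeded_since is None:
                  exceeded_since = time.monotonic()   # threshold just exceeded: start timing
              elif time.monotonic() - exceeded_since >= hold_seconds:
                  return                              # condition held long enough: trigger the action
          else:
              exceeded_since = None                   # count dropped below: reset the timer
          time.sleep(1)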

Calibrate perspective

Note

It’s not possible to calibrate the perspective on all types of devices, for example certain panoramic cameras.

If the scene has a significant depth, you need to calibrate the perspective to remove false alarms due to small objects. During calibration, the application compares the height of the objects as they appear in the image with the actual heights of the corresponding physical objects. The application uses the calibrated perspective to calculate the object size.

Place vertical bars in the image to calibrate perspective. The bars represent physical objects at different distances from the camera.

  1. Go to Settings > Advanced > Perspective and click +.

  2. In the live view, choose two objects of the same known height that are located on the ground, at different distances from the camera.

    You can use, for example, fence poles or a human.

  3. Place the bars by the objects and adjust the length of each bar to the height of the object.

  4. Select the scenarios you want to apply the perspective to.

  5. Enter the height of the objects in Perspective bar height.

  6. Click Save.

Example

If there is a fence with 2-meter-high poles extending from the camera towards the horizon, position the bars at the fence poles, adjust their lengths, and enter 200 cm (6 ft 7 in) in the fields.

Important

Make sure the bars don’t overlap each other in height.
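Conceptually, the bars give the application reference points that relate pixel height to position in the image. The sketch below shows one simple way such a calibration could be used to estimate an object's real-world height; it illustrates the idea only and is not Axis's actual algorithm:

  def estimate_real_height_cm(obj_px_height, obj_y, bar1, bar2, bar_height_cm=200):
      """Estimate an object's physical height from its pixel height.

      bar1 and bar2 are (y_position, pixel_height) pairs for the two calibration
      bars; bar_height_cm is their known physical height. Illustrative linear
      model only.
      """
      (y1, h1), (y2, h2) = bar1, bar2
      # Interpolate the expected pixel height of a bar-sized object at obj_y.
      t = (obj_y - y1) / (y2 - y1)
      expected_px = h1 + t * (h2 - h1)
      # Scale the known bar height by the object's relative pixel height.
      return bar_height_cm * obj_px_height / expected_px

  # Example: 200 cm bars appear 120 px tall near the camera and 40 px far away.
  print(estimate_real_height_cm(80, obj_y=0.5, bar1=(0.0, 120), bar2=(1.0, 40)))  # 200.0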

Add burnt-in metadata overlays to video streams

To show the detected event in the live and recorded video stream, turn on metadata overlay. When you turn on metadata overlay, the application shows:

  • A rectangle around detected objects.

  • The area or line of the scenario where the object was detected.

  • For crossline counting: a table with the accumulated count per object type.

  • For occupancy in area: a table with the estimated count per object type at the given time.

If you turn on trajectories, the application also shows a line that outlines the path that an object has taken.

If several scenarios trigger at the same time, overlays are shown for all of them in all streams with the selected resolution.

Important

The metadata overlays are burnt into the video stream at the selected resolution. You can’t remove them from recorded video.

Note

If you use view areas, the metadata overlays only appear in the first view area. The default name of the first view area is View area 1.

  1. In the application’s webpage, go to Settings > Advanced and, depending on your camera:

    • Turn on Metadata overlay.

    • Under Metadata overlay, select in which resolution burnt-in metadata overlays should appear. You can only select one resolution and the setting applies to all scenarios.

  2. To show the path an object has taken, select Trajectories.

Restrict detection to a PTZ preset position

For PTZ cameras, you can restrict detection to a specific preset position.

  1. Go to Scenarios and click Open in a scenario card, or click + to create a new scenario.

  2. Click Scene and select a preset position from the list.

Note

Each time the preset position changes, the application needs to recalibrate. We recommend that you wait at least 15 seconds before you change between preset positions in a guard tour.

Set up rules for events

To learn more, check out our guide Get started with rules for events.

Record video when an object gets detected

This example explains how to set up the Axis device to record video to an SD card when the application detects an object.

  1. In the device’s web interface, go to Apps and make sure the application is started.

  2. To check that the SD card is mounted, go to System > Storage.

  3. Go to System > Events and add a rule.

  4. Type a name for the rule.

  5. In the list of conditions, under Application, select the application scenario. To trigger the same action for all scenarios, select Object Analytics: Any Scenario.

  6. In the list of actions, under Recordings, select Record video.

  7. In the list of storage options, select SD-DISK.

  8. Select a Camera and a Stream profile.

    To show metadata overlays, make sure metadata overlay is turned on in the application for the same resolution as in the stream profile.

  Note

    We don’t recommend using a time in area scenario to trigger recordings if the time an object is allowed to stay inside the include area is more than 30 seconds. The reason is that it’s difficult to use a prebuffer time longer than 30 seconds, which you need if you want to see what happened before the object was detected.

  9. If you want to start the recording before the object was detected, enter a Prebuffer time.

  10. Click Save.

  11. To test the rule, go to the application’s webpage and open the scenario. Click Test alarm. This generates an event, as if the scenario had triggered for real. If you have turned on metadata overlays, a red or blue rectangle appears.

Record video when a human stays too long in an area

This example explains how to set up an Axis device to record video to an SD card when the application detects a human that stays too long in a defined area.

In the device’s web interface:

  1. Go to Apps and make sure that the application is started.

  2. Go to System > Storage and check that the SD card is mounted.

In AXIS Object Analytics:

  1. In Scenarios, click + New scenario.

  2. Select Time in area and click Next.

  3. Select Human and click Next.

  4. Adjust the area of interest according to your needs.

  5. Under Time in area settings, set the time during which the human is allowed to stay in the area.

  6. Click Finish.

  7. Open the scenario you just created.

  8. Go to Triggering objects > Time in area > Advanced and click Keep the rule active as long as the object is tracked.

    This makes it possible to keep the rule that you create in the device’s web interface active as long as the object is tracked, and not only for the duration of the alarm.

In the device’s web interface:

  1. Go to System > Events and add a rule.

  2. Type a name for the rule.

  3. In the list of conditions, under Application, select the application scenario.

  4. In the list of actions, under Recordings, select Record video while the rule is active.

  5. In the list of storage options, select SD-DISK.

  6. Select a Camera and a Stream profile.

    To show metadata overlays, make sure metadata overlay is turned on in the application for the same resolution as in the stream profile.

  Note

    We don’t recommend using a time in area scenario to trigger recordings if the time an object is allowed to stay inside the include area is more than 30 seconds. The reason is that it’s difficult to use a prebuffer time longer than 30 seconds, which you need if you want to see what happened before the object was detected.

  7. If you want to start the recording before the object was detected, enter a Prebuffer time.

  8. Click Save.

In AXIS Object Analytics:

  1. To test the rule, open the scenario and click Test alarm. This generates an event, as if the scenario had triggered for real.

Send an email when 100 vehicles have passed

With crossline counting and the passthrough threshold functionality, you can get notified every time a user-defined number of objects have crossed the line.

This example explains how to set up a rule to send an email every time 100 vehicles have passed.

Before you start

  • Create an email recipient in the device interface.

In AXIS Object Analytics:

  1. In Scenarios, click + New scenario.

  2. Select Crossline counting and click Next.

  3. Clear Human from the listed object types and click Next.

  4. Update the name of the scenario to Count vehicles.

  5. Adjust the virtual line according to your needs.

  6. Turn on Passthrough threshold.

  7. In Number of counts between events, type 100.

  8. Click Finish.

In the device’s web interface:

  1. Go to System > Events and add a rule.

  2. Type a name for the rule.

  3. In the list of conditions, under Application, select Object Analytics: Count vehicles passthrough threshold reached.

  4. In the list of actions, under Notifications, select Send notification to email.

  5. Select a recipient from the list.

  6. Type a subject and a message for the email.

  7. Click Save.

Activate a strobe siren when more than 50 objects are in a defined area

With occupancy in area and the passthrough threshold functionality, you can trigger actions when a user-defined number of objects stay in an area.

This example explains how to connect a camera to AXIS D4100-E Network Strobe Siren over MQTT. When AXIS Object Analytics detects that more than 50 humans have stayed in a defined area for one minute, the camera will trigger an action that activates a profile in the strobe siren.

Before you start

  • Create a profile in the strobe siren.

  • Set up an MQTT broker and get the broker’s IP address, username, and password.

In AXIS Object Analytics:

  1. In Scenarios, click + New scenario.

  2. Select Occupancy in area and click Next.

  3. Select Human and click Next.

  4. Update the name of the scenario to Max 50.

  5. Adjust the area of interest according to your needs.

  6. Turn on Occupancy threshold.

  7. Set Number of objects to More than 50.

  8. Set Trigger action after set time to 60 seconds.

  9. Click Finish.

Set up the MQTT client in the camera’s web interface:

  1. Go to System > MQTT > MQTT client > Broker and enter the following information:

    • Host: Broker IP address

    • Client ID: For example Camera 1

    • Protocol: The protocol the broker is set to

    • Port: The port number used by the broker

    • The broker Username and Password

  2. Click Save and Connect.

Create two rules for MQTT publishing in the camera’s web interface:

  1. Go to System > Events > Rules and add a rule.

    This rule will activate the strobe siren.

  2. Enter the following information:

    • Name: Threshold alarm

    • Condition: Applications: Max 50 threshold alarm changed

    • Action: MQTT > Send MQTT publish message

    • Topic: Threshold

    • Payload: On

    • QoS: 0, 1, or 2

  3. Click Save.

  4. Add another rule with the following information:

    This rule will deactivate the strobe siren.

    • Name: No threshold alarm

    • Condition: Applications: Max 50 threshold alarm changed

      • Select Invert this condition.

    • Action: MQTT > Send MQTT publish message

    • Topic: Threshold

    • Payload: Off

    • QoS: 0, 1, or 2

  5. Click Save.

Set up the MQTT client in the strobe siren’s web interface:

  1. Go to System > MQTT > MQTT client > Broker and enter the following information:

    • Host: Broker IP address

    • Client ID: Siren 1

    • Protocol: The protocol the broker is set to

    • Port: The port number used by the broker

    • Username and Password

  2. Click Save and Connect.

  3. Go to MQTT subscriptions and add a subscription with the following information:

    • Subscription filter: Threshold

    • Subscription type: Stateful

    • QoS: 0, 1, or 2

  4. Click Save.

Create a rule for MQTT subscriptions in the strobe siren’s web interface:

  1. Go to System > Events > Rules and add a rule.

  2. Enter the following information:

    • Name: Motion detected

    • Condition: MQTT > Stateful

    • Subscription filter: Threshold

    • Payload: On

    • Action: Light and siren > Run light and siren profile while the rule is active

    • Profile: Select the profile you want to be active.

  3. Click Save.
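To verify that the Threshold messages actually reach the broker before you involve the siren, you can listen on the same topic with a small test client. A minimal sketch, assuming the paho-mqtt 1.x Python library and placeholder broker details; the topic, payloads, and QoS follow the rules above:

  import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

  def on_message(client, userdata, msg):
      # Expect payload "On" when the occupancy threshold alarm is raised,
      # and "Off" when it clears, per the two camera rules above.
      print(f"{msg.topic}: {msg.payload.decode()}")

  client = mqtt.Client(client_id="debug-listener")
  client.username_pw_set("broker-user", "broker-password")  # placeholder credentials
  client.on_message = on_message
  client.connect("192.0.2.10", 1883)    # placeholder broker IP and default MQTT port
  client.subscribe("Threshold", qos=1)  # the topic used by the camera rules
  client.loop_forever()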

Learn more

Classification of objects

The application can classify two types of objects: humans and vehicles. The application shows a rectangle around classified objects. Objects classified as humans get a red rectangle, and objects classified as vehicles get a blue rectangle.

For cameras with deep learning, vehicles can be further categorized into trucks, buses, cars, and bikes.

If you use the time in area functionality, the rectangle is yellow until the time condition has been fulfilled. If the object then stays inside the include area for another 30 seconds, the rectangle becomes dashed.

Each classified object has a point of detection that the application uses to decide if an object is inside or outside an include area or when it crosses a virtual line. For a human, the point of detection is at its feet, and for a vehicle it's at its center. If a human's feet or a vehicle's center gets obstructed from the camera's view, the application makes an assumption of the location of the point of detection.

Note

We recommend that you take the assumed location of objects’ point of detection into consideration when you draw the include area or virtual line.
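The inside-or-outside decision can be pictured as a point-in-polygon test on the object's point of detection. A conceptual sketch in Python (plain ray casting; not the application's actual code):

  def point_in_polygon(x, y, polygon):
      """Ray-casting test: is the point (x, y) inside the polygon?
      polygon is a list of (x, y) corners, like an include area with up to 10 corners."""
      inside = False
      n = len(polygon)
      for i in range(n):
          x1, y1 = polygon[i]
          x2, y2 = polygon[(i + 1) % n]
          if (y1 > y) != (y2 > y):  # the edge spans the point's horizontal line
              if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                  inside = not inside
      return inside

  # The point of detection: the feet for a human, the center for a vehicle.
  include_area = [(0.2, 0.2), (0.8, 0.2), (0.8, 0.9), (0.2, 0.9)]
  print(point_in_polygon(0.5, 0.5, include_area))  # True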

For the best possible results:

  • At some point, the entire object needs to be visible in the scene.

  • The object needs to be in motion within the scene for at least 2 seconds.

  • For cameras with machine learning, humans need to move in a somewhat upright position. For cameras with deep learning, this is not a requirement.

  • The upper body of a human needs to be visible.

  • Objects need to stand out from the background.

  • Reduce motion blur.

Area scenarios

When you set up an Object in area scenario, the application detects objects that move inside a defined area. The defined area is called an include area.

With the scenario Time in area, you can set a time limit for how long an object is allowed to stay inside the include area before the application triggers an action. When an object enters the include area, the time counter starts. If the object leaves the include area before the set time limit is reached, the counter resets. It’s the object’s point of detection that must be inside the include area for the counter to keep counting. The time in area feature is suitable for areas where humans or vehicles are only supposed to stay for a short while, like tunnels or school yards after hours.

When you set up an Occupancy in area scenario, the application estimates how many objects are inside the include area at any given time. An object counter displays the estimated number of objects currently in the include area. When an object enters or leaves the area, the object counter adjusts. Occupancy in area is suitable for areas where you want an estimated count of one or several object types, such as parking lots.

Include area

The include area is the area where the application detects and counts selected object types. The application triggers actions for an object if its point of detection is inside the include area. The application ignores objects that are outside the include area.

Reshape and resize the area so that it only covers the part of the scene where you want to detect and count objects. If you use the occupancy in area or time in area functionality, make sure the area only covers parts of the scene that aren’t crowded with objects that frequently overlap each other. The default include area rectangle can be changed to a polygon with up to 10 corners.

Recommendation

If there’s a busy road or sidewalk close to the include area, draw the include area so that objects outside the include area don’t accidentally get detected. This means you should avoid drawing the include area too close to the busy road or sidewalk.

Exclude areas

An exclude area is an area inside the include area in which selected object types don’t get detected or counted. Use exclude areas if there are areas inside the include area that trigger a lot of unwanted actions. You can create up to 5 exclude areas.

Move, reshape, and resize the area so that it covers the desired part of the scene. The default rectangle can be changed to a polygon with up to 10 corners.

Recommendation

Place exclude areas inside the include area. Use exclude areas to cover areas where you don’t want to detect objects.

Line crossing scenarios

When you set up a Line crossing scenario, the application detects objects that cross a virtually defined line. With the Crossline counting scenario, the application detects and counts the objects that cross the virtual line.

The virtual line is a yellow line in the image. Objects of the selected type that cross the line in a certain direction get detected. The red arrows on the line show the current direction. Actions trigger when objects cross the line in the direction indicated by the arrows.

To trigger an action, the object’s point of detection must cross the line. Objects that only touch the line don’t trigger actions.

  • A man approaching the line doesn’t trigger an action, as long as his point of detection hasn’t crossed the line.

  • As soon as his point of detection crosses the line, he triggers an action.

For information about the point of detection, see Classification of objects.
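Whether a point of detection crosses the line, and in which direction, can be pictured as a pair of signed-area (cross product) tests between two frames. A conceptual sketch, not the application's implementation (for simplicity it tests against the infinite line through the two endpoints):

  def side(ax, ay, bx, by, px, py):
      """Signed area: > 0 if point P is left of the line A->B, < 0 if right."""
      return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

  def crosses_in_direction(line_a, line_b, prev_pt, curr_pt):
      """True if the point moved from the right side of the virtual line A->B
      to its left side between two frames."""
      s_prev = side(*line_a, *line_b, *prev_pt)
      s_curr = side(*line_a, *line_b, *curr_pt)
      return s_prev < 0 <= s_curr  # right side before, left side (or on the line) now

  # Example: a vertical line from (0, 0) to (0, 1); the point moves from x > 0 to x < 0.
  print(crosses_in_direction((0, 0), (0, 1), prev_pt=(0.1, 0.5), curr_pt=(-0.1, 0.5)))  # True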

Virtual line recommendations

Adjust the virtual line so that:

  • objects are unlikely to be waiting at the line.

  • objects are clearly visible in the image before they cross the line.

  • an object’s point of detection is likely to cross the line.

Integration

Set up alarms in AXIS Camera Station

This example explains how to set up a rule in AXIS Camera Station to alert the operator and record video that includes metadata overlays when AXIS Object Analytics detects an object.

Before you start

  • Add the camera to AXIS Camera Station. See the user manual for AXIS Camera Station.

Create a device event trigger:

  1. Go to Configuration > Recording and events > Action rules and click New.

  2. Click Add to add a trigger.

  3. Select Device event from the list of triggers and click Ok.

  4. In the Configure device event trigger section:

    • In Device, select the camera.

    • In Event, select one of the scenarios for AXIS Object Analytics.

    • In Trigger period, set an interval time between two successive triggers. Use this function to reduce the number of successive recordings. If an additional trigger occurs within this interval, the recording continues and the trigger period starts over from that point in time.

  5. In Filters, set active to Yes.

  6. Click Ok.

Create actions to raise alarms and record video:

  1. Click Next.

  2. Click Add to add an action.

  3. Select Raise alarm from the list of actions and click Ok.

  Note

    The alarm message is what the operator sees when an alarm is raised.

  4. In the Alarm message section, enter an alarm title and description.

  5. Click Ok.

  6. Click Add to add another action.

  7. Select Record from the list of actions and click Ok.

  8. In the list of cameras, select the camera to use for recording.

  Important

    To include metadata overlays in the recording, make sure you select a profile with the same resolution as the one selected for metadata overlays in the application.

  9. Select a profile and set the prebuffer and postbuffer.

  10. Click Ok.

Specify when the alarm is active:

  1. Click Next.

  2. If you only want the alarm to be active during certain hours, select Custom schedule.

  3. Select a schedule from the list.

  4. Click Next.

  5. Enter a name for the rule.

  6. Click Finish.

Note

To see the metadata overlays in the live view, make sure you select the streaming profile that matches the one you set in the application.

Integration of counting data

The crossline counting and occupancy in area scenarios produce metadata about counted objects. To visualize the data and analyze trends over time, you can set up an integration to a third-party application. With this method, it’s possible to present data from one or several cameras. To learn more about how to set up the integration, see the guidelines at Axis Developer Community.
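As an illustration of the kind of integration this enables, the sketch below polls counting data and appends it to a CSV file for later trend analysis. The URL, credentials, and response fields are hypothetical placeholders, not the real API; see the Axis Developer Community guidelines for the actual interface:

  import csv, time
  import requests  # pip install requests

  # Hypothetical endpoint and response shape, for illustration only.
  URL = "http://192.0.2.20/hypothetical/counting-data"

  with open("counts.csv", "a", newline="") as f:
      writer = csv.writer(f)
      while True:
          data = requests.get(URL, auth=("user", "password"), timeout=10).json()
          # Assumed fields: one accumulated count per object type.
          writer.writerow([time.time(), data.get("human", 0), data.get("vehicle", 0)])
          f.flush()
          time.sleep(60)  # poll once a minute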

Troubleshooting

Problems detecting objects

... when the image is unstable

Turn on Electronic image stabilization (EIS) in the Image tab of the product’s webpage.

... at image edges, where the image looks distorted

Turn on Barrel distortion correction (BDC) in the Image tab of the product’s webpage.

... immediately

Objects need to be fully visible in the scene before they can be detected by the application.

... in other situations

This can happen if objects blend into a background of a similar color, or if the light in the scene is poor. Try to improve the lighting.

Problems with false alarms

... due to small animals that appear large in the image

Calibrate the perspective. See Calibrate perspective.

Problems counting objects

... due to stationary objects that look like humans or vehicles when you use occupancy in area

Objects need to be fully visible in the scene. The application counts both moving and stationary objects in occupancy in area scenarios, which increases the risk of false detections. Add an exclude area to ignore stationary objects that look like humans or vehicles.

Problems with metadata overlays

... on a second client

Metadata overlays are only visible for one client at a time.

Problems with the video stream

... on Firefox browser for cameras with high resolutions

Try Google Chrome™ browser instead.

Filters

If you have set up the application to detect any motion, you may experience false alarms. To reduce them, you can use filters.

Short-lived objects – Use this to ignore objects that only appear in the image for a short period of time.

Small objects – Use this to ignore small objects.

Swaying objects – Use this to ignore objects that only move a short distance.
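Conceptually, the three filters are threshold checks on each moving object the application finds. The sketch below illustrates that logic in Python; the object fields and threshold names are assumptions for the example, not the application's internals:

  from dataclasses import dataclass

  @dataclass
  class TrackedObject:
      seconds_visible: float     # time since the object was first detected
      distance_moved_pct: float  # total movement, as a percentage of the image
      width_pct: float           # bounding-box width, as a percentage of the image
      height_pct: float          # bounding-box height, as a percentage of the image

  def passes_filters(obj, min_seconds=0.0, min_distance_pct=0.0,
                     min_width_pct=0.0, min_height_pct=0.0):
      """Return True if the object should be allowed to trigger an action."""
      if obj.seconds_visible < min_seconds:          # short-lived objects filter
          return False
      if obj.distance_moved_pct < min_distance_pct:  # swaying objects filter
          return False
      # Small objects filter: ignored only if smaller than BOTH dimensions.
      if obj.width_pct < min_width_pct and obj.height_pct < min_height_pct:
          return False
      return True

  obj = TrackedObject(seconds_visible=1.0, distance_moved_pct=5.0, width_pct=2.0, height_pct=3.0)
  print(passes_filters(obj, min_seconds=2.0))  # False: the object is too short-lived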

Filter recommendations

  • Filters are applied to all moving objects found by the application and should be set up with care to make sure that no important objects are ignored.

  • Set up one filter at a time and test it before you turn on another filter.

  • Change the filter settings carefully until you’ve reached the desired result.

The short-lived objects filter

Use the short-lived objects filter to avoid detecting objects that only appear for a short period of time, such as light beams from a passing car or quickly moving shadows.

When you turn on the short-lived objects filter and the application finds a moving object, the object doesn’t trigger an action until the set time has passed. If the action is to start a recording, configure the pre-trigger time so that the recording also includes the time the object moved in the scene before it triggered the action.

Set up the short-lived objects filter

  1. Click Scenarios and select an existing scenario or click + to create a new scenario.

  2. Click Triggering objects and make sure Any motion is selected.

  3. Go to Filters > Short-lived objects.

  4. Enter the number of seconds in the field. The number of seconds is the minimum time that must pass before the object triggers an action. Start with a small number.

  5. If the result is not satisfactory, increase the filter time in small steps.

The swaying objects filter

The swaying objects filter ignores objects that only move a short distance, for example swaying foliage, flags, and their shadows. If the swaying objects are large, for example large ponds or large trees, use exclude areas instead of the filter. The filter is applied to all detected swaying objects and, if the value is too large, important objects might not trigger actions.

When the swaying object filter is turned on and the application detects an object, the object does not trigger an action until it has moved a distance larger than the filter size.

Set up the swaying objects filter

The filter ignores any object moving a shorter distance than that from the center to the edge of the ellipse.

Note
  • The filter applies to all objects in the image, not just objects in the same position as the setup ellipse.
  • We recommend that you begin with a small filter size.
  1. Click Scenarios and select an existing scenario or click + to create a new scenario.

  2. Click Triggering objects and make sure Any motion is selected.

  3. Go to Filters > Swaying objects.

  4. Enter how far objects are allowed to move, as a percentage of the screen, before an action triggers.

The small objects filter

The small objects filter reduces false alarms by ignoring objects that are small, for example small animals.

Note
  • The filter applies to all objects in the image, not just objects in the same position as the setup rectangle.
  • The application ignores objects that are smaller than both the entered height and the entered width.

Set up the small objects filter

  1. Click Scenarios and select an existing scenario or click + to create a new scenario.

  2. Click Triggering objects and make sure Any motion is selected.

  3. Go to Filters > Small objects.

  4. Enter the width and height of the objects to ignore as a percentage of the image.

  Note

    If you have calibrated the perspective, enter the width and height of the objects to ignore in centimeters (inches) instead of as a percentage of the image.