(WO2018227117) PREDICTION OF FALSE ALARMS IN SENSOR-BASED SECURITY SYSTEMS
Note: Text based on automatic optical character recognition processes. Only the PDF version has legal value.

Prediction of False Alarms in Sensor-Based Security Systems

Background

This description relates to operation of sensor networks such as those used for security, intrusion and alarm systems installed on industrial or commercial or residential premises.

It is common for businesses to have various types of systems such as intrusion detection, fire detection and surveillance systems for detecting various alarm conditions at their premises and signaling the conditions to a monitoring station or authorized users. Other systems that are commonly found in businesses are access control systems that have card readers and access controllers to control access, e.g., open or unlock doors, etc. These systems use various types of sensors such as motion detectors, cameras, proximity sensors, thermal, optical, and vibration sensors and so forth.

Some of the sensors used in these systems are relatively simple and inexpensive, whereas others in comparison are relatively complex and expensive. Each of these sensor types can be prone to failures of various types. One particular failure type is when a sensor gives a false positive, that is, a false indication of a condition that could result in assertion of an alarm condition. Those types of sensor failures can be significant sources of false alarms that can cost alarm monitoring companies, building owners, security professionals and police departments significant amounts of money and wasted time that would otherwise be spent on real intrusion situations.

SUMMARY

According to an aspect, a computer program product tangibly stored on a computer readable hardware storage device includes instructions to cause a processor to: collect sensor information from plural sensor devices deployed in a system, with the collected sensor information including sensor data and sensor device metadata; continually analyze the collected sensor information to detect changes in the operational characteristics of a sensor device in the group of sensor devices; upon detection of a change in the operational characteristics of the sensor, access a database that stores maintenance organization contact information; generate, based on the detected changes and the access to the database, a request for maintenance on the sensor device; and send the request to the maintenance organization contact.

Aspects also include systems and methods.

The aspects can include one or more of the following advantages.

Techniques are provided that predict conditions that are one or more precursors to a false alarm condition caused by an imminent malfunction of security products, especially the sensors used in security products. The techniques determine when a sensor is likely to fail over a time period or will require maintenance over that time period. By attending to these in a closed-loop notification manner, the techniques may minimize production of spurious data that falsely indicate an alarm condition. For example, a smoke detector may indicate the presence of smoke in the building when there is simply an accumulation of dust on the device. Likewise, a contact switch on a warehouse door may indicate that the door has been opened when, in fact, the magnetic switch has simply stopped working correctly. Avoiding such "false alarm" situations, caused by these numerous and costly incidents of sensor malfunction, can result in better and more reliable system performance at lower cost to building owners, security agencies, and maintenance personnel. In addition, the purported cause of a potential false alarm condition can be used to identify similar conditions existing at the premises or over a group of related or unrelated premises, all of which can be repaired during a single maintenance visit, and thus avoid other related problems that may exist with the systems that were not initially detected.

The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention are apparent from the description and drawings, and from the claims.

DESCRIPTION OF DRAWINGS

FIG. 1 is a schematic diagram of an exemplary networked security system. FIGS. 2A, 2B are diagrams of a complex sensor device and simple sensor device, respectively.

FIG. 3 is a schematic diagram of an example security system at premises. FIG. 4 is a block diagram of a state based prediction system.

FIG. 5 is a flow diagram of a state representation engine.

FIG. 6 is a flow diagram of state prediction system processing.

FIG. 6A is a flow diagram of a training process for a Next state predictor engine that is part of the sensor based state prediction system.

FIG. 6B is a flow diagram of a Next state predictor engine model building process.

FIG. 7 is a flow diagram depicting operational processing of the state predictor.

FIG. 8 is a block diagram of a sensor failure prediction system.

FIGS. 8A-8C are flow diagrams of operational processing by the sensor failure prediction system of FIG. 8.

DETAILED DESCRIPTION

Described herein are surveillance/intrusion/fire/access systems that are wirelessly connected to a variety of sensors. In some instances, those systems may be wired to sensors. Examples of detectors/sensors 28 (sensor and detector are used interchangeably) include motion detectors, glass break detectors, noxious gas sensors, smoke/fire detectors, contact/proximity switches, video sensors such as cameras, audio sensors such as microphones and directional microphones, temperature sensors such as infrared sensors, vibration sensors, air movement/pressure sensors, and chemical/electro-chemical sensors, e.g., VOC (volatile organic compound) detectors. In some instances, those systems' sensors may include weight sensors, LIDAR (technology that measures distance by illuminating a target with a laser and analyzing the reflected light), GPS (global positioning system) receivers, optical sensors, biometric sensors, e.g., retina scan sensors, EKG/heartbeat sensors in wearable computing garments, network hotspots and other network devices, and others.

The surveillance/intrusion/fire/access systems employ wireless sensor networks and wireless devices, with remote, cloud-based server monitoring and report generation. As described in more detail below, the wireless sensor networks provide wireless links between sensors and servers, with the wireless links usually used for the lowest level connections (e.g., sensor node device to hub/gateway).

In the network, the edge (wirelessly-connected) tier of the network is comprised of sensor devices that provide specific sensor functions. These sensor devices have a processor and memory, and may be battery operated and include a wireless network card. The edge devices generally form a single wireless network in which each end-node communicates directly with its parent node in a hub-and-spoke-style architecture. The parent node may be, e.g., a network access point (not to be confused with an access control device or system) on a gateway or a sub-coordinator which, in turn, is connected to the access point or another sub-coordinator.

Referring now to FIG. 1, an exemplary (global) distributed network topology for a wireless sensor network 10 is shown. In FIG. 1 the wireless sensor network 10 is a distributed network that is logically divided into a set of tiers or hierarchical levels 12a-12c. In an upper tier or hierarchical level 12a of the network are disposed servers and/or virtual servers 14 running a "cloud computing" paradigm that are networked together using well-established networking technology such as Internet protocols, or which can be private networks that use none or part of the Internet. Applications that run on those servers 14 communicate using various protocols, such as, for Web Internet networks, XML/SOAP, RESTful web services, and other application layer technologies such as HTTP and ATOM. The distributed network 10 has direct links between devices (nodes) as shown and discussed below.

In one implementation hierarchical level 12a includes a central monitoring station 49 comprised of one or more of the server computers 14 and which includes or receives information from a sensor based state prediction system 50 as will be described below.

The distributed network 10 includes a second logically divided tier or hierarchical level 12b, referred to here as a middle tier, that involves gateways 16 located at central, convenient places inside individual buildings and structures. These gateways 16 communicate with servers 14 in the upper tier whether the servers are stand-alone dedicated servers and/or cloud based servers running cloud applications using web programming techniques. The middle tier gateways 16 are also shown with both local area network 17a (e.g., Ethernet or 802.11) and cellular network interfaces 17b.

The distributed network topology also includes a lower tier (edge layer) 12c set of devices that involve fully-functional sensor nodes 18 (e.g., sensor nodes that include wireless devices, e.g., transceivers or at least transmitters, which in FIG. 1 are marked with an "F"), as well as wireless sensor nodes or sensor end-nodes 20 (marked in FIG. 1 with "C"). In some embodiments wired sensors (not shown) can be included in aspects of the distributed network 10.

In a typical network, the edge (wirelessly-connected) tier of the network is largely comprised of devices with specific functions. These devices have a small-to-moderate amount of processing power and memory, and often are battery powered, thus requiring that they conserve energy by spending much of their time in sleep mode. A typical model is one where the edge devices generally form a single wireless network in which each end-node communicates directly with its parent node in a hub-and-spoke-style architecture. The parent node may be, e.g., an access point on a gateway or a sub-coordinator which is, in turn, connected to the access point or another sub-coordinator.

Each gateway is equipped with an access point (fully functional sensor node or "F" sensor node) that is physically attached to that access point and that provides a wireless connection point to other nodes in the wireless network. The links (illustrated by lines not numbered) shown in FIG. 1 represent direct (single-hop MAC layer) connections between devices. A formal networking layer (that functions in each of the three tiers shown in FIG. 1) uses a series of these direct links together with routing devices to send messages (fragmented or non-fragmented) from one device to another over the network.

Referring to FIG. 2A, one type of a sensor device 20 (an example of a complex sensor) is shown. Sensor device 20 includes a processor device 21a, e.g., a CPU and/or other type of controller device that executes under an operating system, generally with 8-bit or 16-bit logic rather than the 32- and 64-bit logic used by high-end computers and microprocessors. The device 20 has a relatively small flash/persistent store 21b and volatile memory 21c in comparison with the other computing devices on the network. Generally, the persistent store 21b is about a megabyte of storage or less and volatile memory 21c is about several kilobytes of RAM memory or less. The device 20 has a network interface card 21d that interfaces the device 20 to the network 10. Typically, a wireless interface card is used, but in some instances a wired interface could be used. Alternatively, a transceiver chip driven by a wireless network protocol stack (e.g., 802.15.4/6LoWPAN) can be used as the (wireless) network interface. These components are coupled together via a bus structure. The device 20 also includes a sensor element 22 and a sensor interface 22a that interfaces to the processor 21a. Sensor 22 can be any of the sensor types mentioned above.

Referring to FIG. 2B, another type of a sensor device 20' is a simple sensor element that is hardwired to a panel or another device that conveys sensor signals to a panel or system. One example of this type of sensor is a magnetic door/window sensor switch that simply has, for instance, a Reed switch and magnet. Normally, the Reed switch is in an open state and closes to form a circuit when the Reed switch comes within a certain proximity to the magnet.

In any event, complex sensors 20 and/or simple sensors 20' are deployed throughout a premises and at any time are subject to failure or erratic behavior.

Referring now to FIG. 3, an example application 30 of a security system, in particular an intrusion detection system 32 and access control system 34, installed at a premises 36 is shown. In this example, the premises 36 is a commercial premises, but the premises may alternatively be any type of premises or building, e.g., industrial, etc. The intrusion detection system 32 includes a panel, such as an intrusion detection panel 38, and sensors/detectors 20, 20' (FIGS. 1, 2A, 2B) dispersed throughout the premises 36. The intrusion detection system 32 is in communication with a central monitoring station 49 (also referred to as central monitoring center) via one or more data or communication networks 52, such as the Internet (the phone system or cellular communication system being examples of others). The intrusion detection panel 38 receives signals from the plural detectors/sensors 20, which send to the intrusion detection panel 38 information about the status of the monitored premises.

Sensor/detectors may be hardwired or communicate with the intrusion detection panel 38 wirelessly. Some or all of the sensor/detectors 20 communicate wirelessly with the intrusion detection panel 38 and with the gateways. In general, detectors sense glass breakage, motion, gas leaks, fire, and/or breach of an entry point, and send the sensed information to the intrusion detection panel 38. Based on the information received from the detectors 20, the intrusion detection panel 38 determines whether to trigger alarms, e.g., by triggering one or more sirens (not shown) at the premises 36 and/or sending alarm messages to the central monitoring station 49. A user may access the intrusion detection panel 38 to control the intrusion detection system, e.g., disarm, arm, enter predetermined settings, etc.

Also shown in FIG. 3 is a dispatch center 29 that in this example is part of the central monitoring station 49. The dispatch center 29 includes personnel stations (not shown) and server systems 14 running a program that populates a database (not shown) with historical data. The central monitoring station 49 also includes the sensor based state prediction system 50. An exemplary intrusion detection panel 38 includes a processor and memory, storage, a key pad and a network interface card (NIC) coupled via a bus (all not shown).

Referring now to FIG. 4, a prediction system 50 is shown. The prediction system 50 executes on one or more of the cloud-based server computers and accesses database(s) 51 that store sensor data and sensor state data in a state transition matrix, with the sensor data stored and accessible per individually identifiable sensor device 20 or 20'. In some implementations, dedicated server computers could be used as an alternative. Aspects of the prediction system are disclosed in US Published Application US-2017-0092108-A1, published March 30, 2017, the entire contents of which are incorporated herein by reference.

The sensor based state prediction system 50 disclosed herein is modified to consider state transitions with respect to individually identifiable sensor devices 20 or 20'. The sensor based state prediction system 50 includes a State Representation Engine 52. The State Representation Engine 52 executes on one or more of the servers described above, and interfaces on the servers receive sensor signals from a large plurality of sensors deployed in various premises throughout an area. These sensor signals have sensor values that represent a data instance for the particular sensors over time.

In the above-mentioned published application, sensor data and the other monitoring data are collected together to represent a data instance for a particular area of a particular premises at a single point in time, and the State Representation Engine in that application takes these granular values and converts the values into a semantic representation. In the present engine 52, sets of sensor values for each particular sensor at points in time are collected and assigned labels, e.g., "State-1, sensor XX, YY"; "State-n, sensor XX, YY"; etc. As the data is collected continuously, this Engine 52 works in an unsupervised manner, as discussed below, to determine various safe and drift states with respect to the sensors that may exist in the premises.

As different states are captured, this engine 52 also determines state transition metrics that are stored in the form of a state transition matrix. A simple state transition matrix has all states for all sensors in its rows and columns, capturing the operating behavior of the sensors for a given system. State transitions occur either over time or due to events, such as sensor failures. Hence, the state transition metrics are captured using both time and events. A state is a representation of a group of one or more sensors grouped according to a clustering algorithm.

The State transition matrix is a data structure that stores how many times the sensors changed from State_i, e.g., a normal state, to State_j, e.g., a failure state. The State transition matrix thus stores "knowledge" that the sensor based state prediction system 50 captures and which is used to determine predictions of the behavior of the sensor. The State transition matrix is accessed by the Next State Prediction Engine to make decisions and trigger actions by the sensor based state prediction system 50.

Unsupervised learning, e.g., clustering, is used to group sensor readings into states and conditions over a period of time that form a time trigger state, and over events to form an event trigger state. These states are used to populate the state transition matrix per sensor.
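The cluster-then-count procedure described above can be sketched as follows. This is an illustrative Python sketch, not code from the application; the threshold-based `to_state` bucketing is a hypothetical stand-in for whatever clustering algorithm is actually used, and all names and thresholds are assumptions.

```python
from collections import defaultdict

def to_state(reading):
    # Toy "clustering": bucket a numeric reading into a labeled state.
    # Real systems would derive these clusters in an unsupervised manner.
    if reading < 10:
        return "State-1"   # e.g., normal operating range
    elif reading < 20:
        return "State-2"   # e.g., drifting range
    return "State-3"       # e.g., failure range

def build_transition_matrix(readings):
    """Count transitions State_i -> State_j for one sensor's time series."""
    matrix = defaultdict(int)
    states = [to_state(r) for r in readings]
    for prev, curr in zip(states, states[1:]):
        matrix[(prev, curr)] += 1
    return dict(matrix)

matrix = build_transition_matrix([3, 5, 12, 14, 4, 25])
# States: S1 S1 S2 S2 S1 S3, so each adjacent pair is one counted transition.
```

Each cell of the resulting per-sensor matrix is the count of observed State_i to State_j transitions, which is the "knowledge" the prediction engine later consults.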

An exemplary simplified depiction, for explanatory purposes, of a State transition matrix is set out below:

[State transition matrix table not reproduced in the OCR text; see the PDF version.]

where columns in the State transition matrix are "state transitions" expressed as a listing by instance with pointers to the state time and event trigger tables. Entries x,y in cells of the State transition matrix are pointers that correspond to trigger tables that store the number of time periods and events, respectively, for each particular cell of the State transition matrix.

The State time trigger is depicted below. The State time trigger tracks the time periods t1 ... t8 for each state transition corresponding to the number x in each particular cell.

Instance   State transition 1   State transition 2   State transition 3   ***
1          t1, t5               t2, t3               t4, t7, t8           ***

The State event trigger tracks the events E1 ... E2 for each state transition corresponding to the number y in each particular cell (if any).

[State event trigger table not reproduced in the OCR text; see the PDF version.]
The State Representation Engine 52, in addition to populating the State transition matrix, also populates a State time trigger, which is a data structure that stores the time value spent in each state and a distribution of the time duration for each state. Similar to the State transition matrix, the State time trigger also encapsulates the behavior knowledge of the environment. State transitions can be triggered using these values.

The State Representation Engine 52 also populates a State event trigger. The State event trigger is a data structure that stores event information. An example of an event can be a sensor on a door sensing that the door was opened. There are many other types of events. This data structure captures how many times such captured events caused a state transition.

The State Representation Engine 52 populates the State transition matrix and the State time and State event triggers, which together capture metrics that provide a Knowledge Layer of the operational characteristics of the sensor.
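The three companion data structures of the Knowledge Layer can be sketched together as below. This is a hypothetical in-memory sketch; the class and field names are illustrative assumptions, not structures named by the application.

```python
from collections import defaultdict

class SensorKnowledge:
    """Hypothetical container for the three structures the engine populates:
    transition counts, time triggers, and event triggers."""
    def __init__(self):
        self.transitions = defaultdict(int)     # (state_i, state_j) -> count
        self.time_trigger = defaultdict(list)   # (state_i, state_j) -> time periods
        self.event_trigger = defaultdict(list)  # (state_i, state_j) -> events

    def record(self, prev, curr, t, event=None):
        # Record one observed transition, the time period in which it
        # occurred, and (optionally) the event that triggered it.
        self.transitions[(prev, curr)] += 1
        self.time_trigger[(prev, curr)].append(t)
        if event is not None:
            self.event_trigger[(prev, curr)].append(event)

k = SensorKnowledge()
k.record("normal", "drift", t=5)                        # time-triggered
k.record("normal", "drift", t=9, event="door_opened")   # event-triggered
```

Keeping counts, times, and events keyed by the same (state_i, state_j) pair mirrors how the x and y entries in the matrix point into the two trigger tables.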

The sensor based state prediction system 50 also includes a Next State Prediction Engine 54. The Next State Prediction Engine 54 predicts an immediate next state of the sensor device, based on the state transition matrix. The Next State Prediction Engine 54 predicts whether the sensor device will be in a normal, drift, or fail state over a time period in the future. The term "future" as used herein refers to a defined window of time in the future, which is defined so that a response team has sufficient time to address a condition, predicted by the Next State Prediction Engine 54, that may occur in the sensor, in order to restore the state of the sensor to a normal state. The Next State Prediction Engine operates as a Decision Layer in the sensor.

The sensor based state prediction system 50 also includes a State Representation graphical user interface generator 56. The State Representation graphical user interface generator 56 provides a graphical user interface that is used by the response team to continuously monitor the state of the sensor. The State Representation graphical user interface generator 56 receives data from the Next State Prediction Engine 54 to graphically display whether the sensor is either in the safe state or the drifting state. The State Representation graphical user interface generator 56 operates as an Action Layer, where an action is performed based on input from the Knowledge and Decision Layers.

The sensor based state prediction system 50 applies unsupervised learning models to analyze historical and current sensor device 20, 20' data records from one or more customer premises and generates a model that can predict future patterns, anomalies, conditions and events over a time frame that can be expected for sensor devices 20, 20' at a customer site. The sensor based state prediction system 50 produces a list of one or more predictions that may result in one or more alerts being sent to one or more user devices as well as other computing systems, as will be described. The prediction system 50 uses various types of unsupervised machine learning models including Linear/Non-Linear Models, Ensemble methods, etc.

The prediction system 50 is one example of a prediction system 51 that is used in a sensor failure prediction system 120 (FIG. 8) for detecting system behavior that is a potential precursor to a false alarm condition. Other examples of prediction systems could be used. Using the sensor failure prediction system 120, building owners can be notified in advance that their intrusion detection system (or fire, surveillance, access system or integrated systems comprising one or more of these systems) is acting in a manner that suggests certain preventative maintenance is required. In particular, the sensor failure prediction system 120 is configured to predict and provide a mechanism to minimize occurrences of false alarm conditions.

Referring now to FIG. 5, the processing 60 for the State Representation Engine 52 is shown. The State Representation Engine 52 collects 62 (e.g., from the databases 51 or directly from interfaces on the servers) received sensor signals from a large plurality of sensors deployed in various sensor devices throughout an area that is being monitored by the sensor based state prediction system 50. The sensor data collected from the sensor devices 20, 20' includes collected sensor monitoring data values and metadata regarding the sensor, e.g., type, model, part no., age, etc.

Sensor signals provided from sensor devices have sensor values that represent a data instance for a particular area of a particular premises at a single point in time. The State Representation Engine 52 converts 64 this sensor data into semantic representations of the state of the premises and, more particularly, the state of the sensor devices 20, 20' at instances in time. The State Representation Engine 52 uses 66 the converted semantic representation of the sensor data collected from the sensors to determine the empirical characteristics of the sensors. The State Representation Engine 52 assigns 67 an identifier to the state of the sensors. Any labelling can be used. Labels are typically assigned consecutively, such that a state for a magnetic door sensor is semantically described as follows:

State 1: interior door 16, magnetic door sensor: open, current time: <date> <time>

The semantic description includes the identifier "State 1" as well as semantic descriptions of the sensor (a magnetic door sensor on interior door number 16), the value "open", and the date <date> and time <time>.

The State Representation Engine 52 determines an abstraction of a collection of "events," i.e., the sensor signals, as a state. The state thus is a concise representation of the underlying behavior information of the sensors being monitored, described by time and date and the various sensor values at that point in time and on that date.
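The conversion from a raw reading to the semantic description shown above can be sketched as follows. This is an illustrative sketch; the dictionary keys and the formatting function are hypothetical, not part of the application.

```python
def to_semantic_state(state_id, reading):
    """Format one sensor reading as the kind of semantic state description
    illustrated above (identifier, sensor description, value, date/time)."""
    return (f"State {state_id}: {reading['location']}, "
            f"{reading['sensor_type']}: {reading['value']}, "
            f"current time: {reading['date']} {reading['time']}")

s = to_semantic_state(1, {
    "location": "interior door 16",
    "sensor_type": "magnetic door sensor",
    "value": "open",
    "date": "2018-06-05",
    "time": "14:30:00",
})
```

The point of the semantic form is that downstream components (the transition matrix and the prediction engine) can treat the labeled state as an opaque identifier while humans can still read it.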

The semantic representation of the state is stored 68 by the State Representation Engine 52 as state transition metrics in the State Representation matrix. Over time and days, as the sensors produce different sensor values, the State Representation Engine 52 determines different states and converts these states into semantic representations that are stored as state transition metrics in the matrix, e.g., in a continuous loop 70.

The state representation engine 52 converts raw values into state definitions and assigns (labels) each with a unique identifier for each state, as discussed above. As the sensor devices 20, 20' are operated over a period of time, the state transition matrix, the state time trigger matrix and the state event trigger matrix are filled.

The state representation engine 52 adds to the state transition matrix entries that correspond to transitions, e.g., in which a sensor moved from state 1 to state 2. The state representation engine 52 also adds to that entry in the state transition matrix an indicator that a "time trigger" caused the movement, and thus the state representation engine 52 adds an entry in the state time trigger matrix. The state representation engine 52 thus coordinates various activities inside the premises under monitoring and captures/determines various operating characteristics of sensors in the premises.

Referring now to FIG. 6, processing 80 for the Next State Prediction Engine 54 is shown. This processing 80 includes training processing 80a (FIG. 6A) and model building processing 80b (FIG. 6B), which are used in operation of the sensor based state prediction system 50.

Referring now to FIG. 6A, the training processing 80a that is part of the processing 80 for the Next State Prediction Engine 54 is shown. In FIG. 6A, the training processing 80a trains the Next State Prediction Engine 54. The Next State Prediction Engine 54 accesses 82 the state transition matrix and retrieves a set of states from the state transition matrix. From the retrieved set of states, the Next State Prediction Engine 54 generates 84 a list of most probable state transitions for a given time period; the time period can be measured in minutes, hours, days, weeks, months, etc. For example, consider the time period as a day. After a certain period of active usage, the sensor based state prediction system 50, through the state representation engine 52, has acquired knowledge states s1 to s5.

From the state transition matrix, the system uses the so called "Markov property" to generate state transitions. As known, the phrase "Markov property" is used in probability and statistics and refers to the "memoryless" property of a stochastic process.

From the state transition matrix, using the so-called "Markov property," the system generates state transition sequences as the most probable state sequences for a given day.

An exemplary fictitious sequence is shown below:

s1 s2 s4 s5

s2 s2 s4 s5
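One simple way to produce such a most-probable sequence is to normalize the per-row transition counts into a first-order Markov chain and greedily follow the most likely next state. The sketch below is illustrative only; the greedy argmax strategy and the fictitious counts are assumptions, not the application's specified method.

```python
def most_probable_sequence(counts, start, length):
    """Greedily follow the highest-count outgoing transition from the
    current state (Markov property: the next state depends only on the
    current state), producing a most-probable state sequence."""
    seq = [start]
    for _ in range(length - 1):
        nxt = {j: c for (i, j), c in counts.items() if i == seq[-1]}
        if not nxt:
            break  # absorbing state: no observed outgoing transitions
        seq.append(max(nxt, key=nxt.get))
    return seq

# Fictitious transition counts consistent with the example sequences.
counts = {("s1", "s2"): 8, ("s1", "s3"): 2,
          ("s2", "s4"): 6, ("s2", "s2"): 1,
          ("s4", "s5"): 9}
seq = most_probable_sequence(counts, "s1", 4)
```

With these counts, starting from s1 the greedy walk reproduces the first fictitious sequence, s1 s2 s4 s5.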

The Next State Prediction Engine 54 determines 86 if a current sequence is different than an observed sequence in the list above. When there is a difference, the Next State Prediction Engine 54 determines 88 whether something unusual has happened in sensor devices 20, 20' being monitored or whether the state sequence is a normal condition of the sensor devices 20, 20' being monitored.

With this information the Next State Prediction Engine 54 labels 90 these state transitions as "safe" or "drift/fail state" transitions. Either the Next State Prediction Engine 54 or manual intervention is used to label those state transitions, either at the state transition level or at the underlying sensor value level. A first column can be the sensor ID, a second column (or set of columns) a value (or set of values), and a last column the label. For example, "G" can be used to indicate green, e.g., a normal operating state, "Y" to indicate yellow, e.g., an abnormal or drift state, and "R" a fail state. These data and states can be stored in the database 51 and serve as training data for a machine learning model that is part of the Next State Prediction Engine 54.

Referring now to FIG. 6B, the model building processing 80b of the Next State Prediction Engine 54 is shown. The model building processing 80b uses the above training data to build a model that classifies a system's state into either a safe state or an unsafe state. Other states can be classified. For example, three states can be defined, as above, as "G Y R states": green (safe state), yellow (drifting state) and red (unsafe state). For ease of explanation, two states, "safe" (also referred to as normal) and "unsafe" (also referred to as drift), are used. The model building processing 80b accesses 102 the training data and applies 104 one or more machine learning algorithms to the training data to produce the model that will execute in the Next State Prediction Engine 54 during monitoring of systems. Machine learning algorithms such as Linear models and Non-Linear Models, Decision tree learning, etc., which are supplemented with Ensemble methods (where two or more models' votes are tabulated to form a prediction), and so forth can be used. From this training data and the algorithms, the model is constructed 106.
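The ensemble voting step mentioned above can be illustrated with a minimal pure-Python stand-in: several trivial classifiers each vote a label, and the tabulated majority forms the prediction. The threshold "models" are made-up toys; a real implementation would use trained library models.

```python
from collections import Counter

def make_threshold_model(threshold):
    # Each toy "model" labels a sensor value safe ("G") or unsafe ("R")
    # based on a single threshold. Thresholds here are illustrative.
    return lambda value: "R" if value > threshold else "G"

def ensemble_predict(models, value):
    """Tabulate the votes of two or more models; the majority label wins."""
    votes = Counter(m(value) for m in models)
    return votes.most_common(1)[0][0]

models = [make_threshold_model(t) for t in (10, 15, 20)]
label = ensemble_predict(models, 17)
# value 17 exceeds thresholds 10 and 15 but not 20 -> votes R, R, G
```

The same voting scheme applies unchanged when the voters are decision trees or linear models rather than thresholds, which is why ensembles often smooth out individual model errors.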

Referring now to FIG. 7, operational processing 100 of the sensor based state prediction system 50 is shown. The sensor based prediction system 50 receives 102 (by the State Representation Engine 52) sensor signals from a large plurality of sensors deployed in various sensor devices 20, 20' throughout an area being monitored. The State Representation Engine 52 converts 104 the sensor values from these sensor signals into a semantic representation that is identified, as discussed above. As the data is collected continuously, this Engine 52 works in an unsupervised manner to determine various states that may exist in sensor data being received from the individual sensor devices 20, 20' installed in a given premises or group of premises. As the different states are captured, the State Representation Engine 52 also determines 106 state transition metrics that are stored in the state transition matrix, using both time and events to populate the State time trigger and the State event trigger, as discussed above. The State transition matrix is accessed by the Next State Prediction Engine 54 to make decisions and trigger actions by the sensor based state prediction system 50.

The Next State Prediction Engine 54 receives the various states (either from the database and/or from the State Representation Engine 52) and forms 108 predictions of an immediate next state of the sensor devices 20, 20'/systems based on the state data stored in the state transition matrix. For such states the Next State Prediction Engine 54 predicts if the sensor devices 20, 20' will be in either a safe state or a drift state over a time period in the future, as discussed above.

The sensor based state prediction system 50 also sends 110 the predictions to the State Representation Engine 56 that generates a graphical user interface to provide a graphical user interface representation of predictions and states of various sensor devices 20, 20'/systems. The state is tagged 112 and stored 114 in the state transition matrix.

The sensor based state prediction system 50, using the State Representation Engine 52, which operates in a continuous loop to generate new states, and the Next State Prediction Engine 54, which produces predictions, together continually monitors the sensor devices 20, 20'/systems, looking for transition instances that result in drift in states that indicate potential problem conditions. As the sensors in the premises being monitored operate over a period of time, the state transition matrix, the state time trigger matrix and the state event trigger matrix are filled by the State Representation Engine 52, and the Next State Prediction Engine 54 processing 80 improves its predictions.

The sensor based state prediction system 50 thus determines the overall state of the sensor devices 20, 20' and the systems by classifying each into either the normal or "safe" state or the drift or "unsafe" state. Over a period of time, the sensor based state prediction system 50 collects information about the sensor devices 20, 20', and the sensor based state prediction system 50 uses this information to construct a mathematical model that includes a state representation, state transitions and state triggers. The state triggers can be time based triggers and event based triggers, as shown in the data structures above.

Described are techniques for detecting changes in operational characteristics of sensor devices by collecting sensor information from deployed sensor devices. The collected sensor information includes sensor data and sensor device metadata. The sensor device metadata includes data about the sensor, such as a sensor identification value that can be used to retrieve other sensor metadata such as the age of the sensor, as well as sensor type, manufacturer, model, etc. These techniques continually analyze the collected sensor information to detect changes in the operational characteristics of a sensor device in the group of sensor devices. Upon detection of changes in the operational characteristics of the sensor, the techniques access a database that stores maintenance organization contact information to send a request for maintenance on the sensor device to the maintenance organization contact. In some implementations the age, type, model, etc. of the sensor with the detected changes is used to find other similar devices that could be replaced during a maintenance appointment scheduled by the request.
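Detecting a change in a sensor's operational characteristics could be sketched as a baseline comparison. This is a minimal sketch under stated assumptions: the z-score test and its threshold are illustrative stand-ins for whatever analysis a real implementation of these techniques would use.

```python
import statistics

def detect_change(baseline, recent, z_threshold=3.0):
    """Flag a drift when recent readings deviate from the baseline mean
    by more than z_threshold baseline standard deviations."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1e-9  # guard a constant baseline
    recent_mean = statistics.mean(recent)
    return abs(recent_mean - mean) / stdev > z_threshold
```

A flagged sensor's metadata (identification value, type, age) would then drive the database lookup and maintenance request described above.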

Referring now to FIG. 8, the sensor failure prediction system 120 includes a prediction system 122 that uses data intelligence to predict anomalous behavior in sensor devices. While aspects of the processing in the sensor failure prediction system 120 can be expanded to entire systems, the focus herein will be primarily on sensor devices. The prediction system 122 tracks normal product behavior that is derived from past product behavior against current product behavior at a protected premises. One example of the prediction system 122 is the sensor based state prediction system 50 of FIGS. 4-7B, discussed above. Other prediction systems can be used. The past behavior sets a standard against which the security product, e.g., sensor device, is measured.

The sensor failure prediction system 120 also includes a prediction analysis system 124 that receives messages regarding one or more determined predictions of a sensor device failure from the prediction system 122. When a deviation from the standard norm is detected in the operation of the security product, e.g., a sensor device, and that deviation results in a prediction message, the sensor failure prediction system 120 analyzes the type of deviation that occurred, determines whether the behavior is of a type that is a precursor to a false alarm condition, and indicates the response that is necessary.

The sensor failure prediction system 120 also includes or accesses a data repository 126 that stores information regarding installed sensor devices in premises. In particular, the stored information includes information such as installation/manufacture date, etc. The prediction analysis system 124 accesses information regarding the sensor device or devices involved in the prediction, determines potential effects on assertion of false alarm conditions and, in some instances, predicts other related aspects of the security product that may need attention.

For example, if a magnetic door sensor is exhibiting aberrant behavior and is predicted to fail, the sensor failure prediction system 120 accesses the data repository to determine the age of that sensor. The sensor failure prediction system 120 also accesses the data repository to determine the existence of as well as location of all other similar sensors in the building that are the same type and age as the failing one.
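The repository lookup in the example above can be sketched as a simple filter. The record field names (`id`, `type`, `install_year`) are assumptions for illustration; the data repository 126 would carry whatever schema the installation actually uses.

```python
def similar_sensors(repository, failing):
    """Return other sensors of the same type and installation year
    as the failing sensor (a stand-in for 'same type and age')."""
    return [s for s in repository
            if s["id"] != failing["id"]
            and s["type"] == failing["type"]
            and s["install_year"] == failing["install_year"]]
```

The returned records would give the existence and, with a location field, the placement of candidate sensors to service in the same visit.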

The sensor failure prediction system 120 includes a notification system 128 that contacts an administrative function at the premises regarding the prediction. In addition, the system can be "closed-loop" by accessing a maintenance service database 130 for automatically sending all of this information to a maintenance organization and scheduling time for the maintenance to be conducted.

Using such a closed-loop approach, a building owner is notified that there was a problem with their security product, the maintenance organization has been notified, and a time for the maintenance to be conducted has been scheduled, so that the failed sensor device as well as like sensor devices will be automatically serviced and/or replaced. Therefore, when a security issue arises, the building owner will have a greater confidence that it is an actual security event rather than a false alarm.

Referring now to FIG. 9, processing 150 by the system 120 for detecting equipment failure, more specifically sensor equipment failure, and evaluating such a failure with respect to maintenance is shown. The prediction system 122 processes 122a the sensor data and determines 122b whether sensors are operating within normal operating ranges or whether sensors are drifting out of the operating ranges and are in a condition of imminent failure or actual failure. The prediction system 122, upon detection of one or more sensors drifting out of the operating ranges or being in a condition of imminent or actual failure, sends 122c details of the sensor to the prediction analysis system 124. The prediction analysis system 124 analyzes 124a the details provided from the prediction system 122. The prediction analysis system 124 analyzes 124b the type of deviation that occurred, determines 124c whether the behavior is of a type that is a precursor to a false alarm condition, and indicates 124d the response that is necessary.

The prediction analysis system 124 predicts 124e other related aspects of the product that may need attention. The prediction analysis system 124 accesses 124f the complete history of the sensor and determines the age of that sensor by accessing the database 126 to obtain records of all sensors in the premises that are the same type, same manufacturer/model, and similar age as the failing one. Age can be referenced either from date of manufacture or date of installation or date of purchase, etc. All that is necessary is that whatever definition of "age" is used is consistently applied by the analysis system 124. The prediction analysis system 124 determines 124g a message to send to the notification system 128. Similar age can be a range that is pre-established based on the device type. Thus, similar age would be a range that is a fixed percentage of the age of the device in the drift state. On one end of the range, all devices that are older than the device in the drift state would be considered candidates for replacement, and on the other end all devices within 80% or 90% of the age of the device in the drift state would be considered candidates for replacement. Other values could be used; the basis for exact values will vary from device type to device type, taking into consideration other factors such as cost of the device, cost of replacement and cost of a service call, etc.
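The "similar age" arithmetic above reduces to a simple threshold. The sketch below uses the 80% figure from the text as its default fraction; the record field name `age` is an assumption for illustration.

```python
def replacement_candidates(devices, drifting_age, fraction=0.8):
    """Candidates for replacement: devices whose age is at least
    `fraction` of the drifting device's age (older devices included)."""
    floor = fraction * drifting_age
    return [d for d in devices if d["age"] >= floor]
```

For a drifting device aged 10 years at the 80% setting, every device aged 8 years or more qualifies; tightening the fraction to 0.9 would exclude the 8- and 9-year-old devices.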

The notification system 128 accesses 128a the database 130 and retrieves 128b contact information for the maintenance organization associated with the system or the failing sensor, depending on how the service was configured beforehand. The notification system 128 notifies 128c the building owner of all of this information and, in addition, the notification system 128 automatically sends 128d all of this information to the identified maintenance organization and schedules a time for the maintenance to be conducted. A message is sent 128e from the notification system 128 to the maintenance organization to select a time/date for a service call; that selection is executed by the maintenance organization's systems, and those systems return a message with the selected time/date. This message is received 128e by the notification system 128 and processed 128f to obtain agreement 128g on the time/date, generally upon approval (either automatically or, more likely, upon approval of the building owner). An approval message is then sent 128g as a confirmation by the notification system 128 to the maintenance organization's systems.
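The closed-loop exchange above can be sketched as a request/select/confirm round trip. The message shapes and the `auto_approve` flag are assumptions for illustration; a real notification system 128 would use whatever protocol it shares with the maintenance organization's systems.

```python
def schedule_service(notifier_outbox, maintenance_system, auto_approve=True):
    """Send a service request, let the maintenance system pick a
    time/date, and confirm it if approval is automatic."""
    request = {"kind": "service_request"}
    notifier_outbox.append(request)                  # request sent (128e)
    selection = maintenance_system(request)          # returns {"time": ...}
    if auto_approve:                                 # approval path (128g)
        confirmation = {"kind": "confirmation", "time": selection["time"]}
        notifier_outbox.append(confirmation)
        return confirmation
    return None  # deferred to the building owner for manual approval
```

Here `maintenance_system` stands in for the remote systems that execute the time/date selection and return it to the notification system.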

Using such a closed loop system, a building owner need not be directly involved with the problem with their security product as the problem would be automatically notified to a maintenance organization that would fix the problem as well as related problems. Therefore, when a security alert is issued, the building owner will have a greater confidence that the alert is an actual security event rather than a false alarm.

The prediction analysis system 124 and the notification system 128 can be part of the prediction system 122 that conducts an analysis of sensor signals from deployed individual sensors to detect one or more drift states that predict potential sensor equipment failure.

Various combinations of the above described processes are used to implement the features described.

Servers interface to the sensor based state prediction system 50 via a cloud computing configuration, and parts of some networks can be run as sub-nets. In some embodiments, the sensors provide, in addition to sensor data, detailed additional information that can be used in evaluation of the sensor data. For example, a motion detector could be configured to analyze the heat signature of a warm body moving in a room to determine if the body is that of a human or a pet. Results of that analysis would be a message or data that conveys information about the body detected. Various sensors thus are used to sense sound, motion, vibration, pressure, heat, images, and so forth, in an appropriate combination to detect a true or verified alarm condition at the intrusion detection panel.

Recognition software can be used to discriminate between objects that are a human and objects that are an animal; further, facial recognition software can be built into video cameras and used to verify that the perimeter intrusion was the result of a recognized, authorized individual. Such video cameras would comprise a processor, memory and the recognition software to process inputs (captured images) from the camera and produce metadata that conveys information regarding recognition or lack of recognition of an individual captured by the video camera. The processing could also, alternatively or in addition, include information regarding characteristics of the individual in the area captured/monitored by the video camera. Thus, depending on the circumstances, the information would be either metadata received from enhanced motion detectors and video cameras that performed enhanced analysis on inputs to the sensor, giving characteristics of the perimeter intrusion, or metadata resulting from very complex processing that seeks to establish recognition of the object.

Sensor devices can integrate multiple sensors to generate more complex outputs so that the intrusion detection panel can utilize its processing capabilities to execute algorithms that analyze the environment by building virtual images or signatures of the environment to make an intelligent decision about the validity of a breach.

Memory stores program instructions and data used by the processor of the intrusion detection panel. The memory may be a suitable combination of random access memory and read-only memory, may host suitable program instructions (e.g., firmware or operating software) and configuration and operating data, and may be organized as a file system or otherwise. The stored program instructions may include one or more authentication processes for authenticating one or more users. The program instructions stored in the memory of the panel may further store software components allowing network communications and establishment of connections to the data network. The software components may, for example, include an internet protocol (IP) stack, as well as driver components for the various interfaces. Other software components suitable for establishing a connection and communicating across a network will be apparent to those of ordinary skill.

Program instructions stored in the memory, along with configuration data may control overall operation of the system. Servers include one or more processing devices (e.g., microprocessors), a network interface and a memory (all not illustrated). Servers may physically take the form of a rack mounted card and may be in communication with one or more operator terminals (not shown). An example monitoring server is a SURGARD™ SG-System III Virtual, or similar system.

The processor of each monitoring server acts as a controller for that monitoring server and is in communication with, and controls overall operation of, that server. The processor may include, or be in communication with, the memory that stores processor-executable instructions controlling the overall operation of the monitoring server. Suitable software enables each monitoring server to receive alarms and cause appropriate actions to occur. Software may include a suitable Internet protocol (IP) stack and applications/clients.

Each monitoring server of the central monitoring station may be associated with an IP address and port(s) by which it communicates with the control panels and/or the user devices to handle alarm events, etc. The monitoring server address may be static, and thus always identify a particular one of the monitoring servers to the intrusion detection panels. Alternatively, dynamic addresses could be used, and associated with static domain names, resolved through a domain name service.

The network interface card interfaces with the network to receive incoming signals and may, for example, take the form of an Ethernet network interface card (NIC). The servers may be computers, thin-clients, or the like, to which received data representative of an alarm event is passed for handling by human operators. The monitoring station may further include, or have access to, a subscriber database that includes a database under control of a database engine. The database may contain entries corresponding to the various subscriber devices/processes of panels, like the panel, that are serviced by the monitoring station.

All or part of the processes described herein and their various modifications (hereinafter referred to as "the processes") can be implemented, at least in part, via a computer program product, i.e., a computer program tangibly embodied in one or more tangible, physical hardware storage devices that are computer and/or machine-readable storage devices for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.

Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the processes. All or part of the processes can be implemented as special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer (including a server) include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.

Tangible, physical hardware storage devices that are suitable for embodying computer program instructions and data include all forms of non-volatile storage, including, by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks, as well as volatile computer memory, e.g., RAM such as static and dynamic RAM, and erasable memory, e.g., flash memory.

In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other actions may be provided, or actions may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems.

Likewise, actions depicted in the figures may be performed by different entities or consolidated.

Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Elements may be left out of the processes, computer programs, Web pages, etc. described herein without adversely affecting their operation. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.

Other implementations not specifically described herein are also within the scope of the following claims.