
1. WO2020115066 - METHOD FOR ONLINE TRAINING OF AN ARTIFICIAL INTELLIGENCE (AI) SENSOR SYSTEM

Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters

Claims

1. A method of operating an artificial intelligence sensor system (10) for supervised training purposes, the artificial intelligence sensor system (10) having one or more sensors (12, 14) and at least one classifier or artificial neural network (18) that is configured for receiving and processing signals (xA1, xA2, xB1) from the sensor or the sensors (12, 14), wherein the method comprises at least the following steps that are to be executed iteratively:

- providing (50, 52) signals (xA1, xA2, xB1) from the sensor or the sensors (12, 14) as input data to the at least one classifier or artificial neural network (18),

- operating (54, 56) the at least one classifier or artificial neural network (18) to derive an output representing a quality, which encompasses abstract objects such as classes as used for classification purposes as well as properties of objects, with a confidence level regarding the provided signals (xA1, xA2, xB1),

- if the derived confidence level of the quality is equal to or larger than a predetermined confidence level, permanently storing (58) at least a portion of the provided signals (xA1, xA2, xB1) and the derived quality as labeled online training data, using the derived quality as the label,

- if the derived confidence level of the quality is lower than the predetermined confidence level, temporarily storing (60) the at least one provided signal (xA1) and the derived quality,

- confirming (62, 66) the quality having a derived confidence level lower than the predetermined confidence level by use of at least one independent sensor signal (xA2, xB1), including using a signal (xB1) of another sensor (14) from which the at least one classifier or artificial neural network (18) derives an output representing a quality with a confidence level that is equal to or larger than the predetermined confidence level, and

- after completion of the step of confirming (62, 66), permanently storing (70) at least a portion of the temporarily stored signal (xA1) or signals and the derived quality as labeled online training data, using the derived quality as the label.
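For illustration only (not part of the claims), the iterative loop of claim 1 can be sketched in Python. The threshold value, the `classify` callback, and all names are hypothetical assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

# The "predetermined confidence level"; the value 0.9 is purely illustrative.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class TrainingStore:
    """Holds labeled online training data collected per claim 1."""
    permanent: list = field(default_factory=list)  # confirmed labeled samples (steps 58, 70)
    pending: list = field(default_factory=list)    # low-confidence samples awaiting confirmation (step 60)

def process_sample(store, signal_a, classify):
    """One iteration: classify a sensor signal and store it permanently or temporarily.

    `classify` stands in for the classifier/ANN (18); it returns a
    (quality, confidence) pair for a given signal.
    """
    quality, confidence = classify(signal_a)
    if confidence >= CONFIDENCE_THRESHOLD:
        store.permanent.append((signal_a, quality))  # step (58): store as labeled data
    else:
        store.pending.append((signal_a, quality))    # step (60): store temporarily

def confirm_pending(store, signal_b, classify):
    """Steps (62, 66, 70): confirm pending samples via an independent sensor signal."""
    quality_b, confidence_b = classify(signal_b)
    if confidence_b >= CONFIDENCE_THRESHOLD:
        for signal_a, _ in store.pending:
            # Use the confirmed quality as the label for the stored signal.
            store.permanent.append((signal_a, quality_b))
        store.pending.clear()
```

In this sketch a high-confidence classification of the independent signal `xB1` promotes all temporarily stored samples to the permanent training set, mirroring the confirmation step of claim 1.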

2. The method as claimed in claim 1, further comprising a preceding step of providing the at least one classifier or artificial neural network (18) in an offline mode with initial permanently stored training results.

3. The method as claimed in claim 1 or 2, further comprising a step of executing an online supervised training phase at least once in a predetermined time period by using at least the permanently stored labeled online training data.
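A non-normative sketch of the periodic training trigger of claim 3; the period length, the `train_fn` callback, and all names are illustrative assumptions:

```python
import time

TRAINING_PERIOD_S = 3600.0  # the "predetermined time period"; one hour is illustrative

def maybe_train(last_trained, training_data, train_fn, now=None):
    """Claim 3: execute an online supervised training phase at least once per
    predetermined time period, using the permanently stored labeled data.

    `train_fn` stands in for the actual training routine of the classifier/ANN (18).
    Returns the timestamp of the most recent training run.
    """
    now = time.monotonic() if now is None else now
    if now - last_trained >= TRAINING_PERIOD_S:
        train_fn(training_data)
        return now
    return last_trained
```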

4. The method as claimed in claim 3, wherein the at least one artificial neural network (18) includes a plurality of layers (24) between an input layer (20) and an output layer (22), and the step of executing an online supervised training phase includes training only of preselected layers out of the plurality of layers (20, 22, 24).
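Claim 4's restriction of training to preselected layers can be sketched minimally, without any specific framework; the `Layer` class and layer names are hypothetical:

```python
class Layer:
    """Hypothetical minimal stand-in for one layer of the network (18)."""
    def __init__(self, name, trainable=False):
        self.name = name
        self.trainable = trainable

def select_trainable(layers, names_to_train):
    """Freeze all layers except the preselected ones (claim 4).

    Returns the names of the layers left trainable, in network order.
    """
    for layer in layers:
        layer.trainable = layer.name in names_to_train
    return [layer.name for layer in layers if layer.trainable]
```

In frameworks such as PyTorch or Keras the same idea is usually expressed by disabling gradient computation or setting a layer's `trainable` flag before the online training phase.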

5. The method as claimed in any one of the preceding claims, further comprising a step of performance control by validating the current online training status, using a validation sensor data set with assigned correct labels that resides within the classifier or artificial neural network (18).

6. The method as claimed in claim 5, wherein the step of performance control comprises steps of

- operating the at least one classifier or artificial neural network (18) to derive outputs representing qualities with confidence levels for the provided validation sensor data set with assigned correct labels,

- comparing the derived outputs with the assigned correct labels,

- calculating a current performance figure that represents the result of the step of comparing,

- comparing the current performance figure with a predetermined performance figure,

- accepting adaptations made in the at least one classifier or artificial neural network (18) since executing the latest online supervised training phase if the current performance figure is equal to or exceeds the predetermined performance figure, and

- rejecting adaptations made in the at least one classifier or artificial neural network since executing the latest online supervised training phase if the current performance figure is lower than the predetermined performance figure.
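The accept/reject logic of claims 5 and 6 can be sketched as follows; the `DemoModel` interface, the accuracy-style performance figure, and all names are illustrative assumptions:

```python
def validate(classify, validation_set):
    """Claim 6: compare derived outputs with the assigned correct labels and
    return the fraction of matches as the current performance figure."""
    correct = sum(1 for signal, label in validation_set if classify(signal) == label)
    return correct / len(validation_set)

def performance_control(model, validation_set, predetermined_figure, snapshot):
    """Accept or reject adaptations made since the latest online training phase.

    `model` is assumed to expose `classify(signal) -> quality` and
    `load(weights)`; `snapshot` holds the weights saved before adaptation.
    """
    figure = validate(model.classify, validation_set)
    if figure >= predetermined_figure:
        return "accepted", figure       # keep the adapted weights
    model.load(snapshot)                # roll back: reject the adaptations
    return "rejected", figure

class DemoModel:
    """Hypothetical minimal classifier wrapper with weight snapshot/rollback."""
    def __init__(self, weights):
        self.weights = dict(weights)
    def classify(self, signal):
        return self.weights.get(signal, "unknown")
    def load(self, weights):
        self.weights = dict(weights)
```

Rolling back to the pre-adaptation snapshot when the performance figure falls below the predetermined figure implements the rejection branch of claim 6.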

7. An artificial intelligence sensor system (10), including

- at least one classifier or artificial neural network (18) having an input side (28, 36) and an output side (30, 38),

- at least one sensor system (12, 14), operatively connected to the input side (28, 36) of the classifier or artificial neural network (18), and each sensor system (12, 14) comprising at least one sensor,

wherein the classifier or artificial neural network (18) is configured to provide, at the output side (30, 38), an output representing a quality, which encompasses abstract objects such as classes as used for classification purposes as well as properties of objects, with a confidence level with regard to at least one object (48) to be monitored or surveilled by applying at least one trained task to signals (xA1, xA2, xB1) that have been received from the at least one sensor system (12, 14), and

wherein the at least one task is trained by executing a method as claimed in any one of claims 1 to 6.

8. The artificial intelligence sensor system (10) as claimed in claim 7, wherein the at least one sensor system (12, 14) comprises at least one out of an optical camera (12), a RADAR sensor system (14), a LIDAR device and an acoustics based sensor device.

9. The artificial intelligence sensor system (10) as claimed in claim 7 or 8, wherein the at least one artificial neural network (18) comprises at least one deep neural network (26, 34).

10. Use of the artificial intelligence sensor system (10) as claimed in any one of claims 7 to 9 in an automotive vehicle exterior sensing system.

11. Use of the artificial intelligence sensor system (10) as claimed in any one of claims 7 to 9, comprising at least one out of an optical camera (12) and a RADAR sensor system (14), as an automotive vehicle interior sensing system.