
1. WO2020115066 - METHOD FOR ONLINE TRAINING OF AN ARTIFICIAL INTELLIGENCE (AI) SENSOR SYSTEM


Method for Online Training of an Artificial Intelligence (AI) Sensor System

Technical field

[0001] The invention relates to a method of operating an artificial intelligence sensor system for supervised training purposes and an artificial intelligence sensor system configured for executing such method.

Background of the Invention

[0002] Systems based on sensor input experience fast-growing demands in various fields. For instance, in the automotive field they constitute the backbone of almost all Advanced Driver-Assistance Systems (ADAS) as these monitor an exterior environment or the interior of a vehicle and its occupants for providing improved safety by facilitating an optimized reaction of a driver of a vehicle with appropriate warnings or even by automatically taking over control of the vehicle, for instance in collision avoidance systems.

[0003] In this function, such systems are requested to perform tasks of increasing complexity. For example, they should be capable to anticipate potential risks that might occur in complex traffic scenarios within the next few seconds. In conventional ADAS, usually an electronic processing unit such as a central processing unit (CPU) is employed for executing a program code of a software module that has been manually designed for controlling an automatic execution of a monitoring method.

[0004] By way of example, patent application publication US 2014/0139670 A1 describes a system and method directed to augmenting advanced driver assistance systems (ADAS) features of a vehicle with image processing support in an on-board vehicle platform. Images may be received from one or more image sensors associated with an ADAS of a vehicle. The received images may be processed. An action is determined based upon, at least in part, the processed images. A message is transmitted to an ADAS controller responsive to the determination. To that end, the vehicle may include one or more processor units, networking interfaces, and other computing devices that may enable it to capture image data, process the image data, and augment ADAS features of the vehicle with image processing support in the on-board vehicle platform. A computing system may include single-feature fixed-function devices such as an ADAS image system on chip (SoC).

[0005] The complexity of the tasks to be performed by such ADAS tends to grow more and more, as it does in other technical fields such as, for instance, medical diagnostic appliances, smartphone technology and drone technology.

[0006] In such complex sensor-based systems it has been proposed to exploit the capabilities of artificial intelligence (AI) systems and artificial neural networks, respectively. In contrast to conventional processing units, artificial neural networks provide the possibility of learning.

[0007] Artificial neural networks are known to comprise a plurality of interconnected artificial neurons and to have an input side and an output side. As is well known in the field of artificial neural networks, each artificial neuron of the plurality of interconnected artificial neurons (also called nodes) can transmit a signal to another artificial neuron connected to it, and the received signal can further be processed and transmitted to the next artificial neuron. The output of each artificial neuron may be calculated using a non-linear function of the sum of its inputs. In a learning process of an artificial neural network, the weights of the non-linear function are usually adjusted. A complex task may be learned by determining a set of weights for the artificial neurons such that the output signal of the artificial neural network is close to a desired output signal, which is performed when the artificial neural network is trained.
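The computation described above, a non-linear function applied to the weighted sum of a neuron's inputs, can be sketched as follows (a minimal illustration with a sigmoid activation; the function name and values are chosen for the example and are not taken from the application):

```python
import math

def neuron_output(inputs, weights, bias):
    """Output of one artificial neuron: a non-linear function
    (here a sigmoid) applied to the weighted sum of its inputs."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation

# Example: one neuron with two inputs
y = neuron_output([0.5, -1.0], [0.8, 0.2], bias=0.1)
assert 0.0 < y < 1.0  # a sigmoid output always lies in (0, 1)
```

Training then means adjusting `weights` (and `bias`) for every neuron so that the network's output approaches the desired output.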

[0008] Multiple methods for training an artificial neural network are known in the art. For instance, in supervised learning a function is learned that maps an input to an output based on exemplary input-output pairs. An artificial neural network that has been submitted to a learning scheme is often called a “trained” artificial neural network.

[0009] Reliability and performance of AI systems including an artificial neural network improve with the quantity and quality of training data. Typical AI systems using neural networks require a vast amount of data. Therefore, the acquisition of training data constitutes a major challenge in the creation of such systems.

[0010] As a solution, US 2018/0253645 A1 proposes a method of triage of training data for acceleration of large-scale machine learning. Training input from a set of training data is provided to an artificial neural network. The artificial neural network comprises a plurality of output neurons. Each output neuron corresponds to a class. From the artificial neural network, output values are determined at each of the plurality of output neurons. From the output values, a classification of the training input by the artificial neural network is determined. A confidence value of the classification is determined. Based on the confidence value, a probability of inclusion of the training input in subsequent training is determined. A subset of the set of training data is determined based on the probability. The artificial neural network is trained based on the subset.

[0011] Further, US 9,760,827 B1 describes systems and methods for applying neural networks in resource-constrained environments. A system may include a sensor located in a resource-constrained environment configured to generate sensor data of the resource-constrained environment. The system may also include a first computing device not located in the resource-constrained environment configured to produce a neural network structure based on the sensor data. The system may further include a second computing device located in the resource-constrained environment configured to provide the sensor data as input to the neural network structure. The second computing device may be further configured to determine a state of the resource-constrained environment based on the input of the sensor data to the neural network structure.

Object of the invention

[0012] It is therefore an object of the invention to provide a method for training of a single- or multiple-sensor-based artificial intelligence (AI) system including at least one artificial neural network, wherein the training method is able to ensure a specified reliability and a specified performance of the AI system and is efficient in terms of computational and digital data memory hardware effort and cost.

General Description of the Invention

[0013] For the purpose of training a single- or multiple-sensor-based AI system including at least one artificial neural network it is virtually impossible to produce a data set that covers everything the system will have to process in the field.

[0014] Within the scope of the invention, it is therefore proposed to (re-)train AI systems in an online manner during their lifetime so as to improve the AI system performance and reliability over time.

[0015] The invention addresses and overcomes at least the following obstacles of online training:

Supervised training, for which labeled training data are needed, is considered the most efficient training method at present. Hence, the question arises of how to automatically label data received during the AI system lifetime.

The outcome of an online training method is in principle not predictable. Therefore, there has to be a control mechanism for ensuring a specified reliability and a specified performance of the AI system.

(Online) Training of an AI system including an artificial neural network requires high computational effort and expense, i.e. it requires expensive devices exhibiting high computational capabilities.

[0016] In one aspect of the present invention, the object is achieved by a method of operating an artificial intelligence sensor system for supervised training purposes. The artificial intelligence sensor system includes one or more sensors and at least one classifier or artificial neural network that is configured for receiving and processing signals from the sensor or the sensors. It should be noted that the present invention is not limited to AI or machine learning using artificial neural networks. The skilled person will recognize that the invention also relates to other classifier techniques using an algorithm that gives a reliable classification/decision, e.g. with a high confidence level.

[0017] The method comprises at least the following steps that are to be executed iteratively:

providing signals from the sensor or the sensors as input data to the at least one classifier or artificial neural network,

operating the at least one classifier or artificial neural network to derive, e.g. based on labeled training data resident within the at least one artificial neural network, an output representing a quality with a confidence level regarding the provided signals,

if the derived confidence level of the quality is equal to or larger than a predetermined confidence level, permanently storing at least a portion of the provided signal and the derived quality as labeled online training data, using the derived quality as the label,

if the derived confidence level of the quality is lower than the predetermined confidence level, temporarily storing the at least one provided signal and the derived quality,

confirming the quality having a derived confidence level lower than the predetermined confidence level by use of at least one independent sensor signal, including using a signal of another sensor from which the at least one classifier or artificial neural network derives an output representing a quality with a confidence level that is equal to or larger than the predetermined confidence level, and

after completion of the step of confirming, permanently storing at least a portion of the temporarily stored signal or signals and the derived quality as labeled online training data, using the derived quality as the label.
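The iterative steps above can be sketched in simplified, runnable form. This is a hypothetical illustration only: the classifier interface, the threshold value and the storage lists are placeholder stand-ins for the claimed classifier, confidence level and memory, not the claimed implementation itself.

```python
# Sketch of one iteration of the claimed method, under assumed interfaces.
CONF_THRESHOLD = 0.9   # stand-in for the predetermined confidence level
permanent_store = []   # labeled online training data (permanent storage)
pending = []           # low-confidence samples held temporarily

def process_sample(signal, classify, independent_signal=None):
    """Classify a sensor signal, then either store it permanently,
    hold it temporarily, or confirm it via an independent sensor signal."""
    label, confidence = classify(signal)
    if confidence >= CONF_THRESHOLD:
        permanent_store.append((signal, label))   # high confidence: keep
        return label
    pending.append((signal, label))               # low confidence: hold
    if independent_signal is not None:
        ind_label, ind_conf = classify(independent_signal)
        # Confirm only if the independent signal yields the same quality
        # at a confidence equal to or above the predetermined level.
        if ind_conf >= CONF_THRESHOLD and ind_label == label:
            pending.remove((signal, label))
            permanent_store.append((signal, label))
            return label
    return None

# Toy classifier: the camera is unsure, the RADAR is confident.
def toy_classify(signal):
    return {"camera": ("adult", 0.5), "radar": ("adult", 0.95)}[signal]

process_sample("radar", toy_classify)                               # stored directly
process_sample("camera", toy_classify, independent_signal="radar")  # confirmed, then stored
```

After both calls, the confirmed camera sample and the high-confidence RADAR sample both reside in the permanent store with the derived quality as label.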

[0018] The phrase “being configured to”, as used in this application, shall in particular be understood as being specifically programmed, laid out, furnished or arranged. The term “quality”, as used in this application, shall particularly encompass, without being limited to, abstract objects such as classes as used for classification purposes, such as “pedestrian”, “vehicle”, “cyclist”, and so forth, as well as properties of objects, such as color and/or size.

[0019] By using an independent sensor signal for confirming a quality that has been derived by the at least one classifier or artificial neural network with a confidence level lower than the predetermined confidence level, and by making use of independent and complementary information obtained from another sensor, confirmed and labeled training data can readily be provided in a sufficient number for the purpose of supervised training sessions. It should be noted that after the completion of the step of confirming, at least a portion of the provided signal and the derived quality derived from the independent sensor signal may also be stored as labeled online training data, using the derived quality as the label, as the confidence level of this quality is also equal to or larger than the predetermined confidence level.

[0020] The invention is, without being limited to, in particular beneficially employable in automotive applications, smartphone technology and drone technology, but may as well be used in any other technical field in which complex sensor-based systems including a suitable classifier or an artificial neural network are used. The term “automotive”, as used in this patent application, shall particularly be understood as being suitable for use in vehicles including passenger cars, trucks, semi-trailer trucks and buses.

[0021] The sensor that provides signals from which the at least one classifier or artificial neural network derives the quality with a confidence level lower than the predetermined confidence level and the sensor that provides the at least one independent sensor signal from which the at least one classifier or artificial neural network derives an output representing a quality with a confidence level that is equal to or larger than the predetermined confidence level may be sensors of the same type, i.e. sensors that are based on the same working principle. In other embodiments, the two sensors may be based on different working principles.

[0022] In preferred embodiments, the method further comprises a preceding step of providing the at least one classifier or artificial neural network in an offline mode with initial training results. In this way, an online training of the at least one classifier or artificial neural network by using the permanently stored labeled online training data can start from a higher level, and a larger training effect can be achieved in a shorter time period.

[0023] In preferred embodiments, the method further comprises a step of executing an online supervised training phase at least once in a predetermined time period by using at least the permanently stored labeled online training data. In this way, the at least one classifier or artificial neural network can be trained in a quasi-continuous manner, and the option of virtually continuously improving reliability and performance of the artificial intelligence sensor system can be provided.

[0024] Preferably, in cases in which the at least one artificial neural network includes a plurality of layers between an input layer and an output layer, the step of executing an online supervised training phase includes training only preselected layers out of the plurality of layers. In this way, computational costs and hardware requirements and costs for digital data memory units can be reduced. Further, a set of data required for the online supervised training, given by the permanently stored labeled online training data, can be stored in a compressed manner and hence requires less memory space. Examples of artificial neural networks including a plurality of layers between an input layer and an output layer are deep neural networks (DNNs), in particular recurrent neural networks (RNNs) and convolutional deep neural networks (CNNs).
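Training only preselected layers can be illustrated with a toy two-layer network in which the first layer is frozen and only the head is updated by gradient descent. The architecture, dimensions and update rule below are assumptions made for the sketch, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: layer 1 plays the role of the fixed layers 1..k,
# layer 2 the role of the trainable layers k+1..L.
W1 = rng.normal(size=(8, 4))   # frozen feature extractor, never updated
W2 = rng.normal(size=(4, 1))   # trainable head

def forward(x):
    h = np.tanh(x @ W1)        # fixed intermediate representation
    return h @ W2, h

def train_step(x, y, lr=0.1):
    """Gradient-descent step on the mean-squared error w.r.t. W2 only."""
    global W2
    y_hat, h = forward(x)
    grad = h.T @ (y_hat - y) / len(x)   # gradient for W2; W1 has none
    W2 = W2 - lr * grad

x = rng.normal(size=(32, 8))
y = rng.normal(size=(32, 1))
W1_snapshot = W1.copy()
before = float(np.mean((forward(x)[0] - y) ** 2))
for _ in range(50):
    train_step(x, y)
after = float(np.mean((forward(x)[0] - y) ** 2))
assert after < before                       # loss decreases
assert np.array_equal(W1, W1_snapshot)      # frozen layers untouched
```

Because only `W2` is adapted, the optimization handles far fewer parameters, and stored training samples need only the low-dimensional representation `h` rather than the raw input.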

[0025] In preferred embodiments, the method further comprises a step of performance control by validating the current online training status, using a validation sensor data set with assigned correct labels that resides within the classifier or artificial neural network. In this way, reliability and performance of the at least one artificial neural network that is being modified during online supervised training phases can be ensured.

[0026] Preferably, the step of performance control comprises steps of

operating the at least one classifier or artificial neural network to derive, e.g. based on labeled training data that is resident within the at least one artificial neural network at present, outputs with confidence levels representing qualities for the provided validation sensor data set with assigned correct labels,

comparing the derived outputs with the assigned correct labels,

calculating a current performance figure that represents the result of the step of comparing,

comparing the current performance figure with a predetermined performance figure,

accepting adaptations made in the at least one classifier or artificial neural network since executing the latest online supervised training phase if the current performance figure is equal to or exceeds the predetermined performance figure, and

rejecting adaptations made in the at least one classifier or artificial neural network since executing the latest online supervised training phase if the current performance figure is lower than the predetermined performance figure.

[0027] In this way it can be ensured that only such adaptations within the at least one classifier or artificial neural network are implemented that improve the performance of the artificial intelligence sensor system, and adaptations that may diminish the performance of the artificial intelligence sensor system are rejected.

[0028] In another aspect of the invention an artificial intelligence sensor system is provided. The artificial intelligence sensor system includes at least one classifier or artificial neural network with an input side and an output side, and at least one sensor system that is operatively connected to the input side of the classifier or artificial neural network. Each sensor system comprises at least one sensor.

[0029] The classifier or artificial neural network is configured to provide, at the output side, an output representing a quality with a confidence level with regard to at least one object to be monitored or surveilled by applying at least one trained task to signals that have been received from the at least one sensor system.

[0030] The at least one task is trained by executing a method as disclosed herein, depending on a total number of sensors of the artificial intelligence sensor system.

[0031] The benefits described in context with the disclosed method of operating an artificial intelligence sensor system for supervised training purposes apply to the artificial intelligence sensor system to the full extent.

[0032] Preferably, the at least one sensor system comprises at least one out of an optical camera, a RADAR sensor system, a LIDAR (light detection and ranging) device and an acoustics-based sensor, like an ultrasonic sensor. In this way, sensor signals can be provided that allow detection of qualities representing characteristic features of an object to be monitored or surveilled in a variety of ways, wherein appropriate sensors can be chosen depending on the specific application.

[0033] Preferably, the at least one artificial neural network comprises at least one deep neural network. As explained before, this can provide perspectives of substantial savings of hardware costs such as for digital data memory units, and of computational effort.

[0034] In another aspect of the invention, it is proposed to use the disclosed artificial intelligence sensor system, including at least one out of an optical camera, a RADAR sensor system, a LIDAR device and an acoustics-based sensor, like e.g. an ultrasonic sensor, as an automotive vehicle exterior sensing system. In this way, an improved monitoring and surveilling of other traffic participants can be accomplished.

[0035] The RADAR sensor system may be configured to be operated at a RADAR carrier frequency that lies in a frequency range between 20 GHz and 140 GHz.

[0036] In yet another aspect of the invention, it is proposed to use the disclosed artificial intelligence sensor system, including at least one out of an optical camera and a RADAR sensor system, as an automotive interior sensing system. Such automotive interior sensing systems can beneficially be employed for, without being limited to, a detection of left-behind pets and/or children, vital sign monitoring, vehicle seat occupancy detection for seat belt reminder (SBR) systems, and anti-theft alarm.

[0037] Also here, the RADAR sensor system may be configured to be operated at a RADAR carrier frequency that lies in a frequency range between 20 GHz and 140 GHz.

[0038] These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.

[0039] It shall be pointed out that the features and measures detailed individually in the preceding description can be combined with one another in any technically meaningful manner and show further embodiments of the invention. The description characterizes and specifies the invention in particular in connection with the figures.

Brief Description of the Drawings

[0040] Further details and advantages of the present invention will be apparent from the following detailed description of non-limiting embodiments with reference to the attached drawings, wherein:

Fig. 1 schematically illustrates an artificial intelligence sensor system in accordance with the invention installed in a vehicle in a detail side view,

Fig. 2 illustrates a schematic diagram of the artificial intelligence sensor system pursuant to Fig. 1,

Fig. 3 is a flow chart of a method in accordance with the invention of operating the sensor-based monitoring system pursuant to Fig. 1 for supervised online training purposes, and

Fig. 4 schematically illustrates a general structure of artificial neural modules of the artificial neural network of the sensor-based monitoring system pursuant to Fig. 1.

Description of Preferred Embodiments

[0041] Fig. 1 schematically illustrates an artificial intelligence sensor system 10 in accordance with the invention installed in a vehicle 44, which is designed as a sedan passenger car, in a detail side view. The artificial intelligence sensor system 10 is configured to be used as an automotive vehicle interior sensing system for a detection of, for instance but not limited to, left-behind pets and/or children, vital sign monitoring, vehicle seat occupancy detection for seat belt reminder (SBR) systems, and/or anti-theft alarm. To that end, the artificial intelligence sensor system includes two sensor systems 12, 14.

[0042] One of the two sensor systems 12, 14 is formed by an optical camera 12 that may be fixedly or movably connected to a chassis of the vehicle 44, or it may be integrated in the vehicle dashboard. The other one of the two sensor systems 12, 14 is designed as a RADAR sensor system 14 that is configured for monitoring vital signs of vehicle occupants 48, which may encompass at least one out of heartbeat and breathing. The RADAR sensor system 14 comprises a plurality of RADAR sensor devices formed as RADAR transceivers 16 that are attached in a front region of the car roof interior. In this specific embodiment, the RADAR sensor system 14 is formed as a phase-modulated continuous wave (PMCW) RADAR system configured to be operated at a RADAR carrier frequency that lies in a frequency regime between 20 GHz and 140 GHz, for example at a RADAR carrier frequency of 79 GHz.

[0043] Although in this specific embodiment the artificial intelligence sensor system 10 is configured for use as an automotive vehicle interior sensing system, the artificial intelligence sensor system may also be used in automotive vehicle exterior sensing systems. In this case, the artificial intelligence sensor system may include at least one sensor system that comprises at least one out of an optical camera and a RADAR sensor system that are arranged in the vehicle to be directed towards an oncoming traffic.

[0044] With reference to Fig. 2 and Fig. 4, the artificial intelligence sensor system 10 further comprises an artificial neural network 18 that is configured for receiving and processing signals from sensors of the two sensor systems 12, 14. The artificial neural network 18 includes two neural network modules 26, 34 designed as deep neural networks (DNN), each DNN having an input side 28, 36 connected to an input layer 20, an output side 30, 38 connected to an output layer 22 and a plurality of intermediate layers 24 between the input layer 20 and the output layer 22, wherein each one of the layers 20, 22, 24 comprises a plurality of interconnected artificial neurons, as is well known in the art.

[0045] Each one of the two sensor systems 12, 14 is operatively connected to the input side 28, 36 of one of the neural network modules 26, 34. A first one 26 of the two neural network modules 26, 34 is configured to provide at its output side 30 an output representing a quality with a confidence level with regard to at least one object to be monitored or surveilled by applying a trained task to signals xA that have been received from the optical camera 12. A second one 34 of the two neural network modules 26, 34 is configured to provide at its output side 38 an output representing a quality with a confidence level with regard to at least one object to be monitored or surveilled by applying a trained task to signals xB that have been received from the RADAR sensor system 14. As will be described later, the task is trained by executing a method in accordance with the invention.

[0046] It is noted herewith that the terms “first”, “second”, etc. are used in this application for distinction purposes only, and are not meant to indicate or anticipate a sequence or a priority in any way.

[0047] The artificial neural network 18 further includes an external memory unit 42. Data are independently exchanged between the two neural network modules 26, 34 and the external memory unit 42 by means of individually assigned, cooperating data management systems 32, 40 (Fig. 3) that form part of the artificial intelligence sensor system 10.

[0048] In the following, an embodiment of a method in accordance with the invention of operating the artificial intelligence sensor system 10 for supervised training purposes will be described with reference to Fig. 3, which schematically illustrates a chronological sequence of steps of the method that are to be executed iteratively. In preparation of operating the artificial intelligence sensor system 10, it shall be understood that all involved units and devices are in an operational state and configured as illustrated in Figs. 1 and 2.

[0049] Furthermore, in a preceding step the artificial neural network 18 has been provided with initial permanently stored training results in an offline mode. The initial permanently stored and labeled training results reside in the external memory unit 42 of the artificial neural network 18.

[0050] In a first step 50 of the method, signals xA,t1 from the optical camera 12, usually representing the contents of at least one image, are provided at a time t1 as input data to the first neural network module 26. Simultaneously or virtually simultaneously, signals xB,t1 from the RADAR sensor system 14 are provided as input data to the second neural network module 34 in another step 52.

[0051] In a next step 54, the first neural network module 26 is operated to derive, e.g. based on labeled training data resident within the external memory unit 42 of the artificial neural network 18, an output representing a quality with a confidence level regarding the provided optical camera signals xA,t1. For the optical camera 12, the quality is given by classes such as, but not being limited to, “adult”, “child”, “pet”, “empty child restraint system” and “occupied child restraint system”. At the same time, in another step 56, the second neural network module 34 is operated to derive, e.g. based on labeled training data resident within the external memory unit 42 of the artificial neural network 18, an output representing a quality with a confidence level regarding the provided RADAR sensor system signals xB,t1. For the RADAR sensor system 14, the quality is given by the same classes as for the optical camera 12, but derived e.g. from a breathing amplitude determined by the RADAR sensor system 14.

[0052] For the sake of argumentation it is assumed that the confidence level of the class derived from the RADAR sensor system signals xB,t1 by the second artificial neural module 34 is equal to or larger than a predetermined confidence level. In this case, the provided signal and the derived quality are permanently stored in the external memory unit 42 as labeled online training data, using the derived quality as the label, in another step 58 that is executed by the data management system 40 assigned to the second artificial neural module 34.

[0053] For the sake of argumentation it is further assumed that the confidence level of the class derived from the optical camera signals xA,t1 by the first artificial neural module 26 is lower than a predetermined confidence level. This can for instance be due to bad illumination in the car interior. In this case, the data management system 32 that is assigned to the first artificial neural module 26 may temporarily store the optical camera signals xA,t1 and the derived quality, i.e. class, in the external memory unit 42 in another step 60 of the method.

[0054] In a next step then, the class derived from the optical camera signals xA,t1 having the derived confidence level that is lower than the predetermined confidence level is confirmed by use of an independent sensor signal.

[0055] In the embodiment, the step 62 of confirming is carried out by use of the signals xB,t1 of the RADAR sensor system 14 as the independent sensor signal, which were taken at the same time t1 and on the basis of which the second artificial neural module 34 derived a class with a confidence level that is equal to or larger than the predetermined confidence level. In the example, thus, although it cannot be derived from the optical camera signals xA,t1 with a large enough confidence level whether the vehicle seat 46 (Fig. 1) may be occupied by an adult, a child or a pet, the signals xB,t1 from the RADAR sensor system 14 allow for a classification with a sufficiently large confidence level that the vehicle seat 46 is occupied by an adult. After completion of the step 62 of confirming, in which the quality derived from the camera signals xA,t1 and the quality derived from the RADAR sensor system signals xB,t1 are affirmatively checked to be identical, the temporarily stored signals xA,t1 provided by the optical camera 12 at time t1 and the derived quality are permanently stored in the external memory unit 42 by the assigned data management system 32 as labeled online training data in another step 64, using the derived quality as the label.

[0056] The method further comprises executing an online supervised training phase at least once in a predetermined time period, which in this specific embodiment is one week, by using the permanently stored labeled online training data that are available at that point of time.

[0057] It can in principle not be taken for granted that executing online training phases leads to improved system performance and reliability. Hence, for monitoring and controlling the effect of online training phases the method comprises a step of performance control by validating a current online training status, using a validation sensor data set with assigned correct labels that resides within the external memory unit 42 of the artificial neural network 18.

[0058] A validation data set Xval with labels Yval has been installed in the external memory unit 42, and a desired reference performance rev_val has been predefined. After each execution of an online supervised training phase, the artificial neural modules 26, 34 process the data set Xval and the derived outcome is compared with the correct labels Yval to calculate a current performance curr_val. The latter is then compared to the reference performance rev_val. Based on the result of the comparison, the artificial intelligence sensor system 10 decides whether to accept or to reject the adaptations performed during the executed online supervised training phase.

[0059] In a potential embodiment, a corresponding pseudo code may look as follows:

[0060] Notation:

(Xval, Yval) - validation samples

(Xonline, Yonline) - online recorded training samples

Mθ - AI module with trainable parameters θ

Mθ(X) - application of the AI module on input data set X

θ0 - parameters of the module operating in the system

d(Mθ(X), Y) - suitable distance measure to compare the output Mθ(X) with a desired output Y

R(Mθ, X, Y) - online (re-)training function of the AI module with current parameters θ. This function returns the new parameters θ′ defining the (re-)trained model Mθ′.

[0061] Pseudo code:

if number of (Xonline, Yonline) > retrain threshold

    θ̃ = R(Mθ0, Xonline, Yonline)

    if d(Mθ̃(Xval), Yval) < acceptance threshold

        θ0 = θ̃

    Clear (Xonline, Yonline)
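The accept/reject logic of the pseudo code above can be sketched in plain Python. All names here (online_training_step, retrain, apply_module, distance, the thresholds) are illustrative placeholders and not part of the disclosure; retraining and the distance measure are passed in as callables so that any of the measures discussed below could be plugged in.

```python
# Sketch of the validation-gated online training step: retrain when enough
# labeled online samples have accumulated, and accept the new parameters
# only if the validation performance clears the acceptance threshold.
# All names are illustrative assumptions, not taken from the patent.

def online_training_step(theta0, X_online, Y_online, X_val, Y_val,
                         retrain, apply_module, distance,
                         retrain_threshold, acceptance_threshold):
    """Return the (possibly updated) deployed parameters theta0."""
    if len(X_online) > retrain_threshold:
        # (Re-)train starting from the currently deployed parameters.
        theta = retrain(theta0, X_online, Y_online)
        # Validate the candidate parameters on the stored validation set.
        if distance(apply_module(theta, X_val), Y_val) < acceptance_threshold:
            theta0 = theta  # accept the adaptation
        # Whether accepted or rejected, clear the online buffer.
        X_online.clear()
        Y_online.clear()
    return theta0
```

Passing the buffers in as mutable lists mirrors the Clear(Xonline, Yonline) step of the pseudo code: the caller's buffers are emptied in place after each retraining attempt.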

[0062] Suitable distance measures to evaluate the performance of the online-trained artificial neural modules 26, 34 could be the following:

1. d(Mθ(X), Y) = ||Mθ(X) − Y||, with ||·|| denoting some mathematical norm like the Euclidean norm.

2. d(Mθ(X), Y) = Σ(x,y) wx,y ||Mθ(x) − y||, with ||·|| denoting some mathematical norm like the Euclidean norm and wx,y denoting weights to encode the importance of different sample-label pairs (x, y) for the system performance.

3. Versions of 1. or 2. with the norm ||·|| being replaced by some other distance measure of the output (like cross-entropy, Kullback-Leibler divergence or any other distance measure that appears to be suitable to those skilled in the art).

4. d(Mθ(X), Y) could as well be a whole programmed module that performs a dedicated comparison of Mθ(X) and Y.
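Options 1 to 3 above can be illustrated with minimal Python implementations; the function names and the choice of the absolute value as the per-sample norm in option 2 are illustrative assumptions.

```python
import math

# Illustrative implementations of the distance measures listed above.
# Names are placeholders, not taken from the disclosure.

def euclidean_distance(outputs, targets):
    """Option 1: Euclidean norm of the output-target difference."""
    return math.sqrt(sum((o - t) ** 2 for o, t in zip(outputs, targets)))

def weighted_distance(outputs, targets, weights):
    """Option 2: per-pair weights w_{x,y} encode the importance of each
    sample-label pair for the system performance."""
    return sum(w * abs(o - t) for o, t, w in zip(outputs, targets, weights))

def cross_entropy(probs, target_probs, eps=1e-12):
    """Option 3: cross-entropy between predicted and target distributions
    (eps guards against log(0))."""
    return -sum(t * math.log(p + eps) for p, t in zip(probs, target_probs))
```

Option 4, a dedicated comparison module, would simply be any callable with the same (outputs, targets) signature.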

[0063] A substantial reduction of hardware and computational costs is achieved as, during the step of executing an online supervised training phase, only preselected layers of the plurality of layers 20, 22, 24 of the DNNs are being trained (Fig. 4). The training is implemented in the form of an optimization problem, wherein an objective function encodes that the training input should be mapped to the desired output, namely the labels, with respect to some distance. A detailed description can be found for instance in Goodfellow, I., Bengio, Y., & Courville, A.: "Deep Learning", The MIT Press, Cambridge, MA, USA (2016), ISBN 978-0262035613, which shall hereby be incorporated by reference in its entirety with effect for those jurisdictions permitting incorporation by reference.
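The formulation of training as an optimization problem can be made concrete with a toy example: gradient descent minimizing a distance-based objective d(Mθ(X), Y). A one-parameter linear model stands in for the DNN here; the function name, learning rate, and step count are didactic assumptions only.

```python
# Toy illustration of training as an optimization problem: gradient descent
# on the squared-error objective sum_i (theta * x_i - y_i)^2, i.e. the
# training inputs are mapped toward the labels with respect to a distance.

def train_by_gradient_descent(xs, ys, theta=0.0, lr=0.01, steps=500):
    """Minimize sum_i (theta * x_i - y_i)^2 over the scalar theta."""
    for _ in range(steps):
        # Analytic gradient of the squared-error objective w.r.t. theta.
        grad = sum(2 * (theta * x - y) * x for x, y in zip(xs, ys))
        theta -= lr * grad
    return theta
```

In the system described here, theta would be the (much larger) set of weights of the preselected layers, and the objective would use one of the distance measures from paragraph [0062].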

[0064] A general structure of the artificial neural modules 26, 34 designed as deep neural networks (DNN) of the artificial neural network 18 of the artificial intelligence sensor system 10 pursuant to Fig. 1 is provided in Fig. 4.

[0065] Within the additional steps of the method, only the parameters of some preselected deeper layers are being adjusted, which in this specific embodiment are all the parameters corresponding to layer k+1 to layer L. Firstly, this has the advantage that the effort for optimization can be significantly reduced with regard to computational costs, as fewer parameters must be adapted. Secondly, the architecture of the DNN is chosen such that the input dimension to layer k is significantly lower than din. In this specific embodiment, only the parameters in the last layers that receive data of dimension < 100 are adjusted, whereas the input dimension din of an image of the optical camera 12 is in the range of several thousands. This has the advantage that the labeled online training data acquired during the lifetime of the artificial intelligence sensor system 10 by the methods described above can be stored in a compressed way, namely in the form of the lower-dimensional output of layer k.
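The storage saving described above can be sketched as follows: instead of keeping the raw din-dimensional sensor input, only the dk-dimensional output of the frozen front part (layers 1 to k) is stored, together with its label, for later retraining of layers k+1 to L. The toy linear encoder and all names below are illustrative assumptions standing in for the actual DNN front end.

```python
# Sketch of compressed storage of labeled online training data: samples are
# stored as the low-dimensional output of the frozen layers 1..k (M_A,k),
# not as raw sensor input. The linear map is a toy stand-in for those layers.

def encode_to_layer_k(x, frozen_weights):
    """Frozen front part M_A,k: maps a d_in-dimensional input to d_k
    dimensions via a (toy) linear map, one weight row per output unit."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in frozen_weights]

def store_compressed(buffer, x, label, frozen_weights):
    """Append only the d_k-dimensional encoding plus its label."""
    buffer.append((encode_to_layer_k(x, frozen_weights), label))
```

Because only layers k+1 to L are retrained, the stored encodings remain valid inputs for every later online training phase; the raw camera images never need to be kept.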

[0066] For this specific embodiment of the method, the pseudo code that realizes the steps of

data labeling,

validation of online training and performance control, and

efficient online training

may be given by the following.

[0067] Pseudo code for online training of the neural network module 26 (module A) that processes data of the optical camera 12 (sensor A), utilizing the information from the neural network module 34 (module B) that processes data from the RADAR sensor system 14 (sensor B). Both modules should perform the same classification/prediction task but based on different input data (different sensors).

[0068] Notation:

xA, xB - data received at sensor A, respectively sensor B

MB - module B

MA,θ - module A, an implementation of neural network module 26 as described, with online trainable parameters θ that define only layers k+1 to L

MB(x) - output of module B when applied to data x

MA,θ(x) - output of module A when applied to data x

MA,k(x) - output of the neural network module 26, that represents module A, at layer k. It is assumed that its dimension dk is significantly lower than the input dimension din.

MA,θ,k,L(xk) - output of the network consisting only of layers k+1 to L when applied to data xk having dimension dk, i.e. xk = MA,k(x) with x being some data sensed by sensor A

(Xonline, Yonline) - online recorded training samples and labels stored as input for MA,θ,k,L (dk-dimensional)

(Xval, Yval) - validation data set for module A, stored as input for MA,θ,k,L (dk-dimensional). That is, offline generated/measured sensor A data on which the encoding of MA,k has been applied.

θ0 - parameters of module A operating in the system

d(Mθ(X), Y) - suitable distance measure to compare the output MA,θ(X) with a desired output Y

R(MA,θ,k,L, X, Y) - online (re-)training function of layers k+1, ..., L of module A with current parameters θ. This function returns the new parameters θ̃ defining the trained model MA,θ̃,k,L (which changes the whole module, as MA,θ̃ = MA,θ̃,k,L ∘ MA,k, with ∘ denoting the composition of the two network parts)

while lifetime of the system

    if MA,θ0(xA) is below minimum confidence and MB(xB) gives y with sufficiently high confidence

        Write (MA,k(xA), y) to (Xonline, Yonline)

    if number of samples/labels (Xonline, Yonline) > retrain threshold

        θ̃ = R(MA,θ0,k,L, Xonline, Yonline)

        if d(MA,θ̃,k,L(Xval), Yval) < acceptance threshold

            θ0 = θ̃

        Clear (Xonline, Yonline)
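The cross-sensor labeling condition at the top of the loop above can be sketched in Python: module B's (RADAR) confident prediction labels a sample on which module A (camera) is uncertain, and only the dk-dimensional layer-k encoding is buffered. The function name and threshold values are illustrative assumptions.

```python
# Sketch of the cross-sensor labeling step: record an online training sample
# only when module A is unsure but module B is sufficiently confident.
# Names and default thresholds are illustrative, not from the disclosure.

def maybe_record_sample(x_a_encoded, conf_a, y_b, conf_b,
                        buffer, min_conf_a=0.5, min_conf_b=0.9):
    """Append (encoded camera sample, RADAR-derived label) to the online
    buffer when the recording condition holds; report whether it did."""
    if conf_a < min_conf_a and conf_b >= min_conf_b:
        buffer.append((x_a_encoded, y_b))
        return True
    return False
```

The retraining half of the loop then reuses the same validation-gated acceptance step as in paragraph [0061], applied only to layers k+1 to L.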

[0069] While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.

[0070] Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality, which is meant to express a quantity of at least two. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

List of Reference Symbols

10 AI sensor system
12 optical camera
14 RADAR sensor system
16 RADAR transceiver
18 artificial neural network
20 input layer
22 output layer
24 intermediate layer
26 neural network module
28 input side
30 output side
32 data management system
34 neural network module
36 input side
38 output side
40 data management system
42 external memory unit
44 vehicle
46 vehicle seat
48 vehicle occupant
xA,1 optical camera signals acquired at time t1
xA,2 optical camera signals acquired at time t2
xB,1 RADAR sensor system signals acquired at time t1

Method steps:

50 provide optical camera signals to first neural network module
52 provide RADAR sensor system signals to second neural network module
54 operate first neural network module to derive output
56 operate second neural network module to derive output
58 permanently store RADAR sensor system signals and derived quality
60 temporarily store optical camera signals and derived quality
62 confirm class derived from optical camera signals by use of RADAR sensor system signals
64 permanently store temporarily stored optical camera signals and derived quality