WO2020108785

METHOD AND DEVICE FOR TRAINING A NEURAL NETWORK TO SPECIFY LANDMARKS ON 2D AND 3D IMAGES

FIELD OF THE INVENTION

[0001] The present invention generally relates to two-dimensional or three-dimensional image data and in particular how to specify landmarks therein by the calculation of presence, position and precision values.

BACKGROUND OF THE INVENTION

[0002] The following discussion of the prior art is intended to facilitate an understanding of the invention and to enable its advantages to be more fully understood. It should be appreciated, however, that any reference to prior art throughout the specification should not be construed as an express or implied admission that such prior art is widely known or forms part of common general knowledge in the field.

[0003] The recognition and positioning of landmarks on images is an important task in many fields. Often such recognition and positioning of landmarks is performed manually or semi-automatically. Computer-aided systems for automatically recognizing and positioning landmarks on images are, due to their complexity and variety, one of many areas of the computer vision discipline that require continuous adaptations and improvements.

[0004] Machine learning with deep neural networks is driving the development of computer vision, bringing greater speed and accuracy to tasks such as image landmark positioning. On the strength of this growth, technical solutions across a wide range of sectors (e.g. healthcare, quality control) are putting image processing applications to work to improve existing processes and automate human-driven tasks.

[0005] In itself the general training of neural networks is well-known, whereas the training of neural networks for landmark positioning is a process with significant improvement potential. In previous methods, when training neural networks for landmark positioning, no information is provided on the presence of the landmarks, nor are there any indications as to the precision of the landmarks’ positioning. In particular, a value for the precision of the landmarks is important in some use cases in order to be able to initiate optimization or correction processes. It is common practice to do landmark positioning with neural networks, whereby the presence of landmarks is assumed or verified beforehand.

[0006] The object of the present invention is to overcome the above detailed problems of the prior art, or at least to provide a useful alternative.

SUMMARY OF THE INVENTION

[0007] The present invention solves the above object with the subject-matter of the independent claims, and in particular, the presence of landmarks is classified within the same network and is used to strengthen the precision of the landmarks’ positioning. The dependent claims comprise advantageous further embodiments. In particular, with the present invention, the focus is on training neural networks for a fully automatic positioning of landmarks and their extension to presence and precision statements.

[0008] The presence of landmarks can be defined in one or more of the following aspects: inside or outside, concealed or visible, existing or non-existing. In more detail, inside or outside means whether the landmark position coordinate is inside of the processed image or outside of the processed image. The location of landmarks that are determined to be outside of an image can be determined for example by extrapolation. As a simple example, the location of an eye that is not on the image can be determined with high probability if the other eye and the nose are on the image.

[0009] Regarding whether a landmark is visible or concealed, an example could be an ear that is covered by hair. The landmark would be inside the image, if the image shows the part of the head, but concealed. The position of the landmark could be determined, wherein the precision of the location depends on whether for example the other ear can be seen on the image as would be the case in a frontal image, or whether this is not the case if the person is shown from the side only. Another example would be an eye that is covered by opaque sunglasses.

[0010] Regarding whether the landmark exists on the image or not, an example is an image of an unrelated object, where facial features do not exist.

[0011] Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are intended to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense, i.e. "including, but not limited to".

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The above, as well as additional objects, features and advantages of the present invention, will be better understood through the following illustrative and nonlimiting detailed description of embodiments of the present invention, with reference to the appended drawings, where the same reference numerals will be used for similar elements, wherein:

Fig. 1 shows simplified examples of two-dimensional and three-dimensional images;

Fig. 2 shows a neural network processing an input image and generating as output three vectors with position, precision and presence values specifying the landmarks related to the input image;

Fig. 3 shows the neural network’s high-level components and connections;

Fig. 4 shows the internal structure of a sub neural network and how to extract a feature vector;

Fig. 5 shows the data flow in relation to three sub neural networks, while decoding a feature vector into landmark specifications;

Fig. 6 shows the internal structure of a sub neural network to calculate landmark positions, while factoring in landmarks’ presence values;

Fig. 7 shows a loss function to compute precision loss values out of landmark positions;

Fig. 8 shows a visualization of ground truth landmark maps and example prediction maps of landmarks reflecting positions and precision;

Fig. 9 shows an extension of Figure 2 by factoring in landmarks’ pre-positioning as input data;

Fig. 10 shows an extension of Figure 3 by factoring in landmarks’ pre-positioning as input data;

Fig. 11 shows the internal structure of a sub neural network to calculate landmark positions, while factoring in pre-positioned landmark values;

Fig. 12 shows an extension of Figure 9 by factoring in landmarks’ pre-positioning as input data in a recursive variant;

Fig. 13 shows an example of landmarks being outside of the input image, while being inside of the ground truth landmark map;

Fig. 14 shows a loss function processing landmarks data, whereby individual missing landmark elements within the landmark ground truth are ignored;

Fig. 15 shows different variants of ground truth landmarks reflected as ground truth points, ground truth lines, and/or ground truth regions.

DETAILED DESCRIPTION

[0013] Figure 1 shows, as a simplified example, a two-dimensional 110 and three-dimensional 130 image, including an insight into the internal structure 120, 140, 145 of the images. The images show abstract faces and are intended to serve as a facial landmark demonstrator, whereby a neural network shall specify landmarks of the abstract faces. However, the landmarks can also be anatomical, fiducial, statistical, mathematical, geometrical, facial, key-point or pseudo landmarks. While the purpose and the application of the different landmarks might be different, the underlying technological structures are the same. The internal structure 120 of the two-dimensional image is represented as a two-dimensional matrix, whereby the matrix elements are called pixels and can be addressed by x and y coordinates. The internal structure 140, 145 of the three-dimensional image is represented as a three-dimensional matrix, whereby the matrix elements are called voxels and can be addressed by x, y and z coordinates. In the examples the pixels and voxels have integer values ranging from 0 to 255 for grey-scale images with a single colour channel, whereas for RGB-colour images the pixels and voxels have three channels.

[0014] It is common practice to normalize all image values to floating point values ranging from 0 to 1 or ranging from -1 to 1 before they are further processed by any neural network activity.

[0015] Image normalization function 1, mapping to [0, 1]: $x_{norm} = \frac{x}{255}$

[0016] Image normalization function 2, mapping to [-1, 1]: $x_{norm} = \frac{2x}{255} - 1$
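
A minimal Python sketch of both normalization functions (NumPy assumed; function names are illustrative); the same code applies to two-dimensional pixel and three-dimensional voxel arrays:

```python
import numpy as np

def normalize_01(image: np.ndarray) -> np.ndarray:
    """Image normalization function 1: map 8-bit values [0, 255] to [0, 1]."""
    return image.astype(np.float32) / 255.0

def normalize_pm1(image: np.ndarray) -> np.ndarray:
    """Image normalization function 2: map 8-bit values [0, 255] to [-1, 1]."""
    return image.astype(np.float32) * (2.0 / 255.0) - 1.0
```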

[0017] Representatively, some of the other figures will use only the two-dimensional image as input data to describe the functionality and training of the neural network.

[0018] Figure 2 shows a neural network 220 processing an input image 210 and generating as output three vectors with presence values 230, position values 240, and precision values 250 specifying the landmarks in relation to the input image. In relation to the image can mean on the image and the position on the image, but it can also mean outside of the image and the position with reference to the image. In computer-aided automation systems for positioning of landmarks on images this is alternatively also referred to as location, detection, registration, tracing or recognition of landmarks. The present invention not only determines the position of the landmarks, but also provides landmarks’ presence and precision values, also referred to as specifying landmarks or landmark specification hereafter. In the following the neural network will be explained in more detail with reference to the corresponding Figures.

[0019] Figure 3 shows, by way of example, a neural network’s high-level hidden layer components 320, 330, 340, the neural network’s input 310, the neural network’s aggregated output 350 and the neural network’s high-level connection flow. The hidden layer components labelled with the term "sub neural network" will be used in this document to describe a set of "connected hidden neural network layers" within a neural network. A sub neural network 320 extracts image features from input image data 310 and encodes the features into a feature vector 330 as an internal intermediate result. The extraction process and the feature vector will be explained in more detail with reference to figure 4. The sub neural network 340 will decode the feature vector 330 generated by 320. The decoding process of a feature vector will be explained in more detail with reference to figures 5, 6, and 7, whereby the feature vector encodes all information to specify landmarks’ presence, position and precision.

[0020] Figure 4 shows, by way of example, a sub neural network of a neural network to extract a feature vector 430 from a two-dimensional input image 410. In general, a feature is an individual measurable property or characteristic of a phenomenon being observed. Choosing informative, discriminating and independent features is a crucial step for solving classification and regression problems with neural networks. A feature vector is a vector containing multiple features. The features may represent a pixel or a whole object in an image. Examples of features are colour components, length, area, circularity, gradient magnitude, gradient direction, or simply the grey-level intensity value. What exactly a neural network considers an important feature is derived throughout learning. The convolution layers used in the feature extraction 420 find and store those features in feature maps. A certain combination of features in a certain area can signal a larger, more complex feature that exists there. A feature map can be redundant and too large to be managed efficiently. Therefore, a preliminary step in many applications of machine learning consists of subsampling feature maps by selecting a subset of features or by constructing a new and reduced set of features to facilitate learning, as well as to improve generalization and interpretability. This does not limit the invention to two-dimensional input: to enable feature extraction from three-dimensional image data, the convolutions can be used, for example, with three-dimensional kernel filters or in combination with recurrent capabilities like long short-term memory networks.
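
A minimal sketch of such a feature-extracting sub neural network, assuming PyTorch; the layer counts, channel sizes and feature vector length are illustrative assumptions, not values fixed by the invention:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Sketch of sub neural network 420: convolutions find features, pooling
    subsamples the feature maps, and a final flatten/linear step encodes them
    into a fixed-size feature vector (430)."""

    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                   # subsample the feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),      # fixed spatial size for any input
        )
        self.encode = nn.Linear(32 * 4 * 4, feature_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 1, H, W), normalized to [0, 1] or [-1, 1]
        maps = self.conv(image)
        return self.encode(maps.flatten(start_dim=1))
```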

[0021] Figure 5 shows, by way of example, the data flow in relation to three sub neural networks 522, 524, 526 of a neural network 220, 920, 1220 to decode a feature vector 510 into vectors with presence values 530, position values 540, and precision values 550. In general, there are various well-known ways to build and train such sub neural networks (e.g. with fully-connected layers) in isolation, i.e. without the vertical connections indicating value input, for example from the landmark presence values 530 to the sub neural network for position 524, or from the landmark position values 540 to the precision loss function 580. The sub neural network for presence 522, as the name implies, solves a classification problem for determining the presence of landmarks, whereby the output layer could be implemented as a sigmoid activation function returning probability values between [0, 1], whereby values closer to zero indicate a landmark might not be present on the image and values closer to one indicate a landmark might be present on the image. The output layer with the landmarks’ presence values also serves as input for the sub neural network for position 524. The sub neural network for position 524, as the name implies, solves a regression problem for determining the position of landmarks, whereby the output layer could be implemented as a sigmoid activation function returning prediction values between [0, 1], whereby a value represents an element of a coordinate, whereby a value pair forms a normalized coordinate for two-dimensional images and a value triple forms a normalized coordinate for three-dimensional images. A normalized coordinate can be converted either to a pixel or voxel coordinate.

[0022] Convert normalized coordinate to pixel: $(x_{px}, y_{px}) = \big(\mathrm{round}(x_n \cdot (W - 1)),\ \mathrm{round}(y_n \cdot (H - 1))\big)$, where $W$ and $H$ denote the image width and height in pixels.

[0023] Convert normalized coordinate to voxel: $(x_{vx}, y_{vx}, z_{vx}) = \big(\mathrm{round}(x_n \cdot (W - 1)),\ \mathrm{round}(y_n \cdot (H - 1)),\ \mathrm{round}(z_n \cdot (D - 1))\big)$, where $D$ denotes the image depth in voxels.
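
A Python sketch of these two conversions, following the reconstructed formulas above (function names are illustrative):

```python
def to_pixel(x_n: float, y_n: float, width: int, height: int):
    """Convert a normalized (x, y) coordinate in [0, 1] to pixel indices."""
    return round(x_n * (width - 1)), round(y_n * (height - 1))

def to_voxel(x_n: float, y_n: float, z_n: float,
             width: int, height: int, depth: int):
    """Convert a normalized (x, y, z) coordinate in [0, 1] to voxel indices."""
    return (round(x_n * (width - 1)),
            round(y_n * (height - 1)),
            round(z_n * (depth - 1)))
```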

[0024] The output layer with the landmarks’ position values also serves as input for the loss function 580 (see also 730 in Figure 7) of a sub neural network for precision 526. The sub neural network for precision 526, as the name implies, solves a regression problem for determining the precision of landmarks’ positioning, whereby the output layer could be implemented as a sigmoid activation function returning prediction values between [0, 1], whereby a value closer to zero indicates a landmark might be imprecisely positioned on the image and a value closer to one indicates a landmark might be precisely positioned on the image. Also shown are loss functions for presence values 560 and for position values 570.
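
A sketch of the three decoding sub neural networks of Figure 5, assuming PyTorch and the fully-connected layers suggested above; hidden layer sizes and the landmark count are assumptions:

```python
import torch
import torch.nn as nn

class LandmarkDecoder(nn.Module):
    """Sketch of the decoder of Figure 5: three sub neural networks turn a
    feature vector (510) into presence (530), position (540) and precision
    (550) values, with the presence output feeding the position head."""

    def __init__(self, feature_dim: int = 256, n_landmarks: int = 5, dims: int = 2):
        super().__init__()
        self.presence = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, n_landmarks), nn.Sigmoid())         # classification in [0, 1]
        self.position = nn.Sequential(
            nn.Linear(feature_dim + n_landmarks, 128), nn.ReLU(),
            nn.Linear(128, n_landmarks * dims), nn.Sigmoid())  # normalized coordinates
        self.precision = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, n_landmarks), nn.Sigmoid())         # precision in [0, 1]

    def forward(self, features: torch.Tensor):
        presence = self.presence(features)
        # presence values are concatenated with the features (cf. 633 in Figure 6)
        position = self.position(torch.cat([features, presence], dim=1))
        precision = self.precision(features)
        return presence, position, precision
```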

[0025] Figure 6 shows, by way of example, the internal structure of a sub neural network 630 to calculate landmark positions 640, while factoring in landmarks’ presence values 633, 636. The sub neural networks 631, 634 and neural network layers 632, 635 could be implemented as fully-connected layers. Typically, the sub neural network 631 has only few layers or may not exist at all. In that case, the feature vector 610 is directly processed by the neural network layer 632. Factoring in landmarks’ presence values means to concatenate 633 the landmarks’ presence 620 and the neural network layer 632, thereby forming a neural network layer with a layer size of the size of the landmarks’ presence 620 plus the size of the neural network layer 632, which is fully-connected with neural network layer 634, whereby the neural network layer 634 may profit from the additional information provided by the landmarks’ presence 620. Furthermore, factoring in can optionally comprise merging 636 the landmarks’ presence vector 620 and the neural network layer 635 element-wise, such that the result will have the same layer size, thereby forming a neural network layer with the same layer size as the landmarks’ presence vector 620 or the neural network layer 635. Within the merge process, the landmarks’ presence vector’s 620 values will be binarized by a threshold to zero or one values, whereby an element from the neural network layer 635 will stay unchanged, if the corresponding binarized element from the landmarks’ presence 620 has a value of one, whereby an element from the neural network layer 635 will be set to a value representing undefined (e.g. -1, null, undefined, none), if the corresponding binarized element from the landmarks’ presence 620 has a value of zero. It should be noted that similar processing can be applied to the precision and presence values of the landmarks.
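
A sketch of the element-wise merge 636 in Python (PyTorch assumed; the threshold of 0.5 and -1 as the undefined marker follow the examples in the paragraph above):

```python
import torch

def merge_presence(layer: torch.Tensor,
                   presence: torch.Tensor,
                   threshold: float = 0.5,
                   undefined: float = -1.0) -> torch.Tensor:
    """Element-wise merge 636: binarize the presence vector by a threshold;
    keep a layer element where the binarized presence is one, set it to the
    'undefined' marker where it is zero. Both tensors share the same size."""
    mask = (presence >= threshold).to(layer.dtype)   # binarized presence vector
    return layer * mask + undefined * (1.0 - mask)
```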

[0026] Figure 7 shows, by way of example, a loss function 730 to compute precision loss values out of landmark positions 710 (540 in Figure 5) within the neural network training. In general, a neural network is trained to minimize the loss values, whereby the loss is computed from the calculated output values of the neural network and a priori known ground truth values 740. The ground truth values for a precision loss computation are unknown and can be determined 731 after the position of the landmarks is generated by the neural network, whereby the ground truth position 740 is used. Two examples on how to determine precisions 731 are given below.

[0027] Simple determine precisions 1, 731: $precision = 1 - |cp - gt|$, where $cp$ denotes a calculated position value and $gt$ the corresponding ground truth position value, both normalized to [0, 1].

[0028] The “simple determine precisions 1” above will get close to one for all calculated positions of the landmarks, if the corresponding neural network performs well.

[0029] Static scaled determine precisions 2, 731: $precision = \max\big(0,\ 1 - scale \cdot |cp - gt|\big)$

[0030] The “static scaled determine precisions 2” above will be zero for all calculated positions of the landmarks until the scale factor is reached (for example if scale is 100, the absolute difference of cp and gt must be below 0.01 (1 percent) to reach a precision greater than zero). An appropriate scale value can be derived through heuristics.

Predetermined precision values 720 can be taken into account together with the determined precision values 731 by the loss calculation 732 to calculate the precision loss.
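
The two determination rules as a Python sketch, following the reconstructed formulas above (NumPy assumed; cp and gt are arrays of normalized position values):

```python
import numpy as np

def simple_precision(cp: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Determine precisions 1 (731): close to one when the calculated
    positions cp are near the ground truth positions gt."""
    return 1.0 - np.abs(cp - gt)

def scaled_precision(cp: np.ndarray, gt: np.ndarray,
                     scale: float = 100.0) -> np.ndarray:
    """Determine precisions 2 (731): zero until |cp - gt| falls below
    1/scale, then rising linearly towards one."""
    return np.maximum(0.0, 1.0 - scale * np.abs(cp - gt))
```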

[0031] Figure 8 shows, by way of example, a visualization of ground truth landmark maps 810, 812 and example prediction maps of landmarks 820, 830 reflecting positions and precision. In the descriptions before and after (unless specifically indicated) the positions of landmarks were defined by coordinate values, whereas in this figure we use maps instead to provide a visualized representation. The sub neural networks 520 could be implemented using upsampling neural network layers. When comparing prediction maps 820 and 830, it is obvious that prediction map 820 gives a more confident landmark position than 830. The ground truth landmark map 812 shows a possible implementation of how to predict positions outside of the input image. This is described in more detail below with reference to Figure 13.
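
For illustration, one common way to build such a map-style ground truth is a peaked kernel centred on the landmark, whose spread reflects positional (im)precision; the Gaussian shape in the sketch below is an assumption for demonstration, not a shape fixed by the invention:

```python
import numpy as np

def landmark_map(width: int, height: int,
                 x_norm: float, y_norm: float, sigma: float = 2.0) -> np.ndarray:
    """Build a map for one landmark: a peak at the landmark position whose
    width (sigma) reflects precision; a narrow peak corresponds to a more
    confident map such as 820, a wide one to a map such as 830."""
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = x_norm * (width - 1), y_norm * (height - 1)
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```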

[0032] Figure 9 shows, by way of example, an extension of figure 2, whereby the neural network 920 (220 in Figure 2) factors in a set of pre-positioned landmarks 960 as additional input data, whereby pre-positioned landmarks as additional input data should be used to strengthen the neural networks’ output. Pre-positioned landmarks 960 can be supplied to the neural network or generated by the neural network during a separate operation.

[0033] Figure 10 shows, by way of example, an extension of figure 3, whereby the neural network factors in a set of pre-positioned landmarks 1060 as additional input data, whereby pre-positioned landmarks as additional input data will be computed within the sub neural network 1040, whereby the input connection from 1060 to 1040 would shift to a connection from 1060 to 1020 when the pre-positioned landmarks 1060 are in map format, as can be seen in Figure 8.

[0034] Figure 11 shows, by way of example, the internal structure of a sub neural network 1130 (524 in Figure 5) to calculate landmarks’ positions 1140, while factoring in pre-positioned landmarks’ values 1120, whereby figure 6 describes a similar internal structure of a sub neural network except for the merge computations 1136 (636 in Figure 6). Here the merge 1136 is optional when considering pre-positioned landmarks and it is only meaningful in case the landmarks’ pre-positions 1120 exist in the generated landmarks’ positions 1140, too. Otherwise the merge is skipped and the neural network layer 1135 is directly connected to 1140. The landmarks’ pre-position vector 1120 and the neural network layer 1135 have the same layer size and will be element-wise merged, forming a neural network layer with the same layer size as 1120 or 1135, whereby an element-wise merge will be skipped, if the landmarks’ pre-position 1120 values are marked as "ignore pre-positioned" (e.g. by a negative value, like minus one), otherwise the landmarks’ pre-position 1120 values will override the values from the neural network layer 1135. The remaining elements are similar to the description of Figure 6, in particular, the concatenation 1133 (633 in Figure 6). It should be noted that similar processing can be applied to the precision and presence values of the landmarks.
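
A sketch of the element-wise merge 1136 in Python (PyTorch assumed; -1 is used as the "ignore pre-positioned" marker, as in the example above):

```python
import torch

def merge_prepositions(layer: torch.Tensor,
                       pre_positions: torch.Tensor,
                       ignore_value: float = -1.0) -> torch.Tensor:
    """Element-wise merge 1136: where a pre-position is marked as 'ignore
    pre-positioned', keep the layer's value; otherwise the pre-position
    value overrides the layer's value. Both tensors share the same size."""
    keep = (pre_positions == ignore_value)
    return torch.where(keep, layer, pre_positions)
```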

[0035] Figure 12 shows, by way of example, an extension of figure 9 by factoring in landmarks’ pre-positioning as input data in a recursive variant, at least partly using the landmarks’ position values 1240 as pre-positioned landmarks’ values 1260, whereby there are multiple possibilities as to which landmark positions 1240 shall be used as the recursive landmarks’ pre-positioning input 1260, whereby one example would be to use all landmarks’ positions 1240 without the merge 1236 within the sub neural network for position values 1230, while another example would be to choose only some landmarks’ position values 1240 ruled by the landmarks’ presence values 1230 and the landmarks’ precision values 1250, whereby an example rule might be to choose the three landmark position values 1240 with the highest landmarks’ precision values 1250 and where the landmarks’ presence value 1230 must be true (a sketch of such a rule follows below).
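
A minimal Python sketch of that example selection rule (NumPy assumed; the presence threshold and the ignore marker are assumptions):

```python
import numpy as np

def select_prepositions(positions: np.ndarray,   # (n_landmarks, dims)
                        presence: np.ndarray,    # (n_landmarks,)
                        precision: np.ndarray,   # (n_landmarks,)
                        k: int = 3,
                        presence_threshold: float = 0.5,
                        ignore_value: float = -1.0) -> np.ndarray:
    """Example rule from Figure 12: feed back as pre-positions the k landmark
    positions with the highest precision values whose presence value is true;
    all other landmarks are marked as 'ignore pre-positioned'."""
    candidates = np.where(presence >= presence_threshold)[0]
    # rank the present landmarks by precision and take the top k
    chosen = candidates[np.argsort(precision[candidates])[::-1][:k]]
    pre = np.full_like(positions, ignore_value)
    pre[chosen] = positions[chosen]
    return pre
```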

[0036] Figure 13 shows, by way of example, a position of a landmark being outside of the input image 1320, while being inside of the ground truth landmark map 1330. In the descriptions before and after (unless specifically indicated) the positions of landmarks were defined by coordinate values, whereby in this figure we use maps instead to provide a visualized representation; therefore the sub neural networks 520 could be implemented using upsampling neural network layers. An implementation example of a coordinate-based ground truth would require an update on the sub neural network for position 524, whereby the output layer has to be changed from a sigmoid activation to a linear activation (with an optional minimum and maximum output). In cases where the presence value indicates that the landmark is located outside of the image, as can be seen on landmark map 1330, merging 636 can be ignored, in order to avoid detrimental effects on the learning success. In cases where the presence value indicates that the landmark is concealed or non-existent, merging 636 can optionally be used as described above.

[0037] Figure 14 shows, by way of example, a loss function 1420 processing landmarks’ data 1430 and landmarks’ ground truth data 1410, whereby individual missing (set to undefined) landmark elements within the landmark ground truth 1410 are ignored 1421, whereby the loss calculation 1422 can be arbitrary (a sketch of such a masked loss follows after the next paragraph). Generally, when training neural networks, the weights in the neural network are adjusted based on the computations of a loss function, whereby individual elements of the loss function’s generated output may be zero and will not trigger updates on the neural network’s weights. The loss vector 1440 may contain zero value elements, either as a result of the loss calculation 1422 or because of missing elements within the ground truth 1410.

[0038] Figure 15 shows, by way of example, visualisations of different variants of ground truth landmarks reflected as ground truth points 1510, 1511, 1512, ground truth lines 1520, and/or ground truth regions 1530, 1531.
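
A sketch of the masked loss of Figure 14 in Python (NumPy assumed; NaN as the undefined marker and the L1 distance as the inner loss calculation 1422 are assumptions):

```python
import numpy as np

def masked_l1_loss(predicted: np.ndarray, ground_truth: np.ndarray) -> np.ndarray:
    """Sketch of Figure 14: individually missing ground truth elements
    (1410, set to undefined, here NaN) are ignored (1421) by forcing their
    loss contribution to zero, so they never trigger weight updates."""
    missing = np.isnan(ground_truth)                 # undefined elements
    gt = np.where(missing, predicted, ground_truth)  # neutralize missing entries
    return np.abs(predicted - gt)                    # loss vector 1440; zero where ignored
```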

[0039] In the descriptions before and after (unless specifically indicated) the positions of landmarks were defined by coordinate values, whereby in this figure we use maps instead to provide a visualized representation; therefore the sub neural networks 520 could be implemented using upsampling neural network layers. The purpose of the ground truth examples 1510, 1511, 1512 is to position a landmark, in which a neural network is trained to minimize the distance to a specific point coordinate. The purpose of the ground truth example 1520 is to position a landmark, in which a neural network is trained to minimize the distance to a specific line (e.g. the line between two other landmarks). An example of a loss function able to process coordinate-based ground truth lines might be implemented as the perpendicular distance of the calculated position $cp$ to the line through the two ground truth points $gt_1$ and $gt_2$: $loss = \dfrac{\left|(gt_2 - gt_1) \times (cp - gt_1)\right|}{\left\|gt_2 - gt_1\right\|}$
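
The same line loss as a Python sketch (NumPy assumed; 2D coordinates; the function name is illustrative):

```python
import numpy as np

def line_distance_loss(cp, gt1, gt2) -> float:
    """Perpendicular distance of a calculated 2D position cp to the ground
    truth line through gt1 and gt2 (e.g. the line between two landmarks).
    One possible coordinate-based line loss; the patent leaves it open."""
    d = np.asarray(gt2, float) - np.asarray(gt1, float)  # line direction
    v = np.asarray(cp, float) - np.asarray(gt1, float)   # point relative to line
    cross = d[0] * v[1] - d[1] * v[0]                    # 2D cross product (z-component)
    return abs(cross) / np.linalg.norm(d)
```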

[0040] The purpose of the ground truth examples 1530, 1531 is to position a landmark, in which a neural network is trained to minimize the distance to a specific region, which could for example be of polygon or polyhedron nature.

[0041] It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system.

[0042] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may be. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.

[0043] Furthermore, while some embodiments described herein include some, but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.