WO2020161481 - METHOD AND APPARATUS FOR QUALITY PREDICTION

Claims

1. A method of selecting an operation for analysis of data, the method comprising: processing the data using at least two operations, wherein each of the at least two operations is different, to obtain a set of outputs including the output associated with each operation;

determining an output from the set of outputs with the highest predicted accuracy; and

selecting the operation associated with the determined output for further analysis of data;

wherein the determination of the output with the highest predicted accuracy comprises:

selecting an output from the set of outputs;

calculating a degree of similarity between the selected output and another output of the set of outputs;

using the degree of similarity to predict the accuracy of the selected output based on a relationship between the similarity of the outputs and the accuracy of the outputs, the relationship being derived from an analysis of the degrees of similarity between the outputs of the operations on training data including ground truth and the accuracy of each output compared to the ground truth;

selecting a further output from the set of outputs;

calculating a further degree of similarity between the selected further output and another output of the set of outputs;

using the further degree of similarity to predict the accuracy of the further selected output based on the relationship; and

determining the output with the highest predicted accuracy of the selected outputs.
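
By way of illustration, the following is a minimal sketch of the selection defined in claim 1. It assumes each operation is a callable mapping the data to an output, that `similarity` returns a degree of similarity between two outputs, and that `accuracy_model` is a previously derived similarity-to-accuracy relationship; all names are hypothetical and not taken from the application.

```python
import numpy as np

def select_operation(data, operations, similarity, accuracy_model):
    # Run every operation on the same data to obtain the set of outputs.
    outputs = [op(data) for op in operations]

    predicted = []
    for i, out in enumerate(outputs):
        # Degree of similarity between the selected output and each other output.
        sims = [similarity(out, other)
                for j, other in enumerate(outputs) if j != i]
        # Predict the accuracy of this output from the learned relationship.
        predicted.append(accuracy_model(float(np.mean(sims))))

    # Operation associated with the output of highest predicted accuracy.
    best = int(np.argmax(predicted))
    return operations[best], predicted[best]
```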

2. The method of claim 1, further comprising obtaining a combined output by combining at least two of the outputs associated with each operation;

wherein the combined output is additionally included in the set of outputs.

3. The method of claim 1 or 2, wherein the relationship is a regression or classification model obtained by comparing the degree of similarity of the selected output and the other output of the set of outputs obtained for the set of training data to the degree of similarity of the selected output and ground truth of the training data.
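
A hedged sketch of how the relationship of claims 1 and 3 might be derived from training data with ground truth, here using a simple linear regression over the mean inter-output similarity; the operation, similarity and accuracy callables are assumptions, and any regression or classification model could stand in. The returned callable could serve as the `accuracy_model` in the earlier sketch.

```python
import numpy as np

def fit_relationship(training_data, ground_truths, operations,
                     similarity, accuracy):
    xs, ys = [], []
    for data, truth in zip(training_data, ground_truths):
        outputs = [op(data) for op in operations]
        for i, out in enumerate(outputs):
            # Predictor: similarity of this output to the other outputs.
            sims = [similarity(out, other)
                    for j, other in enumerate(outputs) if j != i]
            xs.append(float(np.mean(sims)))
            # Target: accuracy of this output against the ground truth.
            ys.append(accuracy(out, truth))
    # Fit a straight line mapping similarity to accuracy.
    slope, intercept = np.polyfit(xs, ys, 1)
    return lambda s: slope * s + intercept
```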

4. The method of any preceding claim, wherein the degree of similarity is obtained using a Dice similarity coefficient.
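
A short sketch of the Dice similarity coefficient referenced in claim 4, computed for two binary segmentation masks; the function and argument names are illustrative only.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Two empty masks are treated as identical.
    return 1.0 if total == 0 else 2.0 * intersection / total
```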

5. The method of any preceding claim, wherein the difference between each of the at least two operations is predetermined.

6. The method of any preceding claim, wherein at least one of the at least two operations has been optimised for analysis of an image.

7. The method of any preceding claim, wherein at least one of the at least two operations is performed by a fully convolutional neural network.

8. The method of any preceding claim, wherein at least two of the at least two operations are performed by neural networks and the neural networks differ by at least one of: their number of layers, their number of parameters, their parameters (including weights), their hyper-parameters, or the data used to train them.

9. The method according to any preceding claim, further comprising analysing further data using the selected operation.

10. The method of claim 9 when dependent on claim 2 wherein, when the output with the highest predicted accuracy is a combined output, the selected operation is a combination of the set of operations associated with the combined output.

11. The method of any one of claims 1 to 8, further comprising analysing further data, wherein the method of claim 1 is performed on the further data to determine a further selected operation and the further data is analysed using the further selected operation.

12. The method of claim 11 when dependent on claim 2 wherein, when the output with the highest predicted accuracy is a combined output, the further selected operation is a combination of the set of operations associated with the combined output.

13. The method of any preceding claim, wherein the data is in the form of an image.

14. The method of claim 13, wherein the output of each operation is a segmentation of the image.

15. The method of claim 13 or 14 when dependent on claim 2 wherein, for the combined output, segmentation of each pixel of the image is determined depending on the agreement of, for the respective pixel, a threshold number of the outputs comprising the combined output.
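
A sketch of the combined output of claims 2 and 15 for binary segmentation masks: a pixel is assigned to the combined segmentation when at least a threshold number of the individual outputs agree on it. The names and the binary-mask assumption are illustrative.

```python
import numpy as np

def combine_segmentations(masks, threshold):
    # Count, per pixel, how many outputs include that pixel in the segmentation.
    votes = np.sum([np.asarray(m, dtype=int) for m in masks], axis=0)
    # Keep pixels on which at least `threshold` outputs agree.
    return votes >= threshold
```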

16. The method of any one of claims 13 to 15, wherein the analysis determines the shape of a section of a human body recorded in the image.

17. The method of claim 16, wherein the section of the human body is the left ventricular myocardium.

18. The method according to any preceding claim, further comprising alerting an operator if the highest predicted accuracy drops below a predetermined threshold.

19. An analysis device, comprising:

an analyser configured to perform the method of any one of claims 1 to 18.

20. A computer program comprising code means that, when executed by a computer system, instruct the computer system to perform the method of any one of claims 1 to 18.

21. A computer system for selecting an operation for analysis of data, the system comprising at least one processor and memory, the memory storing code that performs the method of any one of claims 1 to 18.