(WO2018042211) MULTI-MODAL MEDICAL IMAGE PROCESSING

CLAIMS:

1. A method for automatically identifying regions of interest in medical or clinical image data, the method comprising the steps of:

receiving unlabelled input data, the input data comprising data from one of a plurality of modalities of data;

encoding the unlabelled input data using a trained encoder;

determining a joint representation using a trained joint representation module; and

generating labelled data for the input data by using the joint representation as an input for a trained classifier.
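
For illustration only (not part of the claims): a minimal sketch of the claim 1 pipeline, assuming PyTorch-style modules. The class and function names, tensor shapes and the fusion-by-concatenation step are illustrative assumptions, not taken from the specification.

```python
import torch
import torch.nn as nn

class JointRepresentationModule(nn.Module):
    """Fuses per-modality encodings into a single joint representation
    (here by concatenation followed by a linear projection - an assumption)."""
    def __init__(self, encoded_dim: int, n_modalities: int, joint_dim: int = 256):
        super().__init__()
        self.project = nn.Linear(encoded_dim * n_modalities, joint_dim)

    def forward(self, encodings):
        return self.project(torch.cat(encodings, dim=-1))

def identify_regions_of_interest(inputs, encoders, joint_module, classifier):
    """Claim 1 steps: encode the unlabelled input, determine the joint
    representation, and use it as input to the trained classifier."""
    with torch.no_grad():
        encodings = [encoders[m](x) for m, x in inputs.items()]  # trained encoder(s)
        joint = joint_module(encodings)                          # trained joint representation module
        return classifier(joint)                                 # trained classifier -> labelled data
```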

2. The method of claim 1, wherein the encoder, the joint representation module and the classifier are trained with input training data comprising a plurality of modalities.

3. The method of any preceding claim wherein one or more modalities of input data are provided.

4. The method of claim 3 wherein the input data comprises one or more of: a mammography; an X-ray; a computerised tomography (CT) scan; magnetic resonance imaging (MRI) data; histology data; mammography data; genetic sequence data; and/or ultrasound data.

5. The method of any preceding claim wherein the joint representation module is trained using one or more outputs received from the one or more trained encoders.

6. The method of any preceding claim wherein the joint representation module receives the encoded data as three-dimensional tensors of floating point numbers.

7. The method of any preceding claim wherein the joint representation is in the form of a vector.
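
For illustration only (not part of the claims): one plausible way, assumed here rather than recited in claims 6 and 7, to turn per-modality three-dimensional floating point tensors into a single joint vector is to flatten and concatenate them.

```python
import torch

def tensors_to_joint_vector(encoded):
    """Flatten each three-dimensional encoder output (e.g. channels x height x width)
    and concatenate the results into a single one-dimensional joint vector."""
    return torch.cat([t.reshape(-1) for t in encoded], dim=0)

# Example with two hypothetical modalities encoded to 3-D float tensors:
ct_encoding = torch.rand(64, 8, 8)     # e.g. a CT encoding (illustrative shape)
mri_encoding = torch.rand(32, 16, 16)  # e.g. an MRI encoding (illustrative shape)
joint = tensors_to_joint_vector([ct_encoding, mri_encoding])
print(joint.shape)  # torch.Size([12288])
```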

8. The method of any preceding claim wherein generating labelled data further comprises generating an indication of one or more regions of interest in the unlabelled input data.

9. The method of any preceding claim wherein the number of modalities of unlabelled input data is one fewer than the number of modalities of input training data used to train the encoder, the joint representation module and the classifier.
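
For illustration only (not part of the claims): the claims do not specify how the absent modality of claim 9 is handled; one common assumption, sketched below, is to substitute a fixed placeholder (here a zero tensor) for the missing encoding before forming the joint representation.

```python
import torch

def encode_available_modalities(inputs, encoders, encoded_shape=(64, 8, 8)):
    """Encode the modalities that are present; substitute a zero tensor for the one
    modality seen at training time but absent at inference (claim 9's 'one fewer' case).
    The zero-placeholder strategy and the encoded shape are assumptions."""
    encodings = []
    for modality, encoder in encoders.items():
        if modality in inputs:
            encodings.append(encoder(inputs[modality]))
        else:
            encodings.append(torch.zeros(encoded_shape))  # placeholder for the missing modality
    return encodings
```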

10. A method of training a classifier for medical or clinical data, comprising the steps of:

receiving unlabelled input data from a pre-labelled data set, the input data comprising data from a plurality of modalities;

encoding the unlabelled input data from a plurality of modalities to form a joint representation;

performing classification using an adaptable classification algorithm on the joint representation to generate labelled data from the joint representation;

comparing pre-labelled data from the pre-labelled data set to the labelled data and outputting comparison data;

adjusting the adaptable classification algorithm in response to the comparison data; and

repeating the steps of the method until the comparison data has reached a predetermined threshold indicating that no further adjustments need to be made to the adaptable classification algorithm.
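
For illustration only (not part of the claims): a minimal sketch of the claim 10 training loop, assuming a PyTorch classifier, a cross-entropy loss as the "comparison data", and an Adam optimiser; the threshold value and optimiser choice are assumptions.

```python
import torch
import torch.nn as nn

def train_classifier(joint_vectors, true_labels, classifier,
                     threshold=0.05, max_steps=10_000):
    """Repeat: classify the joint representation, compare with the pre-labelled data,
    adjust the adaptable classification algorithm, until the comparison data reaches
    the predetermined threshold."""
    optimiser = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_steps):
        predictions = classifier(joint_vectors)          # generate labelled data
        comparison = loss_fn(predictions, true_labels)   # comparison data
        if comparison.item() < threshold:                # threshold reached: no further adjustment
            break
        optimiser.zero_grad()
        comparison.backward()                            # adjust the adaptable algorithm
        optimiser.step()
    return classifier
```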

11. A method of training a classifier for medical or clinical data according to claim 10, wherein the step of encoding the unlabelled input data from a plurality of modalities to form a joint representation is performed by a plurality of connected and/or paired encoders.

12. A method of training a classifier for medical or clinical data according to claims 10 or 11, wherein the input data comprises data from a plurality of sources.

13. A method of training a classifier for medical or clinical data according to any preceding claim, wherein two modalities of input data are received.

14. A method of training a classifier for medical or clinical data according to any preceding claim, wherein the unlabelled input data is in the form of one or more medical images.

15. A method of training a classifier for medical or clinical data according to any preceding claim, wherein the unlabelled input data is in the form of a plurality of medical images.

16. A method of training a classifier for medical or clinical data according to claim 15, wherein the plurality of medical images is related.

17. A method of training a classifier for medical or clinical data according to any preceding claim, wherein the input data comprises one or more of: a mammography; an X-ray; a computerised tomography (CT) scan; a magnetic resonance imaging (MRI) scan; and/or an ultrasound scan.

18. A method of training a classifier for medical or clinical data according to any of claims 14 to 17, wherein the medical image is in the form of a DICOM file.
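
For illustration only (not part of the claims): a medical image in DICOM form (claim 18) can be read into a floating point array with the third-party pydicom library before encoding; the normalisation step shown is an assumption.

```python
import numpy as np
import pydicom  # third-party DICOM reader

def load_dicom_image(path):
    """Read a DICOM file and return its pixel data as a float32 array scaled to [0, 1]."""
    dataset = pydicom.dcmread(path)
    pixels = dataset.pixel_array.astype(np.float32)
    return pixels / max(float(pixels.max()), 1.0)  # simple normalisation (assumption)
```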

19. A method of training a classifier for medical or clinical data according to any preceding claim, wherein the step of encoding the unlabelled input data from a plurality of modalities to form a joint representation is performed separately for each modality.

20. A method of training a classifier for medical or clinical data according to any preceding claim, wherein the adaptable classification algorithm comprises a machine learning algorithm.

21. A method of training a classifier for medical or clinical data according to any preceding claim, wherein the adaptable classification algorithm comprises a Support Vector Machine (SVM), a Multilayer Perceptron, and/or a random forest.
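
For illustration only (not part of the claims): the classifier families named in claim 21 could be applied to joint representation vectors using scikit-learn; the hyperparameters shown are assumptions.

```python
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

def make_classifier(kind):
    """Return one of the classifier families named in claim 21 (hyperparameters are assumptions)."""
    if kind == "svm":
        return SVC(kernel="rbf", probability=True)
    if kind == "mlp":
        return MLPClassifier(hidden_layer_sizes=(256, 128))
    if kind == "random_forest":
        return RandomForestClassifier(n_estimators=200)
    raise ValueError(f"unknown classifier kind: {kind}")

# Usage: fit on joint representation vectors X (n_samples x joint_dim) and labels y, e.g.
# classifier = make_classifier("svm").fit(X, y)
```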

22. A method of classifying data for medical or clinical purposes, comprising the steps of:

receiving unlabelled input data, the input data comprising data from one of a plurality of modalities of data;

encoding the unlabelled input data into a trained joint representation using a trained encoder; and

performing classification using a learned classification algorithm on the trained joint representation to generate labelled data from the trained joint representation.

23. A method of classifying data for medical or clinical purposes according to claim 22, wherein one modality of input data is provided.

24. A method of classifying data for medical or clinical purposes according to claims 22 or 23, wherein the step of performing classification further comprises generating an indication of one or more regions of interest in the unlabelled data.

25. A method of classifying data for medical or clinical purposes according to any of claims 22 to 24, wherein the one or more regions of interest are indicative of a cancerous growth.

26. A method of classifying data for medical or clinical purposes according to any preceding claim, wherein the number of modalities of unlabelled input data is one fewer than the number of modalities of input data used to train the learned classification algorithm.

27. A method of training a classifier and/or classifying data for medical or clinical purposes according to any preceding claim, wherein any encoders or decoders used are CNNs, including any of: VGG and/or AlexNet; and/or RNNs, optionally including a bidirectional LSTM with 512 hidden units.
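
For illustration only (not part of the claims): a sketch of the encoder choices named in claim 27, assuming torchvision for the CNNs; the input size of the LSTM (e.g. one-hot nucleotides for genetic sequence data) is an assumption.

```python
import torch.nn as nn
import torchvision

# CNN encoder: the convolutional feature extractor of a VGG-16 network
# (torchvision's AlexNet exposes an analogous `.features` module).
cnn_encoder = torchvision.models.vgg16().features

# RNN encoder: bidirectional LSTM with 512 hidden units, e.g. for sequence-like
# modalities; input_size=4 is an illustrative assumption.
rnn_encoder = nn.LSTM(input_size=4, hidden_size=512, bidirectional=True, batch_first=True)
```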

28. An apparatus for training a classifier and/or classifying data for medical or clinical purposes respectively using the respective method of any preceding claim.