1. (WO2018106950) SYSTEMS AND METHODS FOR NAVIGATION IN IMAGE-GUIDED MEDICAL PROCEDURES
Note: This text was produced by OCR processing. For legal purposes, please refer to the PDF version.

CLAIMS

What we claim is:

1. A method comprising:

receiving, by a medical imaging system having at least one processing device, three-dimensional image data of a patient anatomy;

filtering the three-dimensional image data to display a portion of the three-dimensional image data that is associated with the patient anatomy;

receiving, at the processing device, input from an operator input device, the input comprising navigational directions for virtual movement within a space defined by the three-dimensional image data;

tracking the virtual movement; and

generating a first model of the patient anatomy based on the tracked virtual movement.
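Claim 1 ends with generating a model from the tracked virtual movement. As a minimal sketch of that idea (all function and variable names here are illustrative assumptions, not from the patent), the navigational direction inputs can be integrated into an ordered polyline of visited points, i.e. a simple line model:

```python
# Hypothetical sketch of claim 1: accumulating navigational inputs into a
# polyline ("line model") of the virtually traversed passageway.
# Names and the fixed step size are illustrative assumptions.

def track_virtual_movement(start, directions, step=1.0):
    """Integrate unit direction inputs into an ordered list of visited points."""
    path = [tuple(start)]
    x, y, z = start
    for dx, dy, dz in directions:
        x, y, z = x + dx * step, y + dy * step, z + dz * step
        path.append((x, y, z))
    return path  # ordered vertices of the line model

# Example: three forward steps along the z axis
line_model = track_virtual_movement((0.0, 0.0, 0.0),
                                    [(0, 0, 1), (0, 0, 1), (0, 0, 1)])
```

A surface model (claims 3-4) could then be generated around such a polyline, e.g. by sweeping a tube along it, though the claims do not prescribe a method.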

2. The method of claim 1, wherein the first model of the patient anatomy is a line model comprising one or more lines based on the tracked virtual movement.

3. The method of claim 2, wherein the first model includes a surface model generated from the line model.

4. The method of claim 1, wherein the first model of the patient anatomy is a surface model.

5. The method of claim 1, wherein the three-dimensional image data is CT image data, and filtering the CT image data includes filtering according to Hounsfield values to identify an anatomical passageway in the CT image data.

6. The method of claim 5, wherein filtering the CT image data comprises:

applying a first filter to the CT image data to generate a first set of filtered CT image data, the first filter having a first threshold; and

applying a second filter to the CT image data to generate a second set of filtered CT image data, the second filter having a second threshold.

7. The method of claim 6, wherein the second threshold is an adaptive airway threshold.
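Claims 5-7 describe filtering CT data by Hounsfield values with two thresholds. The sketch below is illustrative only: the function names and the numeric cutoffs (air is roughly -1000 HU) are assumptions, and the patent's "adaptive airway threshold" is represented here by a simple fixed value for demonstration:

```python
# Illustrative sketch of Hounsfield-value filtering (claims 5-7).
# Thresholds here are assumed values, not taken from the patent.

def hu_filter(voxels, threshold, below=True):
    """Return a boolean mask selecting voxels below (or at/above) a HU threshold."""
    return [(v < threshold) if below else (v >= threshold) for v in voxels]

# Toy 1-D "volume": air, lung tissue, soft tissue, bone (in HU)
volume = [-1000, -700, 40, 700]
air_mask = hu_filter(volume, -900)     # first filter: fixed threshold
airway_mask = hu_filter(volume, -500)  # second filter: stand-in for an adaptive airway cutoff
```

Each mask corresponds to one "set of filtered CT image data" whose rendering can be toggled on or off, as in claim 8.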

8. The method of claim 6, further comprising:

receiving a display filter selection from the operator input device, the display filter selection associated with the first set of filtered CT image data; and

turning on or turning off rendering of the first set of filtered CT image data.

9. The method of claim 8, wherein the first set of filtered CT image data is associated with air contained within at least one bronchial passageway, blood vessels disposed around the at least one bronchial passageway in the patient anatomy, or walls defining the at least one bronchial passageway.

10. The method of claim 1, further comprising receiving input from the operator input device to filter the three-dimensional image data based on a radiodensity value associated with each voxel in the three-dimensional image data.

11. The method of any of claims 1 or 2-10, further comprising identifying a target in the three-dimensional image data.

12. The method of claim 11, further comprising:

determining a vector extending between a viewpoint of the three-dimensional image data in space and the target; and

rendering a user interface element representing the vector on a display displaying a rendering of the three-dimensional image data to provide navigational guidance.
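Claim 12's guidance vector is a direction from the current viewpoint toward the identified target. A minimal sketch, with illustrative names and assuming 3-D points as coordinate tuples:

```python
# Hypothetical sketch of claim 12: a unit vector from the viewpoint to the
# target, which a UI element (e.g. an arrow) could then represent on screen.
import math

def guidance_vector(viewpoint, target):
    """Unit vector pointing from the viewpoint toward the target."""
    d = [t - v for v, t in zip(viewpoint, target)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

vec = guidance_vector((0.0, 0.0, 0.0), (3.0, 0.0, 4.0))
```

How the vector is rendered (arrow, crosshair, compass) is left open by the claim; only the viewpoint-to-target direction is essential.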

13. The method of claim 11, further comprising:

determining a first subset of modeled passageways that permit access to the target; and

rendering a first user interface element indicating that a first modeled passageway is excluded from the first subset of modeled passageways.

14. The method of claim 11, further comprising:

determining a first subset of modeled passageways that permit access to the target; and

rendering a first user interface element indicating that a first modeled passageway is included within the first subset of modeled passageways.

15. The method of claim 14, further comprising:

blocking virtual navigation beyond a location in the three-dimensional image data associated with the rendered first user interface element.

16. The method of any of claims 1 or 2-14, further comprising:

determining a second subset of modeled passageways that lead away from the target; and

rendering a second user interface element indicating that a second modeled passageway is included within the second subset of modeled passageways.

17. The method of any of claims 1 or 2-10, further comprising:

segmenting the three-dimensional image data to generate a second model of a first portion of an anatomical passageway,

wherein generating the first model is further based on the tracked virtual movement within a second portion of the anatomical passageway; and

combining the first model with the second model to generate a combined model that includes the first and second portions of the anatomical passageway.

18. The method of claim 17, wherein the first model and the second model comprise a surface model, or the first model and the second model comprise a line model.

19. The method of claim 17, further comprising selectively rendering in a display: the first model; a filtered portion of the three-dimensional image data; or both the first model and the filtered portion of the three-dimensional image data.

20. A method comprising:

receiving, by a medical imaging system having at least one processing device, three-dimensional image data of a patient anatomy;

segmenting the three-dimensional image data to generate an anatomical model from the three-dimensional image data;

receiving, at the processing device, input from an operator input device, the input defining a pathway model within an image space defined by the three-dimensional image data and associated with an anatomical passageway of the patient anatomy; and

generating a hybrid model of the patient anatomy from the pathway model and from the anatomical model.

21. The method of claim 20, wherein the anatomical model is a line model.

22. The method of claim 20, wherein the anatomical model is a surface model.

23. The method of claim 20, further comprising determining a termination point of the anatomical model, wherein the pathway model is defined beginning from the termination point.
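Claims 20-23 combine a segmented anatomical model with an operator-defined pathway model into a hybrid model, with the pathway beginning at the segmentation's termination point. A hedged sketch, treating both models as centerlines (all names and the offset-based input format are illustrative assumptions):

```python
# Hypothetical sketch of claims 20-23: extend a segmented centerline with an
# operator-defined pathway starting at the segmentation's termination point.

def hybrid_line_model(anatomical, pathway_offsets):
    """Append operator-defined pathway steps after the anatomical model's end."""
    termination = anatomical[-1]          # claim 23: pathway begins here
    x, y, z = termination
    extension = []
    for dx, dy, dz in pathway_offsets:
        x, y, z = x + dx, y + dy, z + dz
        extension.append((x, y, z))
    return list(anatomical) + extension   # the hybrid model of claim 20

model = hybrid_line_model([(0, 0, 0), (0, 0, 5)], [(0, 1, 1), (0, 1, 1)])
```

This reflects the common situation where automatic segmentation resolves proximal airways but distal branches must be traced manually.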

24. A method of facilitating an image-guided medical procedure, the method comprising:

receiving, by a teleoperational medical system having at least one processing device, three-dimensional image data of at least a portion of patient anatomy;

registering a medical instrument coupled to the teleoperational medical system with the three-dimensional image data by registering the medical instrument to a surgical environment and registering the three-dimensional image data with the surgical environment;

applying a radiodensity filter to the three-dimensional image data to alter a rendering of one or more voxels of the three-dimensional image data; and

rendering the three-dimensional image data in a display from a perspective associated with the medical instrument.

25. The method of claim 24, wherein applying the radiodensity filter to the three-dimensional image data causes voxels having a first radiodensity value to be rendered transparently.

26. The method of claim 24, further comprising assigning voxels in the three-dimensional image data to one of a plurality of tissue types.

27. The method of claim 26, further comprising:

receiving a selection of a tissue type of the plurality of tissue types via a tissue type selection element included in a user interface;

receiving a transparent setting associated with the tissue type selection element; and

applying the transparent setting to the voxels assigned to the selected tissue type.
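Claims 26-27 bin voxels into tissue types and apply a per-type transparency setting. A minimal sketch under stated assumptions: the Hounsfield ranges, tissue names, and the alpha convention (0.0 = fully transparent) are illustrative, not taken from the patent:

```python
# Hypothetical sketch of claims 26-27: classify voxels by radiodensity,
# then make a selected tissue type transparent. HU ranges are assumed.

TISSUE_RANGES = {            # (low, high) in Hounsfield units, illustrative
    "air": (-1100, -900),
    "lung": (-900, -500),
    "soft_tissue": (-100, 300),
    "bone": (300, 2000),
}

def classify(hu):
    """Assign a voxel to a tissue type by its Hounsfield value (claim 26)."""
    for tissue, (lo, hi) in TISSUE_RANGES.items():
        if lo <= hu < hi:
            return tissue
    return "other"

def apply_transparency(voxels, transparent_types):
    """Per-voxel alpha: 0.0 for selected tissue types, 1.0 otherwise (claim 27)."""
    return [0.0 if classify(v) in transparent_types else 1.0 for v in voxels]

# Example: render air transparently so the operator sees through the airway lumen
alphas = apply_transparency([-1000, -700, 40, 700], {"air"})
```

A renderer would then multiply each voxel's color by its alpha, so transparent tissue types disappear from the view associated with the instrument's perspective.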

28. A system for processing medical images, the system comprising:

a memory storing a set of three-dimensional image data of at least a portion of patient anatomy;

a processing device in communication with the memory, the processing device configured to execute instructions to perform operations comprising:

receiving three-dimensional image data of a patient anatomy;

filtering the three-dimensional image data;

generating a display of a portion of the three-dimensional image data associated with the patient anatomy;

receiving input from an operator input device, the input comprising navigational directions for virtual movement within an image space defined by the portion of the three-dimensional image data;

tracking the virtual movement; and

generating a model of the patient anatomy based on the tracked virtual movement.

29. The system of claim 28, wherein receiving the input from the operator input device comprises:

receiving a first input associated with virtual movement in a first perspective of the three-dimensional image data;

receiving a second input associated with virtual movement in a second perspective of the three-dimensional image data; and

combining the first and second inputs associated with the first and second perspectives to generate the model.
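Claim 29 combines inputs captured in two different perspectives into one model. The sketch below assumes, purely for illustration, that an axial view supplies (x, y) samples and a sagittal view supplies (y, z) samples for the same sequence of points; the actual pairing scheme is not specified by the claim:

```python
# Hypothetical sketch of claim 29: merge per-sample 2-D inputs from two
# views into 3-D model points. The view/axis assignment is an assumption.

def combine_perspectives(axial_xy, sagittal_yz):
    """Fuse (x, y) from an axial view with (y, z) from a sagittal view."""
    return [(x, y, z) for (x, y), (_, z) in zip(axial_xy, sagittal_yz)]

points = combine_perspectives([(1, 2), (3, 4)], [(2, 9), (4, 8)])
```

A practical implementation would also reconcile the shared coordinate (y here) between the two views, e.g. by averaging, which this sketch omits.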

30. The system of claim 28 or 29, wherein the operator input device is a three-dimensional input device configured to translate operator motion in three dimensions into the model.

31. The system of claim 28 or 29, wherein the processing device is further configured to execute instructions to perform rendering of a graphical user interface in a display in communication with the processing device.

32. The system of claim 31, wherein rendering the graphical user interface comprises rendering the filtered three-dimensional image data from a perspective internal to the three-dimensional image data or from a perspective external to the three-dimensional image data.

33. The system of claim 31, wherein rendering the graphical user interface comprises rendering the filtered three-dimensional image data from a perspective external to the three-dimensional image data.

34. The system of claim 33, wherein receiving input from the operator input device comprises receiving one or more drawing inputs from the operator input device, the one or more drawing inputs representing one or more three-dimensional lines drawn on one or more views of the three-dimensional image data by an operator.

35. The system of claim 31, wherein the operations further comprise displaying a plurality of filter selections and receiving a selection of at least one of the plurality of filter selections, wherein rendering the graphical user interface comprises rendering a filtered portion of the three-dimensional image data according to the selected at least one of the plurality of filter selections.

36. The system of claim 31, wherein the operations further comprise receiving a selection of a user interface element from an operator, the selection indicating a virtual navigation input mode or a drawing input mode.

37. The system of claim 31, wherein the operations further comprise displaying the generated pathway and the three-dimensional image data in the display.

38. The system of claim 37, wherein the generated pathway and the three-dimensional image data are displayed simultaneously.