(WO2019048269) SYSTEM FOR VENIPUNCTURE AND ARTERIAL LINE GUIDANCE WITH AUGMENTED REALITY

SYSTEM FOR VENIPUNCTURE AND ARTERIAL LINE GUIDANCE WITH AUGMENTED REALITY

FIELD

The following relates generally to the venipuncture arts, arterial line placement arts, nursing and patient care arts, augmented reality arts, and related arts.

BACKGROUND

Venipuncture and arterial line placement provide access to a patient's venous and arterial blood systems, respectively. Venipuncture is used for tasks such as drawing blood for testing or blood donation, administering intravenous (IV) fluids, and the like. Venipuncture is a very common medical procedure: by some estimates around one billion venipuncture procedures are performed each year. Arterial lines are used for drawing arterial blood gas (ABG) samples, direct arterial blood pressure monitoring, and the like. Venipuncture and arterial line placement are commonly performed by nurses, doctors, and other medical professionals. Accurate initial placement of the hypodermic needle or IV needle in venipuncture greatly improves patient experience by minimizing skin penetrations that can lead to pain and potential pathways for infection, and avoids delays and improves clinical workflow. However, by some estimates accurate placement on the first attempt is achieved less than half the time. Arterial line placement is a more difficult procedure due to the deeper location of arteries compared with veins, leading to increased pain and potential for injury in the case of repeated arterial line placement attempts.

Prior work has shown significant needle tip movement during venipuncture blood sample collection. Needle motion due to switching of hands while removing a first blood tube from the system and replacing with the second blood tube is common in vacuum venipuncture. Some complications that can arise from excessive movement during venipuncture include nerve damage, hematomas, and neuropathic pain (see, e.g., Fujii C. Clarification of the characteristics of needle-tip movement during vacuum venipuncture to improve safety. Vascular Health and Risk Management. 2013;9:381-390. doi: 10.2147/VHRM.S47490).

Improving accuracy of venipuncture would have a large societal impact, fulfilling a widespread unmet need. In the US annually, there are approximately 1 billion venipunctures for blood draws and IV therapy. In hospital settings, virtually all patients receive IVs, which can require multiple attempts to successfully attach to the patient. This results in wasted time for clinical staff, delayed diagnostic evaluation or treatment, and, most notably, significantly reduced patient satisfaction scores. Moreover, patient satisfaction is particularly important for hospital reimbursement under the Affordable Care Act, under which Medicare payments will be withheld from hospitals with unacceptable patient satisfaction scores. This problem is even more acute when placing arterial lines: arteries can be harder to identify visually because they lie deeper, and errors can be dangerous, causing complications such as hematoma and nerve or vascular damage to the extremity.

Currently, a device called AccuVein® (available from AccuVein, Inc., Huntington, New York) has had success using infrared (IR) light to detect the vasculature via hemoglobin absorption. However, the device provides no guidance as to aspects such as target depth, target angle, and target speed, which are important for accurate needle placement. The device also does not provide real-time feedback to the clinician during the needle insertion phase of the venipuncture procedure.

The following discloses new and improved systems and methods that address the above-referenced issues, and others.

SUMMARY

In one disclosed aspect, a needle placement assistance device for assisting in venipuncture or arterial line placement includes a stereo camera configured to acquire stereo images of a target portion of a patient. A needle tracker is configured to track a current position of an associated needle. The device also includes at least one electronic processor, and a non-transitory storage medium storing data related to one or more of target needle depth, target needle angle, and target needle speed, and instructions readable and executable by the at least one electronic processor to perform a needle placement assistance method. The method includes: performing machine vision processing of the stereo images to generate a three-dimensional (3D) map of the target portion; detecting a blood vessel in the 3D map of the target portion; determining a target needle position relative to the blood vessel detected by the machine vision processing based on the data related to one or more of target depth, target angle, and target speed; and identifying corrective action to align a current position of the needle with the target needle position.

In another disclosed aspect, a needle placement assistance device for assisting in venipuncture or arterial line placement includes a stereo camera configured to acquire stereo images of a target portion of a patient. A needle tracker is configured to track a current position of an associated needle. A feedback mechanism is configured to present the corrective action to a user during insertion of the needle into the detected blood vessel. The feedback mechanism includes at least an augmented-reality heads-up display (AR-HUD) device including an AR-HUD display. The stereo camera is mounted to the AR-HUD device. The device includes at least one electronic processor, and a non-transitory storage medium storing data related to one or more of target needle depth, target needle angle, and target needle speed, and instructions readable and executable by the at least one electronic processor to perform a needle placement assistance method. The method includes: performing machine vision processing of the stereo images to generate a three-dimensional (3D) map of the target portion; detecting a blood vessel in the 3D map of the target portion; determining a target needle position relative to the blood vessel detected by the machine vision processing based on the data related to one or more of target depth, target angle, and target speed; and identifying corrective action to align a current position of the needle with the target needle position. The needle tracker comprises the at least one electronic processor and the non-transitory storage medium. The needle placement assistance method further includes performing needle tracking machine vision processing of the stereo images to determine the current position of the needle relative to the detected blood vessel.

In another disclosed aspect, a needle placement assistance device for assisting in venipuncture or arterial line placement includes a stereo camera configured to acquire stereo images of a target portion of a patient. A needle tracker is configured to track a current position of an associated needle. A feedback mechanism is configured to present the corrective action to a user during insertion of the needle into the detected blood vessel. The device also includes at least one electronic processor, and a non-transitory storage medium storing data related to one or more of target needle depth, target needle angle, and target needle speed, and instructions readable and executable by the at least one electronic processor to perform a needle placement assistance method. The method includes: performing machine vision processing of the stereo images to generate a three-dimensional (3D) map of the target portion; detecting a blood vessel in the 3D map of the target portion; determining a target needle position relative to the blood vessel detected by the machine vision processing based on the data related to one or more of target depth, target angle, and target speed; and identifying corrective action to align a current position of the needle with the target needle position. The feedback mechanism includes at least a speaker configured to provide audio instructions to a user presenting the corrective action.

One advantage resides in providing a device or method which improves the likelihood of successful first placement for venipuncture or arterial line placement procedures.

Another advantage resides in providing a device that provides real-time feedback for a medical professional during needle insertion into a blood vessel.

Another advantage resides in providing textual or graphic feedback to a medical professional to correct a position of a needle during needle insertion into a blood vessel.

Another advantage resides in providing an augmented reality device to correct a position of a needle during needle insertion into a blood vessel.

Another advantage resides in providing monitoring and real-time feedback as to compliance with a target depth during a venipuncture or arterial line placement procedure.

Another advantage resides in providing monitoring and real-time feedback as to compliance with a target angle during the needle insertion phase of a venipuncture or arterial line placement procedure.

Another advantage resides in providing monitoring and real-time feedback as to compliance with a target speed during the needle insertion phase of a venipuncture or arterial line placement procedure.

A given embodiment may provide none, one, two, more, or all of the foregoing advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the disclosure.

FIGURE 1 diagrammatically illustrates a needle placement assistance device in accordance with one aspect.

FIGURE 2 diagrammatically illustrates a flowchart of a method of use of the device of FIGURE 1.

FIGURES 3 and 4 diagrammatically illustrate image processing operations of the vasculature imaging process of FIGURE 2.

FIGURE 5 diagrammatically illustrates a feedback mechanism of the device of FIGURE 1.

DETAILED DESCRIPTION

The following discloses a guidance approach for assisting in venipuncture or arterial line placement. While some existing devices such as Accuvein® assist in identifying a vein or artery to target, the following describes systems that go beyond this to provide guidance as to the location, depth, angle and/or speed of insertion, including real-time feedback provided to the clinician during the needle insertion phase of the procedure.

To this end, a stereo camera is provided to generate stereo images of the arm or other anatomical target. A blood vessel detector is applied to detect a suitable vein or artery. The stereo images are also analyzed using machine vision techniques to construct a three-dimensional (3D) map of the arm or other target. A knowledge base is referenced to determine the depth, angle, and optionally speed of insertion. These may depend on factors such as the type of operation being performed (e.g. venipuncture versus arterial line) and possibly information about the blood vessel being targeted in accord with the output of the blood vessel detector.

Thereafter, the stereo camera produces stereo video of the insertion process, and measures relevant parameters such as the needle tip location and angle and rate of positional change (i.e. speed). Optionally, the syringe may include one or more tracking markers to assist in tracking the needle.

Many seemingly static videos contain subtle changes that are invisible to the human eye. However, it is possible to measure these small changes via the use of algorithms such as Eulerian video magnification (see, e.g., Wu et al., "Eulerian Video Magnification for Revealing Subtle Changes in the World", ACM Transactions on Graphics, vol. 31 no. 4, Proc. SIGGRAPH, 2012). Previously, it has been shown that the human pulse can be measured from pulse-induced color variations or small motions of pulsatile flow in conventional videos, a technique called remote photoplethysmography (rPPG) (see, e.g., Wang et al., "Exploiting Spatial Redundancy of Image Sensor for Motion Robust rPPG", IEEE Trans. on Biomed. Eng., 62:2, 2014).
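The core idea of such temporal magnification can be illustrated with a minimal sketch: boost the frequency components of a per-pixel intensity time series that fall in a chosen passband. This is an illustrative simplification only (a full Eulerian video magnification implementation also performs spatial pyramid decomposition); the function name, gain, and passband below are assumptions for the example.

```python
import numpy as np

def temporal_bandpass_amplify(signal, fps, low_hz, high_hz, alpha):
    """Amplify frequency components of a per-pixel time series within
    [low_hz, high_hz] by a gain alpha -- the temporal-filtering core of
    Eulerian-style magnification (spatial decomposition omitted)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[band] *= (1.0 + alpha)  # boost only the passband
    return np.fft.irfft(spectrum, n=len(signal))

# A subtle 1.2 Hz (72 beats/min) pulsation riding on a static intensity.
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
pixel = 100.0 + 0.2 * np.sin(2 * np.pi * 1.2 * t)
amplified = temporal_bandpass_amplify(pixel, fps, 0.7, 3.0, alpha=20.0)
```

After amplification the pulsatile component dominates the series, making the previously imperceptible variation measurable.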

The system further includes a feedback output. In some embodiments, this is an augmented reality (AR) head-up display (HUD) with one or more see-through AR display(s). The provided feedback guidance may take various forms. In a straightforward approach, guidance may be displayed as AR content such as text or standard icons to indicate guidance such as whether the needle tip is positioned at the target blood vessel, advice such as "needle too shallow" or "target depth reached", or so forth. In another embodiment, graphics may be superimposed as AR content, e.g. showing the correct angle of the needle as a translucent line that terminates at the point on the target blood vessel where the needle should be inserted. Here, the clinician merely needs to align the physical needle with this translucent line.

For the AR-HUD design, the stereo camera should be mounted on the AR-HUD to provide a "first person view" so as to align the AR content with the actual view seen through the transparent display(s) of the AR-HUD. While an AR-HUD is preferred, other feedback mechanisms are contemplated. For example, purely audio feedback could be provided. In another approach, the syringe itself could include feedback indicators: e.g. a green LED on the syringe could light up to indicate the tip is at the correct insertion point, a level display could indicate if the needle is too shallow or too steep, and haptic feedback of various types could also be provided. In these non-AR embodiments, the stereo camera could be mounted in a stationary location (e.g. at a blood lab station), and no HUD would be needed.

With reference to FIGURE 1 , an illustrative needle placement assistance device or system 10 is shown. The device 10 is configured to assist in placement of an associated needle 12 (e.g., the needle of an illustrative syringe 13, or of an arterial line, or of a vascular catheter, and the like) in a patient (and more particularly, into an artery or vein of the patient). To do so, the needle placement assistance device 10 includes a stereo camera 14 configured to acquire stereo images or videos of a target portion of a patient, which can be used in various subsequent operations (e.g., detecting a blood vessel in the patient, determining a needle position, and the like) and a needle tracker 16 configured to track a current position of an associated needle 12. The stereo camera 14 typically includes multiple lenses or lens assemblies with a separate sensor for each lens that forms an image on a digital detector array (e.g. a CCD imaging array, a CMOS imaging array, et cetera) to capture 3D images. The stereo camera 14 preferably has color video capability, e.g. by having an imaging array with pixels sensitive to red, green, and blue light (or another set of colors substantially spanning the visible spectrum, e.g. 400-700 nm). The stereo camera 14 optionally may include other typical features, such as a built-in flash (not shown) and/or an ambient light sensor (not shown) for setting exposure times.
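The depth information that a stereo camera such as camera 14 recovers follows from standard triangulation: a scene point imaged at a disparity of d pixels by two cameras with focal length f (in pixels) separated by baseline B lies at depth z = f·B/d. The sketch below illustrates this relation; the numeric values are hypothetical and not taken from the disclosure.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulate depth from stereo disparity: z = f * B / d.
    focal_px is the focal length in pixels; baseline_m is the
    lens separation in meters; returns depth in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 6 cm baseline, 120 px disparity.
depth_m = disparity_to_depth(120.0, 700.0, 0.06)
```

Larger disparities correspond to nearer points, which is why a short-baseline head-mounted rig still resolves depth well at arm's-length working distances.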

The system 10 also includes a computer, workstation, mobile device (e.g., a cellular telephone, a tablet computer, personal data assistant or PDA, or so forth) or other electronic data processing device 18 with typical components, such as at least one electronic processor 20 operably connected to the needle tracker 16, at least one user input device (e.g., a mouse, a keyboard, a trackball, and/or the like) 22, and a display device 24. In some embodiments, the display device 24 can be a separate component from the computer 18, and can be an LCD display, an OLED display, a touch-sensitive display, or the like. The workstation 18 can also include one or more databases 26 (stored in a non-transitory storage medium such as RAM or ROM, a magnetic disk, an electronic medical record (EMR) database, a picture archiving and communication system (PACS) database, and the like). For example, video of venipuncture or arterial line placement procedures could be stored for later use in clinical training or for assessment of clinician skills. The computer 18 is connected to the stereo camera 14 and the needle tracker 16 via a wired or wireless communication link or network (e.g., Wi-Fi, 4G, or another wireless communication link).

The at least one electronic processor 20 is operatively connected with a non-transitory storage medium (not shown) that stores instructions which are readable and executable by the at least one electronic processor 20 to perform disclosed operations including performing a needle placement assistance method or process 100. The non-transitory storage medium 26 may, for example, comprise a hard disk drive, RAID, or other magnetic storage medium; a solid state drive, flash drive, electronically erasable read-only memory (EEROM) or other electronic memory; an optical disk or other optical storage; various combinations thereof; or so forth. In some examples, the needle placement assistance method or process 100 may be performed by cloud processing. The non-transitory storage medium 26 stores data related to one or more of target needle depth, target needle angle, and target needle speed. For example, the non-transitory storage medium 26 suitably stores data related to proper IV placement and arterial line placement, including appropriate needle 12 angles and depths for various vessel sizes, depths from the surface, branching, and trajectory of the needle for various venipuncture and/or arterial line procedures, optionally further segmented into different target depth/angle/speed settings for different blood vessels that may be targeted for such procedures. In general, venipuncture target depths are shallower than arterial line target depths due to the typically shallower veins compared with arteries; moreover, different veins may be generally located at different depths, and likewise for different arteries. Target needle speed is also usually higher for arteries compared with veins, and target angle may be steeper for arteries in order to more effectively achieve the desired deeper penetration.
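The stored target data segmented by procedure and blood vessel, as described above, can be sketched as a simple lookup structure. The entries and numeric values below are hypothetical placeholders for illustration; the actual clinical values are not specified in this disclosure. The sketch does encode the stated relationships: arterial targets are deeper, steeper, and faster than venous ones.

```python
# Hypothetical knowledge-base entries keyed by (procedure, vessel).
# Values are illustrative only, not clinical guidance.
TARGETS = {
    ("venipuncture", "median_cubital_vein"):
        {"depth_mm": 3.0, "angle_deg": 20.0, "speed_mm_s": 5.0},
    ("arterial_line", "radial_artery"):
        {"depth_mm": 8.0, "angle_deg": 40.0, "speed_mm_s": 8.0},
}

def lookup_targets(procedure, vessel):
    """Return the target depth/angle/speed record for a
    procedure/vessel pair from the knowledge base."""
    return TARGETS[(procedure, vessel)]
```

In a deployed system the table would be populated per-vessel and per-procedure from clinical references, and could be further segmented by patient factors.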

In some embodiments, the needle tracker 16 can be any suitable sensor disposed to move with the needle 12, including one or more of: an accelerometer; a gyroscope; and an electromagnetic (EM) tracking system with an EM sensor. In other embodiments, the needle tracker 16 includes the at least one electronic processor 20 and the non-transitory storage medium 26. In further embodiments, the needle tracker 16 can also include one or more visual markers 30 that are disposed to move with the needle 12. The visual markers 30 can include one or more of Bokode visual tags, frequency modulated LEDS, and other suitable visual

markers. In the case of visual markers, the needle tracker 16 may be viewed as further including the stereo camera 14 which records video of the markers.

With reference to FIGURE 2, an illustrative embodiment of the needle placement assistance method 100 is diagrammatically shown as a flowchart. At 102, the workstation 18 receives a time sequence of stereo images (i.e. stereo video 103 of an illustrative wrist targeted for venipuncture, see FIGURE 1) from the stereo camera 14 and performs machine vision processing of the stereo images 103 to generate a three-dimensional (3D) map 32 of the target portion. To do so, the target portion of the patient (e.g., an arm) is detected and tracked. In a detection operation, a foreground and a background of an initial stereo image frame are separated using a fully convolutional neural network (FCN) that is trained for recognition and segmentation of the body part that is the subject of the venipuncture or arterial line placement procedure as a joint task. The body part is then detected using a Haar Cascade classifier. Features of the body part such as joints, boundaries, or so forth are detected using a mixture of experts per feature. Features may be tied together, e.g. the mixture of experts for one body part feature may depend on a neighboring feature. In subsequent frames of the stereo video, confidence maps are likewise produced using sequential detection from a mixture of experts; however, they are constrained using optical flow from other frames. Because optical flow estimates become less reliable between temporally distant frames, the optical flow is weighted by the temporal distance between frames. To create the 3D map 32 (i.e., of the arm and hand), a segmentation superpixel is dilated to prevent over-segmentation of regions of interest. Key points are extracted using a SIFT (scale-invariant feature transform) algorithm. These key points are used to create a geometrical mesh using non-uniform rational B-splines (NURBS). Once generated, the 3D map 32 is displayed on the display device 24. It should be noted that this is one illustrative 3D map segmentation process, and other machine vision processing approaches may be employed for this task.
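The temporal weighting of per-frame evidence mentioned above can be sketched as follows. The disclosure does not specify the exact weighting function, so the reciprocal-distance weight used here is an assumption for illustration; the function name is likewise hypothetical.

```python
def fuse_confidence_maps(confidences, frame_times, t_current):
    """Fuse per-frame detection confidences for one image location,
    weighting each frame by its temporal distance to the current frame
    so that temporally closer frames contribute more.
    Assumption: weight = 1 / (1 + |dt|); the actual weighting used in
    the disclosed method is not specified."""
    weights = [1.0 / (1.0 + abs(t_current - ti)) for ti in frame_times]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, confidences)) / total

# A confident detection at t=0 and a contradictory one 5 frames away:
fused = fuse_confidence_maps([1.0, 0.0], [0.0, 5.0], t_current=0.0)
```

The same scalar fusion would be applied per pixel of the confidence maps; the distant, contradictory frame is down-weighted rather than discarded.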

At 104, the at least one electronic processor 20 is programmed to detect a blood vessel in the 3D map 32 of the target portion. With continuing reference to FIGURES 1 and 2, and with further reference to FIGURES 3 and 4, to detect a blood vessel in the 3D map 32, a two-step approach is employed, in which the first step 60 amplifies subtle signal variations in the 3D map 32. FIGURE 3 illustrates one approach. In the example of FIGURE 3, the operation 60 employs amplification of motion variation using the Eulerian Video Magnification algorithm (see Wu et al., "Eulerian Video Magnification for Revealing Subtle Changes in the World", ACM Transactions on Graphics, vol. 31 no. 4, Proc. SIGGRAPH, 2012). The amplification of variations by this approach enhances variations not easily detected from the raw video. For example, as shown in FIGURE 3, after signal amplification a pixel 70 which is over vasculature has a sinusoidal signal 72 in the expected frequency range consistent with the arterial waveform (e.g. corresponding to the cardiac cycle or pulse rate). Another pixel 74, which is not located over vasculature, has a signal 76 that does not have a frequency consistent with physiology.

With reference to FIGURE 4, in the second step 62, pixels consistent with vascular physiology are identified. In the illustrative example of FIGURE 4, the signal is decomposed into its frequency components via Fourier transform to produce frequency spectra 80. One method to extract information is to identify the peaks within the physiologically feasible passband 82 (e.g. corresponding to the credible range of pulse rates for the patient, e.g. having a lower limit of 40 beats/min or some other lowest value that is realistic for a patient and an upper limit of 200 beats/min or some other highest value that is realistic for a patient) by windowing followed by a slope inversion and a local peak search. Other approaches can be employed to identify pixels representing vasculature based on the temporal variation of the values of the pixels produced by the computation 60.
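The second step 62 can be sketched as a per-pixel frequency test: a pixel is classified as overlying vasculature if its dominant temporal frequency lies inside the physiologically feasible passband. The peak-ratio criterion below is one plausible implementation, not the specific slope-inversion peak search of FIGURE 4; the function name and threshold are assumptions.

```python
import numpy as np

def is_vasculature_pixel(series, fps, low_bpm=40.0, high_bpm=200.0, ratio=3.0):
    """Classify a pixel time series as vasculature if its spectral peak
    inside the pulse passband [low_bpm, high_bpm] dominates the
    out-of-band spectrum by a factor `ratio` (an assumed threshold)."""
    spectrum = np.abs(np.fft.rfft(series - np.mean(series)))
    freqs_bpm = np.fft.rfftfreq(len(series), d=1.0 / fps) * 60.0
    band = (freqs_bpm >= low_bpm) & (freqs_bpm <= high_bpm)
    if not band.any() or not (~band).any():
        return False
    return bool(spectrum[band].max() > ratio * (spectrum[~band].max() + 1e-12))

fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
pulse_pixel = 128.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t)  # 72 beats/min
drift_pixel = 128.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t)  # 12 beats/min drift
```

The pulsatile pixel passes the test while slow illumination drift, whose energy falls below the passband, is rejected.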

The approach of FIGURES 3 and 4 is one illustrative example of blood vessel detection, and other approaches can be used, for example relying upon detection of edges of vessels in the image. In another variant, the stereo camera may include an infrared imager that provides thermal imaging of blood vessels. The infrared image is spatially aligned with the 3D map from the stereo camera to provide the blood vessel detection.

At 106, the at least one electronic processor 20 is programmed to determine a target needle position relative to the detected blood vessel based on the data related to one or more of target depth, target angle, and target speed (i.e., the data stored in the non-transitory storage medium 26). The target needle position can include one or more of the location of the needle (based on the target depth and target angle), a translational location in the 3D map 32, and target speed. From the target depth, target angle, and target speed data stored in the non-transitory storage medium 26 and the blood vessels and other vasculature detected in the 3D map 32, the at least one electronic processor 20 is programmed to calculate an optimal (i.e. target) location (e.g., x, y, and z coordinates), angle, and insertion depth of the needle 12 into the detected blood vessel on the 3D map 32. To determine the optimal location, the at least one electronic processor 20 is programmed to find branch points and determine vessel size in the 3D map 32, and use the data stored in the non-transitory storage medium 26 to calculate the target needle position.
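The relationship between target depth, target angle, and the skin-surface entry point follows from simple trigonometry, which can be sketched as below. The disclosure does not give explicit formulas, so this geometry (needle as a straight line from the skin entry point to the vessel) is an illustrative assumption.

```python
import math

def insertion_geometry(vessel_depth_mm, angle_deg):
    """For a vessel lying vessel_depth_mm below the skin and a target
    insertion angle (relative to the skin surface), return:
      travel -- distance the tip moves along the needle axis to reach
                the vessel (depth / sin(angle)), and
      offset -- horizontal distance of the skin entry point from the
                point directly above the vessel (depth / tan(angle))."""
    theta = math.radians(angle_deg)
    travel = vessel_depth_mm / math.sin(theta)
    offset = vessel_depth_mm / math.tan(theta)
    return travel, offset

# Hypothetical venipuncture target: vessel 3 mm deep, 30-degree angle.
travel_mm, offset_mm = insertion_geometry(3.0, 30.0)
```

A shallower angle trades a longer intradermal path (larger travel) for a gentler approach, which is why the stored target angle differs between veins and deeper arteries.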

At 108, the at least one electronic processor 20 is programmed to perform needle tracking machine vision processing of the stereo images to determine the current position of the needle 12 relative to the detected blood vessel. In some examples, the at least one electronic processor 20 is programmed to determine the current position of the needle 12 relative to the detected blood vessel by detecting the visual markers 30 connected to the needle in the 3D map 32. At least two such markers 30 are generally employed to determine the current angle of the needle. It may be noted that the markers are usually not able to be disposed on the needle 12 itself, but rather are disposed on a component that is in a rigid spatial relationship with the needle 12, such as the barrel of the illustrative syringe 13. To determine the current actual needle speed, the needle position in successive frames of the video is suitably determined to assess a distance, and the time interval between the successive frames provides the time dimension of the speed (distance/time).
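The speed estimate described above (distance between tip positions in successive frames divided by the frame interval) can be sketched directly; positions and units here are illustrative.

```python
import math

def needle_speed(tip_prev, tip_curr, fps):
    """Estimate needle-tip speed from tip positions (in mm) observed in
    two successive video frames: speed = distance / frame interval,
    where the frame interval is 1/fps seconds."""
    distance_mm = math.dist(tip_prev, tip_curr)
    return distance_mm * fps  # mm per second

# Tip advances 1 mm between frames of 30 frame/s video.
speed = needle_speed((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), fps=30.0)
```

In practice the per-frame estimate would be smoothed over several frames, since marker localization noise is amplified by the short frame interval.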

At 110, the at least one electronic processor 20 is programmed to identify corrective action to align the current position of the needle from 108 with the target needle position from 106. This operation is performed when the determined current position of the needle 12 (determined at 108) is offset by more than some allowable tolerance from the determined target position of the needle (determined at 106). The corrective action can include moving the position, the angle, and/or the speed of the needle 12 to align with the determined target position, and can be displayed with the 3D map 32 on the display device 24, or presented aurally via a loudspeaker using synthesized or prerecorded verbiage (e.g. speaking "Use a more shallow angle" or "Increase insertion speed" or so forth).
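The tolerance-based comparison at 110 can be sketched as follows. The parameter names, tolerance values, and advice strings are illustrative assumptions; only the overall pattern (emit a correction when a parameter deviates beyond its tolerance) comes from the disclosure.

```python
def corrective_actions(current, target, tol):
    """Compare the current needle pose to the target pose and return
    textual corrections for each parameter outside its tolerance.
    Pose dicts hold 'depth_mm', 'angle_deg', and 'speed_mm_s';
    tol gives the allowed deviation per parameter."""
    advice = []
    for key, low_msg, high_msg in (
        ("depth_mm", "insert deeper", "needle too deep"),
        ("angle_deg", "use a steeper angle", "use a more shallow angle"),
        ("speed_mm_s", "increase insertion speed", "decrease insertion speed"),
    ):
        delta = current[key] - target[key]
        if delta < -tol[key]:
            advice.append(low_msg)
        elif delta > tol[key]:
            advice.append(high_msg)
    return advice

current = {"depth_mm": 1.0, "angle_deg": 45.0, "speed_mm_s": 5.0}
target = {"depth_mm": 3.0, "angle_deg": 30.0, "speed_mm_s": 5.0}
tol = {"depth_mm": 0.5, "angle_deg": 5.0, "speed_mm_s": 1.0}
advice = corrective_actions(current, target, tol)
```

Each returned string maps naturally onto the feedback mechanisms described below: AR text, synthesized speech, or an LED/haptic cue keyed to the parameter.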

With continuing reference to FIGURES 1 and 2, and referring now to FIGURE 5, in some embodiments, the device 10 includes a feedback mechanism 34 configured to present a corrective action to a user during insertion of the needle 12 into a detected blood vessel. In some embodiments, the feedback mechanism 34 includes an augmented-reality heads-up display (AR-HUD) device 36 with one or more AR-HUD displays 40. The illustrative design employs left-eye and right-eye displays 40, but alternatively the display can be a single large window that spans both eyes. In some examples, the stereo camera 14 is mounted to the AR-HUD device 36 to provide a "first person view" so as to align the AR content with the actual view seen through the transparent display(s) 40 of the AR-HUD. In some examples, the AR-HUD device 36 can be configured as a helmet, a headband, glasses, goggles, or other suitable embodiment in order to be worn on the head of the user. The stereo camera 14 is mounted to the AR-HUD device 36 (e.g., to overlay the user's forehead, or including two stereo cameras disposed on lenses of the glasses).

To perform operation 110 using the AR-HUD approach of FIGURE 5, the AR-HUD 36 transmits the stereo images comprising a video stream from the stereo camera 14 to the processor 20 of FIGURE 1. After operations 104-110 are performed (i.e., the blood vessel, target needle position, current needle position, and correction are determined) at the processor 20 and AR content for correcting any needle insertion errors is generated, the AR content is transmitted back from the processor 20 to the AR-HUD 36, which displays the generated text and/or graphics (or, more generally, the AR content) on the AR-HUD display 40. For example, the text may state "needle too shallow" or "target depth reached", or so forth. In another example, graphics may be superimposed as AR content, e.g. showing the correct angle of the needle as a translucent line that terminates at the point on the target blood vessel where the needle should be inserted. Here, the clinician merely needs to align the physical needle with this translucent line. In one example, the graphics can include a "level" graphic to show the stability of the needle 12.

To generate the text or graphics aligned with or positioned proximate to an actually observed feature such as the blood vessel, the processor 20 is programmed to use Simultaneous Location and Mapping (SLAM) processing to align the AR content (e.g. the superimposed translucent target needle angle/position, or textual instruction annotations, and/or so forth) with the recorded video. The SLAM processing may be formulated mathematically by the probabilistic formulation P(c1(t), c2(t), ..., cn(t), p(t) | f1, ..., ft), where c1(t), c2(t), ..., cn(t) are the locations of reference points of the needle 12 (or the connected barrel of the syringe 13) at a time t (where without loss of generality n reference points are assumed), p(t) is the location of the user at time t, and f1, ..., ft are the frames of the obtained video up to the time t. Various SLAM algorithms known for robotic vision mapping, self-driving vehicle navigation technology, or so forth are suitably applied to implement the AR content-to-video mapping.

Referring back to FIGURE 5, in another embodiment, the feedback mechanism 34 includes a speaker 42 configured to provide audio instructions to a user presenting the corrective action. In this embodiment, the textual guidance otherwise displayed on the AR-HUD device 36 can be broadcast as audio by the speaker 42.

In another embodiment, the feedback mechanism 34 includes one or more light emitting diodes (LEDs) 44, configured to illuminate to indicate the corrective action during insertion of the needle into the detected blood vessel. The illumination of the LEDs 44 can show the target position, target speed, and/or target angle of the needle 12. In one example, the corrective action includes an adjustment of an angle of the current position of the needle 12 to align with the target needle angle. In another example, the corrective action includes an adjustment of a rate of change of the current position of the needle 12 to align with the target needle speed.

In another embodiment, the feedback mechanism 34 includes a haptic device 46 built into or secured with the needle 12 (e.g. via the barrel of the syringe 13 in FIGURE 5) and configured to present the corrective action comprising haptic feedback to the user. For example, the haptic device 46 may vibrate when the needle 12 deviates from the target needle position during needle insertion.

The disclosure has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the disclosure be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.