US10095228 - Systems and methods for mitigating vigilance decrement while maintaining readiness using augmented reality in a vehicle

Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters


Claims

1. A vigilance system for improving vigilance and readiness of an operator in an autonomous vehicle that includes an augmented reality (AR) display, comprising:
one or more processors;
a memory communicably coupled to the one or more processors and storing:
a vigilance module including instructions that when executed by the one or more processors cause the one or more processors to compute an engagement level of the operator according to operator state information monitored from at least one internal sensor of the autonomous vehicle to characterize an extent of vigilance decrement and readiness presently exhibited by the operator relative to operating characteristics of the autonomous vehicle including an external environment around the autonomous vehicle and at least semi-autonomous operation of the autonomous vehicle; and
a rendering module including instructions that when executed by the one or more processors cause the one or more processors to dynamically render, on the AR display, at least one graphical element that is a visualization based, at least in part, on sensor data about the external environment from at least one external sensor of the autonomous vehicle,
wherein the rendering module includes instructions to dynamically render the at least one graphical element by varying visual characteristics of the at least one graphical element as a function of the engagement level in order to improve the engagement level of the operator in regard to the readiness and the vigilance of the operator for i) a manual handover of control to the operator by the autonomous vehicle and ii) independent intervention by the operator to take over control of the autonomous vehicle.
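Claim 1 recites a closed loop: monitor the operator with internal sensors, reduce that state to an engagement level, and vary how graphical elements are rendered on the AR display as a function of that level. A minimal Python sketch of that loop follows; every name, weighting, and threshold is invented for illustration and is not part of the claim.

```python
"""Illustrative sketch only: the claim recites functional modules, not code."""
from dataclasses import dataclass


@dataclass
class OperatorState:
    eyes_on_road_fraction: float  # 0.0..1.0 over a recent monitoring window
    reaction_probe_ms: float      # latency to a recent attention probe


def compute_engagement(state: OperatorState) -> float:
    """Collapse monitored operator state into a 0..1 engagement level."""
    # Toy weighting: attentive gaze raises the score, slow reactions lower it.
    gaze_term = state.eyes_on_road_fraction
    reaction_term = max(0.0, 1.0 - state.reaction_probe_ms / 2000.0)
    return 0.5 * gaze_term + 0.5 * reaction_term


def render_salience(engagement: float) -> dict:
    """Vary visual characteristics as a function of the engagement level."""
    # Lower engagement -> brighter, more central, more animated overlay,
    # nudging the operator back toward readiness for a handover.
    salience = 1.0 - engagement
    return {
        "intensity": 0.3 + 0.7 * salience,
        "style": "pulsing" if salience > 0.5 else "static",
        "position": "line_of_sight" if salience > 0.5 else "periphery",
    }


if __name__ == "__main__":
    drowsy = OperatorState(eyes_on_road_fraction=0.2, reaction_probe_ms=1500)
    level = compute_engagement(drowsy)          # low engagement level
    print(level, render_salience(level))        # salient rendering requested
```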
2. The vigilance system of claim 1, wherein the rendering module includes instructions to dynamically render the at least one graphical element including instructions to vary visual characteristics of the at least one graphical element based, at least in part, on changes in the engagement level over time, and
wherein the rendering module includes instructions to induce vigilance and readiness within the operator to improve operation of the autonomous vehicle to operate at least semi-autonomously by avoiding lapses in the vigilance and the readiness of the operator and ensuring the operator is prepared to control the autonomous vehicle when the at least semi-autonomous operation of the autonomous vehicle is insufficient.
3. The vigilance system of claim 1, further comprising:
a monitoring module including instructions that when executed by the one or more processors cause the one or more processors to collect, using the at least one internal sensor of the autonomous vehicle, the operator state information perceived about the operator, wherein the vigilance module includes instructions to compute the engagement level by characterizing the operator state information according to a vigilance model to indicate a measure of a present mental awareness of the operator in relation to the operating characteristics as a statistical likelihood, and
wherein the vigilance model is a statistical model that characterizes at least a likely vigilance decrement of the operator as a function of the operator state information according to the external environment.
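Claim 3 constrains the vigilance model to a statistical model that maps operator state and the external environment to a likelihood of vigilance decrement. A logistic form is one plausible realization; the features and coefficients below are invented for illustration.

```python
"""Hypothetical statistical vigilance model in the spirit of claim 3."""
import math


def decrement_likelihood(minutes_on_task: float,
                         blink_rate_hz: float,
                         scene_complexity: float) -> float:
    """Return P(vigilance decrement) in 0..1 under a toy logistic model."""
    # Long monotonous supervision and an elevated blink rate raise the risk;
    # a complex external environment (more to watch) is assumed to lower it.
    z = (-2.0
         + 0.05 * minutes_on_task
         + 3.0 * blink_rate_hz
         - 1.5 * scene_complexity)
    return 1.0 / (1.0 + math.exp(-z))


if __name__ == "__main__":
    # 40 minutes of supervision, slightly elevated blink rate, sparse highway.
    print(round(decrement_likelihood(40.0, 0.4, 0.2), 3))  # ~0.711
```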
4. The vigilance system of claim 1, wherein the rendering module includes instructions to dynamically render the at least one graphical element including instructions to render the at least one graphical element as a visualization of internal state information that indicates present decisions and situational awareness of how the autonomous vehicle is performing the at least semi-autonomous operation, and
wherein the rendering module includes instructions to dynamically render by varying the visual characteristics of the at least one graphical element including changing which graphical elements are displayed and a manner in which the graphical elements are displayed within the AR display.
5. The vigilance system of claim 4, wherein the rendering module includes instructions to change a manner in which the graphical elements are displayed including instructions to modify an intensity, a style, and a location within the AR display of the at least one graphical element to be apparent and distinct within a line-of-sight of the operator when the engagement level indicates that the vigilance and the readiness of the operator are diminishing.
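Claim 5 varies intensity, style, and location so the element becomes apparent within the operator's line of sight as engagement drops. One hypothetical way to do that is to interpolate the element from an unobtrusive default position toward the measured gaze point; the geometry and thresholds below are invented.

```python
"""Sketch of claim 5: moving an element toward the operator's line of sight."""

def place_element(gaze_xy: tuple[float, float],
                  default_xy: tuple[float, float],
                  engagement: float) -> dict:
    """Interpolate the element toward the gaze point as engagement drops."""
    # engagement 1.0 -> keep the unobtrusive default location;
    # engagement 0.0 -> sit directly in the operator's line of sight.
    t = 1.0 - engagement
    x = default_xy[0] + t * (gaze_xy[0] - default_xy[0])
    y = default_xy[1] + t * (gaze_xy[1] - default_xy[1])
    return {
        "position": (x, y),
        "intensity": 0.3 + 0.7 * t,               # brighter when disengaged
        "style": "flashing" if t > 0.6 else "steady",
    }


if __name__ == "__main__":
    # Disengaged operator gazing at the lower-left of the windshield.
    print(place_element(gaze_xy=(0.2, 0.8), default_xy=(0.9, 0.1),
                        engagement=0.2))
```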
6. The vigilance system of claim 1, wherein the rendering module includes instructions to dynamically render the at least one graphical element including instructions to generate the at least one graphical element as an overlay of the AR display using the sensor data about the external environment,
wherein the overlay is a visualization of the sensor data as perceived by at least one external sensor of the autonomous vehicle, and
wherein the at least one external sensor includes a Light Detection and Ranging (LiDAR) sensor.
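Claim 6 builds the overlay as a visualization of external sensor data such as LiDAR returns. A pinhole projection of 3-D points into display coordinates is one common way to produce such an overlay; the camera parameters below are invented and do not come from the patent.

```python
"""Sketch of claim 6: turning LiDAR returns into AR overlay coordinates."""

def project_points(points_xyz: list[tuple[float, float, float]],
                   focal: float = 800.0,
                   cx: float = 640.0,
                   cy: float = 360.0) -> list[tuple[float, float, float]]:
    """Project LiDAR points (x right, y down, z forward, metres) to pixels.

    Returns (u, v, range) tuples so the renderer can, e.g., color by distance.
    """
    overlay = []
    for x, y, z in points_xyz:
        if z <= 0.5:            # drop returns behind or on top of the sensor
            continue
        u = cx + focal * x / z
        v = cy + focal * y / z
        rng = (x * x + y * y + z * z) ** 0.5
        overlay.append((u, v, rng))
    return overlay


if __name__ == "__main__":
    # Two returns: a vehicle 20 m ahead and a pedestrian 8 m ahead-right.
    print(project_points([(0.0, 0.5, 20.0), (2.0, 0.8, 8.0)]))
```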
7. The vigilance system of claim 1, wherein the at least semi-autonomous operation of the autonomous vehicle is according to at least a supervised autonomy standard as electronically controlled by an autonomous driving module of the autonomous vehicle,
wherein the vigilance module includes instructions to compute the engagement level including instructions to analyze dynamic vehicle data along with the operator state information to account for the external environment of the autonomous vehicle when computing the engagement level, and wherein the dynamic vehicle data includes at least telematics of the autonomous vehicle, and a current location of the autonomous vehicle.
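Claim 7 folds dynamic vehicle data (telematics and current location) into the engagement computation. One hypothetical approach scales the raw operator score by how much attention the driving context demands; the thresholds and weights below are invented.

```python
"""Sketch of claim 7: contextualizing engagement with dynamic vehicle data."""

def contextual_engagement(base_engagement: float,
                          speed_kmh: float,
                          in_urban_area: bool) -> float:
    """Rescale the operator score by the attention demand of the context."""
    # High speed and urban surroundings demand more vigilance, so the same
    # operator behavior maps to a lower effective engagement level.
    demand = 1.0
    if speed_kmh > 100.0:
        demand += 0.3
    if in_urban_area:
        demand += 0.4
    return max(0.0, min(1.0, base_engagement / demand))


if __name__ == "__main__":
    print(contextual_engagement(0.7, speed_kmh=120.0, in_urban_area=True))
```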
8. The vigilance system of claim 1, wherein the vigilance module includes instructions to compute the engagement level including instructions to use at least gaze data from an eye-tracking camera that identifies a direction in which the operator is presently gazing to compute the engagement level.
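Claim 8 derives the engagement level from gaze direction reported by an eye-tracking camera. A simple proxy, sketched below with hypothetical inputs, is the fraction of recent gaze samples that fall on the forward roadway.

```python
"""Sketch of claim 8: engagement from eye-tracking gaze data."""

def gaze_engagement(gaze_samples: list[tuple[float, float]],
                    road_region: tuple[float, float, float, float]) -> float:
    """Fraction of recent gaze samples falling inside the road region.

    gaze_samples are normalized (x, y) display coordinates; road_region is
    (x_min, y_min, x_max, y_max) covering the forward roadway.
    """
    if not gaze_samples:
        return 0.0   # no data: conservatively treat the operator as disengaged
    x0, y0, x1, y1 = road_region
    on_road = sum(1 for x, y in gaze_samples
                  if x0 <= x <= x1 and y0 <= y <= y1)
    return on_road / len(gaze_samples)


if __name__ == "__main__":
    samples = [(0.5, 0.4), (0.52, 0.45), (0.1, 0.9), (0.55, 0.42)]
    print(gaze_engagement(samples, road_region=(0.3, 0.2, 0.7, 0.6)))  # 0.75
```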
9. A non-transitory computer-readable medium for improving vigilance and readiness of an operator in an autonomous vehicle that includes an augmented reality (AR) display and storing instructions that when executed by one or more processors cause the one or more processors to:
compute an engagement level of the operator according to operator state information monitored from at least one internal sensor of the autonomous vehicle to characterize an extent of vigilance decrement and readiness presently exhibited by the operator relative to operating characteristics of the autonomous vehicle including an external environment around the autonomous vehicle and at least semi-autonomous operation of the autonomous vehicle; and
dynamically render, on the AR display, at least one graphical element that is a visualization based, at least in part, on sensor data about the external environment from at least one external sensor of the autonomous vehicle,
wherein the instructions to dynamically render the at least one graphical element include instructions to vary visual characteristics of the at least one graphical element as a function of the engagement level in order to improve the engagement level of the operator in regard to the readiness and the vigilance of the operator for i) a manual handover of control to the operator by the autonomous vehicle and ii) independent intervention by the operator to take over control of the autonomous vehicle.
10. The non-transitory computer-readable medium of claim 9, wherein the instructions to dynamically render the at least one graphical element include instructions to vary visual characteristics of the at least one graphical element based, at least in part, on changes in the engagement level over time, and
wherein the instructions to induce vigilance and readiness within the operator improve operation of the autonomous vehicle to operate at least semi-autonomously by avoiding lapses in the vigilance and the readiness of the operator and ensuring the operator is prepared to control the autonomous vehicle when the at least semi-autonomous operation of the autonomous vehicle is insufficient.
11. The non-transitory computer-readable medium of claim 9, further comprising:
instructions that when executed by the one or more processors cause the one or more processors to collect, using at least one internal sensor of the autonomous vehicle, operator state information perceived about the operator, wherein the instructions to compute the engagement level include instructions to characterize the operator state information according to a vigilance model to indicate a measure of a present mental awareness of the operator in relation to the operating characteristics as a statistical likelihood, and
wherein the vigilance model is a statistical model that characterizes at least a likely vigilance decrement of the operator as a function of the operator state information according to the external environment.
12. The non-transitory computer-readable medium of claim 9, wherein the instructions to dynamically render the at least one graphical element include instructions to render the at least one graphical element as a visualization of internal state information that indicates present decisions and situational awareness of how the autonomous vehicle is performing the at least semi-autonomous operation, and
wherein the instructions to dynamically render include instructions to vary the visual characteristics of the at least one graphical element including changing which graphical elements are displayed and a manner in which the graphical elements are displayed within the AR display.
13. The non-transitory computer-readable medium of claim 9, wherein the instructions to dynamically render the at least one graphical element include instructions to generate the at least one graphical element as an overlay of the AR display using the sensor data about the external environment, wherein the overlay is a visualization of the sensor data as perceived by at least one external sensor of the autonomous vehicle, wherein the at least one external sensor includes a Light Detection and Ranging (LiDAR) sensor, and
wherein the at least semi-autonomous operation of the autonomous vehicle is according to at least a supervised autonomy standard as electronically controlled by an autonomous driving module of the autonomous vehicle.
14. A method of improving vigilance and readiness of an operator in an autonomous vehicle that includes an augmented reality (AR) display, comprising:
computing, using a processor of the autonomous vehicle, an engagement level of the operator according to operator state information monitored from at least one internal sensor of the autonomous vehicle to characterize an extent of vigilance decrement and readiness presently exhibited by the operator relative to operating characteristics of the autonomous vehicle including an external environment around the autonomous vehicle and at least semi-autonomous operation of the autonomous vehicle; and
dynamically rendering, by the processor on the AR display, at least one graphical element that is a visualization based, at least in part, on sensor data about the external environment from at least one external sensor of the autonomous vehicle,
wherein dynamically rendering the at least one graphical element includes varying visual characteristics of the at least one graphical element as a function of the engagement level in order to improve the engagement level of the operator in regard to the readiness and the vigilance of the operator for i) a manual handover of control to the operator by the autonomous vehicle and ii) independent intervention by the operator to take over control of the autonomous vehicle.
15. The method of claim 14, wherein dynamically rendering the at least one graphical element includes varying visual characteristics of the at least one graphical element based, at least in part, on changes in the engagement level over time, and
wherein dynamically rendering the at least one graphical element induces vigilance and readiness within the operator, improving operation of the autonomous vehicle to operate at least semi-autonomously by avoiding lapses in the vigilance and the readiness of the operator and ensuring the operator is prepared to control the autonomous vehicle when the at least semi-autonomous operation of the autonomous vehicle is insufficient.
16. The method of claim 14, further comprising:
collecting, using the at least one internal sensor of the autonomous vehicle, the operator state information perceived about the operator, wherein computing the engagement level includes characterizing the operator state information according to a vigilance model to indicate a measure of a present mental awareness of the operator in relation to the operating characteristics as a statistical likelihood, and
wherein the vigilance model is a statistical model that characterizes at least a likely vigilance decrement of the operator as a function of the operator state information according to the external environment.
17. The method of claim 14, wherein dynamically rendering the at least one graphical element further includes rendering the at least one graphical element as a visualization of internal state information that indicates present decisions and situational awareness of how the autonomous vehicle is performing the at least semi-autonomous operation, and
wherein dynamically rendering by varying the visual characteristics of the at least one graphical element includes changing which graphical elements are displayed and a manner in which the graphical elements are displayed within the AR display.
18. The method of claim 17, wherein changing a manner in which the graphical elements are displayed includes modifying an intensity, a style, and a location within the AR display of the at least one graphical element to be apparent and distinct within a line-of-sight of the operator when the engagement level indicates that the vigilance and the readiness of the operator are diminishing.
19. The method of claim 14, wherein dynamically rendering the at least one graphical element includes generating the at least one graphical element as an overlay of the AR display using the sensor data about the external environment, wherein the overlay is a visualization of the sensor data as perceived by the at least one external sensor of the autonomous vehicle, and wherein the at least one external sensor includes a Light Detection and Ranging (LiDAR) sensor.
20. The method of claim 14, wherein the at least semi-autonomous operation of the autonomous vehicle is according to at least a supervised autonomy standard,
wherein computing the engagement level further includes analyzing dynamic vehicle data along with the operator state information to account for the external environment of the autonomous vehicle when computing the engagement level, wherein the dynamic vehicle data includes at least telematics of the autonomous vehicle, and a current location of the autonomous vehicle, and
wherein computing the engagement level includes collecting at least gaze data from an eye-tracking camera that identifies a direction in which the operator is presently gazing.