US20190221044 - DISPLAY SYSTEM HAVING SENSORS

Note: This text is based on automatic Optical Character Recognition (OCR) processing. Please use the PDF version for legal matters.


Claims

1. A system, comprising:
a controller comprising one or more processors; and
a head-mounted display (HMD) configured to display a 3D virtual view to a user, wherein the HMD comprises:
left and right displays for displaying frames including left and right images to the user's eyes to provide the 3D virtual view to the user;
a plurality of sensors configured to collect information about the user and the user's environment and provide the information to the controller, wherein the plurality of sensors includes:
one or more cameras configured to capture views of the user's environment;
one or more world mapping sensors configured to determine range information for objects in the environment; and
one or more eye tracking sensors configured to track position and movement of the user's eyes;
wherein the controller is configured to render frames for display by the HMD that include virtual content composited into the captured views of the user's environment based at least in part on the range information from the one or more world mapping sensors and the position and movement of the user's eyes as tracked by the one or more eye tracking sensors.
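
For orientation only, here is a minimal sketch (in Python) of the data flow claim 1 describes: sensor information goes to the controller, which composites virtual content into the captured views using the range information and the tracked gaze. All names (SensorBundle, VirtualObject, composite_frame) are hypothetical and do not appear in the patent; this is one plausible reading of the claim, not its implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical containers for the sensor data enumerated in claim 1.
@dataclass
class SensorBundle:
    camera_views: Tuple[object, object]   # left/right captured views of the environment
    range_map: List[List[float]]          # per-pixel distances from the world mapping sensors
    gaze_point: Tuple[float, float]       # normalized gaze position from the eye tracking sensors
    gaze_velocity: Tuple[float, float]    # eye movement, also from the eye tracking sensors

@dataclass
class VirtualObject:
    content: object
    anchor_px: Tuple[int, int]            # where in the captured view the object is anchored

def composite_frame(sensors: SensorBundle, virtual_objects: List[VirtualObject]):
    """Render left/right frames with virtual content composited into the
    captured views, placed at depths taken from the range information."""
    frames = []
    for view in sensors.camera_views:
        layers = []
        for obj in virtual_objects:
            x, y = obj.anchor_px
            depth = sensors.range_map[y][x]   # range info decides where the content sits in depth
            layers.append((obj.content, depth))
        # Sort far-to-near so nearer surfaces occlude farther ones.
        layers.sort(key=lambda layer: layer[1], reverse=True)
        frames.append({"background": view,
                       "layers": layers,
                       "gaze": sensors.gaze_point})   # gaze is forwarded to later stages (e.g. claim 3)
    return frames   # one frame per display (left, right)
```
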
2. The system as recited in claim 1, wherein the controller is configured to determine depths at which to render the virtual content in the 3D virtual view based at least in part on the range information from the one or more world mapping sensors.
3. The system as recited in claim 1, wherein the controller is configured to:
determine a region within the 3D virtual view at which the user is looking based on the position of the user's eyes as determined by the one or more eye tracking sensors; and
render content in the determined region at a higher resolution than in other regions of the 3D virtual view.
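
Claim 3 describes gaze-contingent (foveated) rendering. Below is a small illustrative sketch, assuming a hypothetical choose_resolution helper and normalized gaze coordinates; it is one plausible reading of the claim rather than the patent's implementation.

```python
def choose_resolution(gaze_xy, region_bounds, full_res=1.0, peripheral_res=0.5):
    """Return the resolution scale for a screen region: full resolution where
    the user is looking, reduced resolution elsewhere (claim 3)."""
    gx, gy = gaze_xy                  # normalized gaze point from the eye tracking sensors
    x0, y0, x1, y1 = region_bounds    # normalized bounds of the candidate region
    looking_at_region = (x0 <= gx <= x1) and (y0 <= gy <= y1)
    return full_res if looking_at_region else peripheral_res

# Example: with the gaze in the upper-left quadrant, that tile renders at
# full resolution and the other tiles at half resolution.
tiles = [(0.0, 0.0, 0.5, 0.5), (0.5, 0.0, 1.0, 0.5),
         (0.0, 0.5, 0.5, 1.0), (0.5, 0.5, 1.0, 1.0)]
scales = [choose_resolution((0.2, 0.3), t) for t in tiles]   # -> [1.0, 0.5, 0.5, 0.5]
```
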
4. The system as recited in claim 1, wherein the plurality of sensors further includes:
one or more head pose sensors configured to capture information about the user's position, orientation, and motion in the environment;
one or more light sensors configured to capture lighting information including color, intensity, and direction in the user's environment;
one or more hand sensors configured to track position, movement, and gestures of the user's hands;
one or more eyebrow sensors configured to track expressions of the user's eyebrows; and
one or more lower jaw sensors configured to track expressions of the user's mouth and jaw.
5. The system as recited in claim 4, wherein the controller is configured to render lighting effects for the virtual content based at least in part on the lighting information captured by the one or more light sensors.
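
Claims 4 and 5 have the controller render lighting effects from the sensed color, intensity, and direction of environment light. A minimal sketch of one way that information could feed a shading step, using a simple Lambertian term; the function and parameter names are assumptions for illustration only.

```python
import math

def shade_virtual_surface(base_rgb, surface_normal, light_dir, light_rgb, light_intensity):
    """Tint a virtual surface using the sensed environment light
    (color, intensity, direction) so it matches the real scene."""
    # Normalize the sensed light direction.
    norm = math.sqrt(sum(c * c for c in light_dir)) or 1.0
    ld = [c / norm for c in light_dir]
    # Lambertian term: how directly the light hits the surface.
    ndotl = max(0.0, sum(n * l for n, l in zip(surface_normal, ld)))
    return [min(1.0, b * lc * light_intensity * ndotl)
            for b, lc in zip(base_rgb, light_rgb)]

# Example: white virtual surface lit by warm light from above.
shade_virtual_surface([1.0, 1.0, 1.0], [0.0, 1.0, 0.0],
                      [0.0, 1.0, 0.0], [1.0, 0.9, 0.8], 0.8)
```
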
6. The system as recited in claim 4, wherein the HMD further comprises an inertial-measurement unit (IMU), wherein the controller is configured to:
augment information received from the IMU with the information captured by the one or more head pose sensors to determine current position, orientation, and motion of the user in the environment; and
render the frames for display by the HMD based at least in part on the determined current position, orientation, and motion of the user.
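
Claim 6 has the controller augment IMU data with head pose sensor data to track position, orientation, and motion. One common way to combine a fast but drifting source with a slower absolute one is a complementary filter; the patent does not name a specific fusion method, so the sketch below uses that technique purely as an illustrative stand-in, with hypothetical names throughout.

```python
def fuse_head_pose(imu_delta, head_pose_estimate, previous_pose, blend=0.98):
    """Blend the IMU's fast relative update with the head pose sensors'
    absolute estimate (per-axis complementary filter).

    imu_delta          -- change in (x, y, z, yaw, pitch, roll) since the last frame, from the IMU
    head_pose_estimate -- absolute pose from the world-facing head pose sensors
    previous_pose      -- pose used for the previous frame
    """
    propagated = [p + d for p, d in zip(previous_pose, imu_delta)]   # integrate the IMU update
    return [blend * prop + (1.0 - blend) * absolute                  # pull back toward the absolute estimate
            for prop, absolute in zip(propagated, head_pose_estimate)]
```
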
7. The system as recited in claim 4, wherein the controller is configured to render an avatar of the user's face for display in the 3D virtual view based at least in part on information collected by the one or more eye tracking sensors, the one or more eyebrow sensors, and the one or more lower jaw sensors.
8. The system as recited in claim 4, wherein the controller is configured to render representations of the user's hands for display in the 3D virtual view based at least in part on information collected by the one or more hand sensors.
9. The system as recited in claim 4, wherein the controller is configured to detect interactions of the user with virtual content in the 3D virtual view based at least in part on information collected by the one or more hand sensors.
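
Claim 9 covers detecting the user's interactions with virtual content from hand sensor data. A toy sketch of one such check follows; the names are hypothetical, and a real system would use full tracked hand skeletons and richer gesture models.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VirtualObjectBounds:
    center: Tuple[float, float, float]   # position in the 3D virtual view
    radius: float                        # simple spherical interaction volume

def detect_touch(fingertip_xyz, obj: VirtualObjectBounds) -> bool:
    """Report an interaction when a tracked fingertip enters an object's volume."""
    dist_sq = sum((f - c) ** 2 for f, c in zip(fingertip_xyz, obj.center))
    return dist_sq <= obj.radius ** 2

# Example: fingertip 2 cm from a virtual button with a 5 cm interaction radius -> True.
detect_touch((0.02, 0.0, 0.0), VirtualObjectBounds((0.0, 0.0, 0.0), 0.05))
```
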
10. The system as recited in claim 1, wherein the one or more cameras configured to capture views of the user's environment include a left video camera corresponding to the user's left eye and a right video camera corresponding to the user's right eye.
11. A device, comprising:
a controller comprising one or more processors;
left and right displays for displaying frames including left and right images to a user's eyes to provide a 3D virtual view to the user;
a plurality of world-facing sensors configured to collect information about the user's environment and provide the information to the controller, wherein the plurality of world-facing sensors includes:
one or more cameras configured to capture views of the user's environment; and
one or more world mapping sensors configured to capture depth information in the user's environment;
a plurality of user-facing sensors configured to collect information about the user and provide the information to the controller, wherein the plurality of user-facing sensors includes one or more eye tracking sensors configured to track position and movement of the user's eyes;
wherein the controller is configured to render frames for display that include virtual content composited into the captured views of the user's environment based at least in part on the depth information captured by the one or more world mapping sensors and the position and movement of the user's eyes as tracked by the one or more eye tracking sensors.
12. The device as recited in claim 11, wherein the plurality of world-facing sensors further includes:
one or more head pose sensors configured to capture information about the user's position, orientation, and motion in the environment; and
one or more light sensors configured to capture lighting information including color, intensity, and direction in the user's environment.
13. The device as recited in claim 12, wherein the controller is configured to render lighting effects for the virtual content based at least in part on the lighting information captured by the one or more light sensors.
14. The device as recited in claim 12, wherein the device further comprises an inertial-measurement unit (IMU), wherein the controller is configured to augment information received from the IMU with the information captured by the one or more head pose sensors to determine current position, orientation, and motion of the user in the environment.
15. The device as recited in claim 11, wherein the plurality of user-facing sensors further includes:
one or more hand sensors configured to track position, movement, and gestures of the user's hands;
one or more eyebrow sensors configured to track expressions of the user's eyebrows; and
one or more lower jaw sensors configured to track expressions of the user's mouth and jaw.
16. The device as recited in claim 15, wherein the controller is configured to render an avatar of the user for display in the 3D virtual view based at least in part on information collected by the one or more eye tracking sensors, the one or more eyebrow sensors, the one or more lower jaw sensors, and the one or more hand sensors.
17. The device as recited in claim 15, wherein the controller is configured to detect interactions of the user with virtual content in the 3D virtual view based at least in part on information collected by the one or more hand sensors.
18. A method, comprising:
capturing, by a plurality of world-facing sensors of a head-mounted display (HMD) worn by a user, information about the user's environment, wherein the information about the user's environment includes views of the user's environment and depth information in the user's environment;
capturing, by a plurality of user-facing sensors of the HMD, information about the user, wherein the information about the user includes position and movement of the user's eyes;
rendering, by a controller of the HMD, frames for display that include virtual content composited into the captured views of the user's environment based at least in part on the depth information captured by the world-facing sensors and the position and movement of the user's eyes captured by the user-facing sensors; and
displaying, by the HMD, the rendered frames to the user to provide a 3D virtual view of the user's environment that includes the virtual content.
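
Read as software, the method of claim 18 is a per-frame capture/render/display loop. A skeletal sketch follows, assuming hypothetical capture_world, capture_user, render, and display callables supplied by the HMD runtime; it shows the shape of the loop, not the patent's implementation.

```python
def run_mixed_reality_loop(capture_world, capture_user, render, display, should_continue):
    """One possible shape for the method of claim 18: capture environment and
    user data each frame, composite virtual content, and display the result."""
    while should_continue():
        world = capture_world()     # views + depth from the world-facing sensors
        user = capture_user()       # eye position/movement from the user-facing sensors
        frames = render(world["views"], world["depth"], user["gaze"])
        display(frames)             # left/right frames shown on the HMD displays
```
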
19. The method as recited in claim 18, further comprising:
capturing, by the world-facing sensors, information about the user's position, orientation, and motion in the environment and lighting information including color, intensity, and direction in the user's environment;
determining, by the controller, current position, orientation, and motion of the user in the environment based at least in part on the information about the user's position, orientation, and motion in the environment captured by the world-facing sensors; and
rendering, by the controller, lighting effects for the virtual content based at least in part on the lighting information captured by the world-facing sensors.
20. The method as recited in claim 18, further comprising:
tracking, by the user-facing sensors, position, movement, and gestures of the user's hands, expressions of the user's eyebrows, and expressions of the user's mouth and jaw; and
rendering, by the controller, an avatar of the user for display in the 3D virtual view based at least in part on information collected by the plurality of user-facing sensors.
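
Claims 7, 16, and 20 drive an avatar from the user-facing sensors. The sketch below maps sensed eye, eyebrow, jaw, and hand readings onto avatar animation parameters in a blend-shape style; the mapping and all names are illustrative assumptions, not the patent's method.

```python
def avatar_parameters(eye_openness, eyebrow_raise, jaw_open, hand_joints):
    """Convert user-facing sensor readings into avatar animation parameters.

    eye_openness, eyebrow_raise, jaw_open -- normalized 0..1 values from the
    eye tracking, eyebrow, and lower jaw sensors
    hand_joints -- tracked 3D joint positions from the hand sensors
    """
    clamp = lambda v: max(0.0, min(1.0, v))
    return {
        "blink":       1.0 - clamp(eye_openness),
        "brow_raise":  clamp(eyebrow_raise),
        "mouth_open":  clamp(jaw_open),
        "hand_joints": list(hand_joints),   # passed through to the hand/arm rig
    }

# Example: eyes mostly open, brows slightly raised, mouth closed, no hand data.
avatar_parameters(0.9, 0.2, 0.0, [])
```
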