(WO2019067903) HEAD POSE ESTIMATION FROM LOCAL EYE REGION

WHAT IS CLAIMED IS:

1. A sensor device, comprising:

an image sensor;

a memory configured to store computer-executable instructions; and

a processor in communication with the memory and configured to execute the computer-executable instructions to at least:

access gaze information corresponding to at least one user eye;

determine that the at least one user eye is in a gaze state based at least in part on the gaze information;

in response to determining that the at least one eye is in the gaze state, access image data from the image sensor, the image data corresponding to at least an eye region that includes the at least one eye in the gaze state; and

determine head pose information based at least in part on the image data corresponding to the eye region, the head pose information defining at least a position of a user head and an orientation of the user head.
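
Illustrative aside (not part of the claims): the flow recited in claim 1 amounts to a gaze-triggered pipeline in which head pose is computed only while an eye is in a gaze state. A minimal Python sketch of that control flow follows; every name in it (SensorDevice, GazeTracker, and so on) is a hypothetical stand-in, not an implementation of the claims.

```python
# Structural sketch of the claim 1 flow; all collaborator objects are
# hypothetical stand-ins, not an actual implementation of the claims.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class HeadPose:
    position: Tuple[float, float, float]     # position of the user head
    orientation: Tuple[float, float, float]  # orientation (yaw, pitch, roll)

class SensorDevice:
    def __init__(self, image_sensor, gaze_tracker, pose_estimator):
        self.image_sensor = image_sensor
        self.gaze_tracker = gaze_tracker
        self.pose_estimator = pose_estimator

    def step(self) -> Optional[HeadPose]:
        # Access gaze information corresponding to at least one user eye.
        gaze = self.gaze_tracker.current_gaze()
        # Determine whether the eye is in a gaze state (e.g., a stable
        # gaze point or gaze vector, per claim 3).
        if not gaze.is_gaze_state():
            return None
        # In response, access image data covering at least the eye region.
        eye_region = self.image_sensor.capture_eye_region()
        # Determine head pose (position and orientation) from that region.
        return self.pose_estimator.estimate(eye_region)
```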

2. The sensor device of claim 1, wherein determining the head pose information is based at least in part on detecting one or more user eye features of the at least one eye based at least in part on the image data.

3. The sensor device of any of claims 1-2, wherein the gaze state corresponds to at least one of a gaze point of the at least one eye or a gaze vector of the at least one eye.

4. The sensor device of any of claims 1-3, wherein determining the head pose information includes using an artificial intelligence technique to evaluate the image data corresponding to the eye region.

5. The sensor device of any of claims 1-4, wherein the artificial intelligence technique comprises a machine learning technique based at least in part on a feature vector that corresponds to eye features.

6. The sensor device of any of claims 1-4, wherein the artificial intelligence technique comprises a deep learning system that receives the image data as input and outputs the head pose information.

7. The sensor device of any of claims 1-5, wherein determining the head pose information based at least in part on the image data corresponding to the eye region comprises determining the head pose information based at least in part on a particular portion of the image data that excludes a user nose tip and user ears.

8. The sensor device of any of claims 1-7, wherein:

the eye region, for the at least one eye, comprises eye features comprising one or more of an iris, a pupil, a sclera, an upper eyelid, a lower eyelid, an inner canthus, or an outer canthus; and

determining the head pose information is further based at least in part on a portion of the image data that corresponds to the eye features.
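
Illustrative aside (not part of the claims): the eye features enumerated in claim 8 can be pictured as 2-D image landmarks, and flattening their coordinates yields one plausible form of the "feature vector" referenced in claim 5. The representation below is an assumption, not the claimed one.

```python
# Hypothetical landmark container mirroring the eye features of claim 8;
# the flattened array is one possible feature vector per claim 5.
from dataclasses import dataclass, astuple
from typing import Tuple
import numpy as np

Point = Tuple[float, float]  # (x, y) in image coordinates

@dataclass
class EyeLandmarks:
    iris: Point
    pupil: Point
    sclera: Point          # representative sclera point
    upper_eyelid: Point    # upper-eyelid apex
    lower_eyelid: Point    # lower-eyelid apex
    inner_canthus: Point   # inner eye corner
    outer_canthus: Point   # outer eye corner

    def feature_vector(self) -> np.ndarray:
        # Flatten landmark coordinates into a single vector suitable
        # for a machine-learning technique.
        return np.asarray(astuple(self), dtype=np.float32).ravel()
```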

9. The sensor device of any of claims 1-8, wherein:

the image data is first image data; and

the processor is further configured to execute the computer-executable instructions to at least:

before accessing the gaze information, access second image data from the image sensor, the second image data corresponding to the eye region; and

determine the gaze information based at least in part on the second image data.

10. A computer-implemented method, comprising:

accessing an indication that at least one eye is in a gaze state;

in response to accessing the indication, accessing image data corresponding to an eye region that includes the at least one eye in the gaze state; and

determining head pose information based at least in part on the image data corresponding to the eye region.

11. The computer-implemented method of claim 10, wherein the head pose information defines at least a head position and a head orientation.

12. The computer-implemented method of claim 11, wherein the head position corresponds to a spatial position within a first three-dimensional space and the head orientation corresponds to a spatial orientation within a second three-dimensional space.
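
Illustrative aside (not part of the claims): one common reading of claim 12 is that the head position lives in a translational 3-space (e.g., meters in a camera or world frame) while the head orientation lives in a separate rotational 3-space (e.g., yaw, pitch, and roll angles). The sketch below maps a point of that orientation space to a rotation matrix; the Z-Y-X Euler convention is an assumption.

```python
# Maps a point of the orientation 3-space (Euler angles, radians) to a
# 3x3 rotation matrix; the Z-Y-X convention here is an assumption.
import numpy as np

def rotation_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx  # applied in roll, then pitch, then yaw order
```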

13. The computer-implemented method of any of claims 10-12, further comprising navigating on a user device based at least in part on the head pose information.

14. The computer-implemented method of any of claims 10-13, further comprising, prior to accessing the indication that the at least one eye is in the gaze state:

accessing other image data corresponding to the eye region;

determining gaze information corresponding to the at least one eye based at least in part on the other image data; and

determining the indication that the at least one eye is in the gaze state based at least in part on the gaze information.

15. The computer-implemented method of any of claims 10-14, wherein determining the head pose information is based at least in part on detecting one or more eye features of the at least one eye based at least in part on the image data, the one or more eye features comprising one or more of an iris, a pupil, a sclera, an upper eyelid, a lower eyelid, an inner canthus, or an outer canthus.

16. The computer-implemented method of any of claims 10-14, wherein determining the head pose information comprises:

determining a yaw component of a head pose based at least in part on an inner canthus and an outer canthus of the at least one eye represented by the image data;

determining a roll component of the head pose based at least in part on an angle of a line extended between the inner canthus and the outer canthus of the at least one eye; and

determining a pitch component of the head pose based at least in part on one or more of an upper eyelid or a lower eyelid of the at least one eye.
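
Illustrative aside (not part of the claims): claim 16 decomposes head pose into yaw, roll, and pitch components derived from eye landmarks. The sketch below shows one geometric reading of those steps; the specific formulas, the clamping, and the per-user neutral_* calibration constants are all assumptions, not the claimed method.

```python
# One geometric reading of claim 16; formulas and calibration constants
# are assumptions, not the claimed method itself.
import math

def head_pose_components(inner_canthus, outer_canthus,
                         upper_eyelid, lower_eyelid,
                         neutral_eye_width, neutral_eye_opening):
    """Landmarks are (x, y) pixel coordinates for a single eye; the
    neutral_* values are hypothetical frontal-view calibration constants."""
    dx = outer_canthus[0] - inner_canthus[0]
    dy = outer_canthus[1] - inner_canthus[1]

    # Roll: angle of the line extended between the inner and outer canthus.
    roll = math.atan2(dy, dx)

    # Yaw: foreshortening of the inter-canthus distance relative to the
    # frontal eye width (sign disambiguation omitted in this sketch).
    width = math.hypot(dx, dy)
    yaw = math.acos(max(-1.0, min(1.0, width / neutral_eye_width)))

    # Pitch: change in the eyelid opening relative to its frontal value.
    opening = lower_eyelid[1] - upper_eyelid[1]
    pitch = math.asin(max(-1.0, min(1.0,
        (opening - neutral_eye_opening) / neutral_eye_opening)))

    return yaw, pitch, roll
```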

17. One or more non-transitory computer-readable storage devices storing computer-executable instructions that, when executed by one or more computer systems, cause the one or more computer systems to perform operations comprising:

instructing collection of image data corresponding to an eye region that includes at least one eye of a user;

inputting the image data into an artificial intelligence model; and

determining at least one of a head position or a head orientation based at least in part on output from the artificial intelligence model.
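
Illustrative aside (not part of the claims): the operations of claim 17 reduce to collect, infer, and decode. The sketch below assumes a generic callable model whose six outputs are laid out as (x, y, z, yaw, pitch, roll); that layout and the preprocessing are assumptions.

```python
# Hypothetical claim 17 pipeline; preprocessing and the 6-value output
# layout (x, y, z, yaw, pitch, roll) are assumptions.
import numpy as np

def estimate_head_pose(eye_region_image: np.ndarray, model) -> dict:
    """eye_region_image: HxWx3 uint8 crop that includes at least one eye;
    model: any callable mapping a normalized image to six floats."""
    # Normalize pixel values before inputting the image data to the model.
    x = eye_region_image.astype(np.float32) / 255.0
    out = np.asarray(model(x)).ravel()
    # Determine head position and head orientation from the model output.
    return {"position": out[:3], "orientation": out[3:6]}
```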

18. The one or more non-transitory computer-readable storage devices of claim 17, wherein the artificial intelligence model is trained using annotated image data corresponding to eye regions obtained with a known gaze position, a known head position, and a known head orientation.

19. The one or more non-transitory computer-readable storage devices of any of claims 17-18, wherein the image data excludes data corresponding to facial features disposed outside the eye region.

20. The one or more non-transitory computer-readable storage devices of any of claims 17-19, wherein the operations further comprise determining at least one of a gaze point or a gaze direction based at least in part on the output from the artificial intelligence model.