WO2020205767 - METHODS AND APPARATUS FOR GESTURE DETECTION AND CLASSIFICATION


WHAT IS CLAIMED IS:

1. A system comprising:

a head-mounted device configured to present an artificial reality view to a user;

a control device comprising a plurality of electromyography (EMG) sensors comprising electrodes that contact the skin of the user when the control device is worn by the user;

at least one physical processor; and

physical memory including computer-executable instructions that, when executed by the physical processor, cause the physical processor to:

process one or more EMG signals as detected by the EMG sensors;

classify the processed one or more EMG signals into one or more gesture types; and

provide control signals based on the gesture types, wherein the control signals trigger the head-mounted device to modify at least one aspect of the artificial reality view.
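For illustration only, the process-classify-control chain recited in claim 1 might be organized as in the minimal Python sketch below. This is an editorial aid, not the claimed implementation; all four callables are hypothetical placeholders.

    def control_step(read_emg_window, preprocess, classifier, send_control):
        # One pass of the claimed chain: process -> classify -> control.
        # read_emg_window, preprocess, classifier and send_control stand in
        # for EMG acquisition, signal processing, the trained classifier
        # model, and the link to the head-mounted device, respectively.
        window = read_emg_window()          # raw EMG, e.g. (channels, samples)
        features = preprocess(window)       # processed EMG signals
        gesture = classifier(features)      # one of the trained gesture types
        send_control({"gesture": gesture})  # headset modifies the AR view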

2. The system of claim 1, wherein the at least one physical processor is located within the control device.

3. The system of claim 1, wherein the at least one physical processor is located within the head-mounted device, or within an external computer device in communication with the control device.

4. The system of claim 1, wherein the computer-executable instructions, when executed by the physical processor, cause the physical processor to classify the processed EMG signals into one or more gesture types using a classifier model.

5. The system of claim 4, wherein the classifier model is trained using training data including a plurality of EMG training signals for the one or more gesture types.

6. The system of claim 5, wherein the training data is obtained from a plurality of users.

7. The system of claim 1, wherein the head-mounted device comprises a virtual reality headset or an augmented reality device.

8. A method comprising:

obtaining one or more electromyography (EMG) signals from a user;

processing the one or more EMG signals to generate associated feature data;

classifying the associated feature data into one or more gesture types using a classifier model; and

providing a control signal to an artificial reality (AR) device, based on the one or more gesture types,

wherein the classifier model is trained using training data including a plurality of EMG training signals for the one or more gesture types.
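The feature-generation step of claim 8 is left open; a minimal sketch, assuming simple time-domain EMG features (mean absolute value, root mean square, and zero crossings, a common but here merely illustrative choice), could look like this:

    import numpy as np

    def emg_features(window, zc_threshold=0.01):
        # window: (channels, samples) array of raw EMG samples.
        mav = np.abs(window).mean(axis=1)          # mean absolute value
        rms = np.sqrt((window ** 2).mean(axis=1))  # root mean square
        step = np.diff(window, axis=1)
        crossings = (np.diff(np.sign(window), axis=1) != 0) & (np.abs(step) > zc_threshold)
        zc = crossings.sum(axis=1)                 # zero-crossing count
        return np.concatenate([mav, rms, zc])      # one feature vector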

9. The method of claim 8, wherein the classifier model is trained by clustering feature data determined from EMG training signals.

10. The method of claim 8, wherein the classifier model is trained using EMG training signals obtained from a plurality of users.

11. The method of claim 10, wherein the plurality of users does not include the user.

12. The method of claim 8, further comprising training the classifier model by:

obtaining EMG training signals corresponding to a gesture type; and

training the classifier model by clustering EMG training data obtained from the EMG training signals.
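One plausible reading of the clustering step in claim 12 is sketched below with scikit-learn's KMeans; treating the cluster centers as gesture prototypes is an assumption made for illustration, not the patent's specified method.

    import numpy as np
    from sklearn.cluster import KMeans

    def train_by_clustering(training_features, n_clusters=4):
        # training_features: (examples, features) array derived from the
        # EMG training signals; n_clusters is a hypothetical choice.
        model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        model.fit(training_features)
        return model  # model.cluster_centers_ act as gesture prototypes

    # Usage: assign new feature data to its nearest learned cluster, e.g.
    # label = model.predict(features.reshape(1, -1))[0]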

13. The method of claim 8, wherein the classifier model is further trained by:

determining the time dependence of EMG training signals relative to a time of a respective EMG training signal maximum;

aligning the time dependence of a plurality of EMG training signals by adding a time offset to at least one EMG training signal of the plurality of EMG training signals;

obtaining a signal characteristic from the aligned plurality of EMG training signals; and

training the classifier model to detect EMG signals having the signal characteristic.
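The alignment in claim 13 can be read as peak alignment. The sketch below shifts each training signal so its maximum lands at a common index and then takes the mean aligned waveform as one possible signal characteristic; both choices are assumptions for illustration.

    import numpy as np

    def align_to_peak(signals):
        # signals: equal-length 1-D EMG training signals.
        signals = [np.asarray(s, dtype=float) for s in signals]
        peaks = [int(np.argmax(np.abs(s))) for s in signals]
        target = peaks[0]  # align every signal to the first signal's peak
        # np.roll applies the claimed time offset (circular shift, for brevity).
        return np.stack([np.roll(s, target - p) for s, p in zip(signals, peaks)])

    def signal_characteristic(aligned):
        # One possible characteristic: the mean aligned waveform.
        return aligned.mean(axis=0)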

14. The method of claim 8, wherein the classifier model is further trained by:

obtaining training data including EMG training signals corresponding to a gesture type; and

averaging the EMG training signals corresponding to each occurrence of the gesture type to obtain a gesture model for the gesture type, wherein the classifier model uses the gesture model to classify EMG signals.
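The averaging in claim 14 suggests template-style gesture models: average the data recorded for each occurrence of a gesture, then compare new signals against the resulting templates. A minimal sketch, with all names hypothetical:

    import numpy as np

    def build_gesture_models(training_data):
        # training_data: dict mapping gesture type -> list of feature
        # vectors, one per occurrence of that gesture.
        return {g: np.mean(np.stack(v), axis=0) for g, v in training_data.items()}

    def classify(features, gesture_models):
        # Return the gesture whose averaged model is nearest to the input.
        return min(gesture_models,
                   key=lambda g: np.linalg.norm(features - gesture_models[g]))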

15. The method of claim 14, wherein the gesture model is a user-specific gesture model for the gesture type.

16. The method of claim 14, wherein the gesture model is a multiple user gesture model based on EMG training data obtained from a plurality of users,

the multiple user gesture model being a combination of a plurality of user-specific gesture models.
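Claim 16 leaves the form of the combination open; averaging user-specific models, as below, is just one assumed possibility.

    import numpy as np

    def multi_user_model(user_models):
        # user_models: equal-shape user-specific gesture models (arrays).
        return np.mean(np.stack(user_models), axis=0)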

17. The method of claim 8, wherein the artificial reality device comprises a head-mounted device configured to present an artificial reality image to a user, the method further comprising modifying the artificial reality image based on the control signal.

18. The method of claim 17, wherein modifying the artificial reality image comprises selection or control of an object in the artificial reality image based on the gesture type.

19. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to:

receive one or more electromyography (EMG) signals as detected by EMG sensors;

process the one or more EMG signals to identify one or more features corresponding to a user gesture type;

use the one or more features to classify the one or more EMG signals into the gesture type;

provide a control signal based on the gesture type; and

transmit the control signal to a head-mounted device to trigger modification of an artificial reality view in response to the control signal.

20. The non-transitory computer-readable medium of claim 19, wherein the computing device is configured to classify the EMG signals to identify the gesture type based on a gesture model determined from training data obtained from a plurality of users.