WO2020222873 - INTENDED INPUT TO A USER INTERFACE FROM DETECTED GESTURE POSITIONS

CLAIMS

What is claimed is:

1. A computing device (102) comprising:

a user interface (122) having a display (124);

a position-detection mechanism (126);

a processor (114); and

a computer-readable media (118) having executable instructions of a position-detection manager (120) that, when executed by the processor (114), direct the computing device (102) to:

detect, through the position-detection mechanism (126), positions (210, 212, 214) associated with a gesture, the gesture made by a user of the computing device (102) relative to the user interface (122);

associate, to the detected positions (210, 212, 214), a timing profile (300);

determine, using the associated timing profile (300), an intended input by the user; and

perform an operation corresponding to the determined, intended input.

2. The computing device of claim 1, wherein the timing profile identifies a static position associated with the gesture and determining the intended input is based, in part, on the identified, static position.
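
As a non-normative illustration of claims 1 and 2, the sketch below shows one way a timing profile could be associated with detected positions and a static (dwell) position extracted from it. The type name, thresholds, and dwell heuristic are assumptions of this example and are not taken from the patent.

```python
# Illustrative sketch only; TimedPosition, STATIC_RADIUS_PX and STATIC_DWELL_MS
# are hypothetical names, not terms from the patent.
from dataclasses import dataclass

@dataclass
class TimedPosition:
    x: float      # horizontal coordinate reported by the position-detection mechanism
    y: float      # vertical coordinate
    t_ms: float   # timestamp; together these form the timing profile of the gesture

STATIC_RADIUS_PX = 8.0    # positions this close together are treated as one location
STATIC_DWELL_MS = 150.0   # minimum dwell time for a location to count as static

def find_static_position(samples: list[TimedPosition]) -> TimedPosition | None:
    """Return the first position where the gesture dwells (a 'static position'),
    or None if the gesture never pauses long enough."""
    i = 0
    while i < len(samples):
        j = i
        # Grow a window of samples that stay within STATIC_RADIUS_PX of samples[i].
        while (j + 1 < len(samples)
               and abs(samples[j + 1].x - samples[i].x) <= STATIC_RADIUS_PX
               and abs(samples[j + 1].y - samples[i].y) <= STATIC_RADIUS_PX):
            j += 1
        if samples[j].t_ms - samples[i].t_ms >= STATIC_DWELL_MS:
            return samples[i]  # the intended input: where the user held still
        i = j + 1
    return None
```

Performing the operation (the final step of claim 1) would then amount to dispatching whatever action is bound to the user-interface element at the returned position.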

3. The computing device of claim 1 or claim 2, wherein:

the computing device detects, using a sensor of the computing device, a context associated to the detected positions, wherein the sensor is a GPS sensor, a proximity sensor, an accelerometer, a radar sensor, a radio-frequency identification (RFID) sensor, or a near-field communication (NFC) sensor; and

the computing device further uses the detected context to determine the intended input.

4. The computing device of any of claims 1-3, wherein the position-detection mechanism includes:

an input mechanism that is a mouse, a mouse-scrolling wheel, a touchpad, a touchpad button, or a touchpad joystick; and

determining the intended input includes using a gauge capability of the mouse, the mouse-scrolling wheel, the touchpad, the touchpad button, or the touchpad joystick.

5. The computing device of any of claims 1-3, wherein the position-detection mechanism includes an input mechanism that is a radar sensor and determining the intended input includes using a gauge capability of the radar sensor.

6. The computing device of any of claims 1-3, wherein the position-detection mechanism includes an input mechanism that is an image sensor and determining the intended input includes using a gauge capability of the image sensor.

7. The computing device of any of claims 1-3, wherein the user interface including the display combines with the position-detection mechanism to form a touchscreen.

8. The computing device of claim 7, wherein the position-detection mechanism uses a capacitive, resistive, reflective, or grid-interruption technology and determining the intended input includes using a gauge capability of the position-detection mechanism for the capacitive, reflective, or grid-interruption technology.

9. The computing device of any preceding claim, wherein:

the computing device includes a sensor that detects a condition surrounding the computing device;

the computing device uses the detected condition to determine a context that is associated to the detected positions; and

the computing device further uses the determined context to determine the intended input.

10. The computing device of claim 9, wherein the detected condition is a location of the computing device.

11. The computing device of claim 9, wherein the detected condition is an identity of a user of the computing device.

12. A method performed by a computing device (102) comprising:

detecting positions (210, 212, 214) associated with a gesture, the gesture made by a user of the computing device (102) and the gesture made relative to a touchscreen of the computing device;

associating, to the detected positions (210, 212, 214), a timing profile (300);

determining, based on the detected positions (210, 212, 214) and the timing profile (300), an intended input by the user; and

performing an operation corresponding to the determined, intended input.

13. The method of claim 12, wherein associating the timing profile to the detected positions identifies a static position associated with a portion of the gesture.

14. The method of claim 13, wherein the determined, intended input corresponds to the identified, static position.

15. The method of claim 12, wherein associating the timing profile to the detected positions identifies a motion vector associated with the gesture.

16. The method of claim 15, wherein the determined, intended input corresponds to a weighted average of ones of the detected positions associated with a low relative-velocity portion of the motion vector.
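
As a non-normative illustration of claims 15 and 16, the sketch below uses per-sample speed as a stand-in for the local magnitude of the motion vector and averages the detected positions in the low relative-velocity portion, weighting the slowest samples most heavily. It reuses the hypothetical TimedPosition type from the sketch after claim 2; the cutoff and weighting scheme are assumptions of this example.

```python
# Illustrative sketch only; the velocity cutoff and the 1/(v + EPS) weighting
# are assumptions of this example, not values taken from the patent.
EPS = 1e-3                 # avoids division by zero for perfectly still samples
LOW_VELOCITY_PX_MS = 0.05  # samples slower than this are "low relative-velocity"

def weighted_intended_position(samples: list[TimedPosition]) -> tuple[float, float] | None:
    """Average the detected positions in the low-velocity portion of the gesture,
    weighting the slowest samples most heavily (claim 16)."""
    weighted_x = weighted_y = total_w = 0.0
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t_ms - prev.t_ms
        if dt <= 0:
            continue
        # Per-sample speed approximates the local magnitude of the motion vector.
        v = ((cur.x - prev.x) ** 2 + (cur.y - prev.y) ** 2) ** 0.5 / dt
        if v <= LOW_VELOCITY_PX_MS:
            w = 1.0 / (v + EPS)   # slower movement -> larger weight
            weighted_x += w * cur.x
            weighted_y += w * cur.y
            total_w += w
    if total_w == 0.0:
        return None
    return weighted_x / total_w, weighted_y / total_w
```

The returned coordinate pair would serve as the determined, intended input for the remaining steps of claim 12.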

17. A method performed by a computing device comprising:

detecting positions associated with a gesture, the gesture made by a user of the computing device and the positions detected relative to a touchscreen of the computing device;

associating, to the detected positions, a context;

determining, based on the detected positions and the associated context, an intended input by the user; and

performing an operation corresponding to the determined, intended input.

18. The method of claim 17, wherein determining the intended input uses a machine-learning algorithm executed by a processor of the computing device, wherein the machine-learning algorithm accounts for variables that include a past behavior of the user, a location of the computing device, or a time of day.

19. The method of claim 18, wherein the machine-learning algorithm adheres to a neural network model.
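
As a non-normative illustration of claims 18 and 19, the sketch below trains a small feed-forward neural network on the context variables named in claim 18 (past behaviour of the user, location of the computing device, and time of day) and uses it to pick the intended input. The feature encoding, the choice of scikit-learn's MLPClassifier, and the toy data are all assumptions of this example.

```python
# Illustrative sketch only; the feature layout, toy data, and use of
# scikit-learn's MLPClassifier are assumptions about one possible
# realisation of claims 18-19.
from sklearn.neural_network import MLPClassifier
import numpy as np

def context_features(hour_of_day: int, lat: float, lon: float,
                     past_choice_rate: float) -> np.ndarray:
    """Encode the context variables named in claim 18: time of day, device
    location, and a summary of the user's past behaviour."""
    return np.array([hour_of_day / 23.0, lat / 90.0, lon / 180.0, past_choice_rate])

# Hypothetical training data: each row is a context, each label the on-screen
# target the user actually meant.
X_train = np.stack([
    context_features(8, 37.4, -122.1, 0.9),
    context_features(22, 37.4, -122.1, 0.2),
])
y_train = np.array([0, 1])

# A small feed-forward neural network model, per claim 19.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# At gesture time, an ambiguous set of detected positions is disambiguated by
# the context: the predicted class identifies the intended input.
intended = model.predict([context_features(9, 37.4, -122.1, 0.85)])[0]
```

The predicted class plays the role of the determined, intended input, which the device then acts on as in the final step of claim 17.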

20. The method of any of claims 17-19, wherein performing the operation that corresponds to the determined, intended input includes providing the input to an application, the provided input launching the application, selecting a variable presented by the application, or terminating the application, and wherein the application is accessed through a cloud-based service provider.