
WO2020141330 - MID-AIR HAPTIC TEXTURES

Note: This text is based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters.


CLAIMS

We claim:

1. A method for generating mid-air haptic textures comprising:

importing a texture image having macro texture features Mi and micro texture features mi by an offline component;

extracting, by the offline component, Mi and mi from the texture image and applying haptic mapping to sensation si where H[si] = f(si, mi, Mi) to generate an offline component output;

on an iterative basis:

a) detecting, by a mid-air tracking system, collision points between a moving human hand and a virtual object xi;

b) calculating, by the mid-air tracking system, the center of mass of the collision points X=m(xi);

c) establishing, by the mid-air tracking system, a projection of H[si] onto X depending in part on the offline component output.
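
Read together, the steps of claim 1 amount to a simple per-frame loop: gather the collision points, average them to get X, and project the pre-computed sensation there. A minimal Python sketch follows; the tracker and renderer interfaces (get_collision_points, project) and the haptic_map lookup are hypothetical, since the claim does not specify an API.

```python
import numpy as np

def render_step(tracker, renderer, haptic_map):
    # a) collision points x_i between the moving hand and the virtual object
    points = np.asarray(tracker.get_collision_points())  # shape (n, 3)
    if points.size == 0:
        return
    # b) center of mass of the collision points, X = m(x_i)
    X = points.mean(axis=0)
    # c) project the mapped sensation H[s_i] (the offline component output,
    #    queried at X) onto the center of mass
    renderer.project(haptic_map(X), at=X)
```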

2. The method as in claim 1, wherein the center of mass of the collision points is not on the human hand.

3. The method as in claim 1, wherein extracting Mi and mi from the texture image uses a displacement map.

4. The method as in claim 3, wherein extracting Mi and mi from the texture image avoids possible sensory conflicts due to inconsistency between the mid-air haptic texture and its visual representation.

5. The method as in claim 3, wherein mi is calculated using a 2D autocorrelation function for a high-resolution image.

6. The method as in claim 5, wherein the 2D autocorrelation function establishes whether an image contains periodic features.

7. The method as in claim 5, wherein the 2D autocorrelation function is obtained by taking a discrete Fourier transform of the texture image and multiplying each coefficient with its complex conjugate before taking an inverse discrete Fourier transform.

8. The method as in claim 5, wherein the 2D autocorrelation function is obtained by a power spectral density function that measures energy at each spatial scale.
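
Claims 7 and 8 together describe the standard FFT route to the autocorrelation (the Wiener-Khinchin relation): multiplying each DFT coefficient by its complex conjugate gives the power spectral density, and its inverse transform is the autocorrelation. A minimal NumPy sketch, with mean subtraction and normalization added as assumptions:

```python
import numpy as np

def autocorrelation_2d(image: np.ndarray) -> np.ndarray:
    # Subtract the mean so the zero-frequency term does not dominate (assumption).
    f = np.fft.fft2(image - image.mean())    # discrete Fourier transform (claim 7)
    psd = f * np.conj(f)                     # power spectral density (claim 8)
    acf = np.fft.ifft2(psd).real             # inverse DFT -> autocorrelation
    return np.fft.fftshift(acf / acf.max())  # center zero lag, normalize to 1
```

Secondary peaks in the result indicate the periodic features referred to in claim 6.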

9. The method as in claim 3, wherein Mi is calculated by setting a circle haptic intensity to be proportional to the displacement map value DM(X).

10. The method as in claim 9, wherein strength of a haptic sensation is modulated between a minimum value corresponding to a vibrotactile threshold and a maximum output power of a mid-air haptic generator.
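
A sketch of the mapping in claims 9 and 10: the haptic intensity at the contact point is proportional to the displacement map value DM(X), rescaled between a minimum corresponding to the vibrotactile threshold and the device's maximum output. The threshold values and the [0, 1] range of the displacement map are assumptions.

```python
import numpy as np

def circle_intensity(dm: np.ndarray, X, i_min: float = 0.2, i_max: float = 1.0) -> float:
    # dm is the displacement map, assumed normalized to [0, 1];
    # X is the center of mass of the collision points in texture coordinates.
    u, v = int(round(X[0])), int(round(X[1]))
    value = float(dm[v, u])  # DM(X)
    # Modulate between the vibrotactile threshold (i_min) and the
    # device's maximum output power (i_max), per claim 10.
    return i_min + (i_max - i_min) * value
```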

11. A method of rendering roughness in visuo-haptic mid-air textures, comprising:

(1) forming a haptic prediction model by:

(a) training a visual texture dimension machine learning prediction and classification model;

(b) calculating a linear regression process to match visual texture dimension prediction values to validated specific haptic sensation attributes to produce a particular haptic texture dimension;

(2) using the prediction and classification model for the visual texture dimension prediction values, developing an audio database and an associated rendering method to produce dynamic auditory feedback that is tied to visual features within a texture image.

12. The method of claim 11, wherein training a visual texture dimension machine learning prediction and classification model comprises obtaining image texture data based on the subjective observations of a plurality of test subjects.

13. The method of claim 12, further comprising:

converting the image texture data to grayscale using constant scaling.

14. The method of claim 13, further comprising:

calculating gray-level co-occurrence matrices for a plurality of displacement vectors.

15. The method of claim 14, further comprising:

calculating matrices for correct distance values and pre-determined angles;

summing and averaging transpose matrices to produce a symmetric and semi-direction-invariant matrix that is normalized so as to contain estimated probabilities for each pixel co-occurrence.
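
Claims 13 through 15 describe a standard gray-level co-occurrence matrix (GLCM) pipeline. One possible implementation uses scikit-image, whose graycomatrix function already performs the transpose-summing (symmetric=True) and probability normalization (normed=True) that claim 15 recites; the grayscale weights and the distance/angle values below are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix

def texture_glcm(rgb: np.ndarray) -> np.ndarray:
    # Constant-scaling grayscale conversion (claim 13); rgb assumed uint8 in [0, 255].
    gray = (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
    # Co-occurrence matrices for several displacement vectors (claim 14):
    # example distance values and four pre-determined angles (claim 15).
    return graycomatrix(gray,
                        distances=[1, 2, 4],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
```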

16. The method of claim 11, wherein calculating a linear regression process uses a network of convolutional layers with Rectified Linear Unit activations and He normal kernel initializers.

17. The method of claim 16, further comprising generating an output layer using a sigmoid activation function to output a predicted value of subjective roughness.
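
A minimal Keras sketch matching claims 16 and 17: convolutional layers with ReLU activations and He normal kernel initializers, ending in a single sigmoid unit that outputs the predicted subjective roughness. The layer counts, filter sizes, and input shape are assumptions; the patent does not fix an architecture.

```python
import tensorflow as tf

def roughness_model(input_shape=(128, 128, 1)) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               kernel_initializer="he_normal"),  # claim 16
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu",
                               kernel_initializer="he_normal"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),          # claim 17
    ])
```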

18. The method of claim 11, wherein using the prediction and classification model for the visual texture dimension prediction values comprises converting a prediction value to a draw frequency of a haptic sensation using a linear regression approach.
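
Claim 18's conversion is a linear map from the model's output to a draw frequency. A one-line sketch with an assumed frequency range, since the patent gives no coefficients:

```python
def draw_frequency(prediction: float, f_min: float = 20.0, f_max: float = 120.0) -> float:
    # Linearly map a roughness prediction in [0, 1] to a draw frequency in Hz.
    return f_min + (f_max - f_min) * prediction
```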

19. The method of claim 11, wherein dynamic auditory feedback that is tied to visual features within a texture image includes using audio intensity and frequency modulation based on variations in local features contained within the image texture.

20. The method of claim 19, further comprising:

rendering the dynamic auditory feedback via parametric audio.
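
One way to realize claims 19 and 20 is to sample a local feature value along the hand's path and use it to modulate both the amplitude and the instantaneous frequency of a tone, which a parametric-audio array then reproduces. The carrier frequency, modulation depth, and audio-rate feature signal below are all assumptions:

```python
import numpy as np

def feature_audio(features: np.ndarray, sr: int = 44100,
                  f0: float = 440.0, depth: float = 200.0) -> np.ndarray:
    # features: local image-feature values in [0, 1], resampled to audio rate.
    inst_freq = f0 + depth * features              # frequency modulation
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr  # integrate frequency to phase
    return features * np.sin(phase)                # intensity (amplitude) modulation
```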