(WO2019048820) A METHOD OF MODIFYING AN IMAGE ON A COMPUTATIONAL DEVICE

Claims

1. A method of modifying an image on a computational device, the method comprising: providing image data representative of at least a portion of a three-dimensional scene, the scene being visible to a human observer from a viewing point when fixating on a visual fixation point within the scene;

displaying an image by rendering the image data on a display device;

computationally processing the image data so as to enclose each object of the three-dimensional scene in a three-dimensional detection region which is configured to identify coincidence of the respective object with the visual fixation point;

capturing user input by user input capturing means, wherein the capturing comprises monitoring a point of gaze of a user so as to determine a spatial coordinate in the three-dimensional scene, the coordinate representing a movable visual fixation point of the human observer;

modifying the image by:

computationally isolating a fixation region within the image, the fixation region being defined by a subset of image data representing an image object within the image, wherein the image object is associated with the visual fixation point;

spatially reconstructing the subset of image data to computationally expand the fixation region;

spatially reconstructing the remaining image data relative to the subset of image data to computationally compress a peripheral region of the image relative to the fixation region in a progressive fashion as a function of a distance from the fixation region;

determining a distance between a head of the user and the display device;

computationally processing the image data so as to move the fixation region towards a centre of a display of the display device, wherein the fixation region represents the object enclosed by the respective detection region;

wherein the computational expansion of the fixation region and the computational compression of the peripheral region are modulated by the distance between the head of the user and the display device.
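
By way of an illustrative sketch only, and not the claimed implementation, the expansion and compression of claim 1 can be pictured as a radial remapping of pixel coordinates around the point of gaze, with the degree of magnification modulated by the measured head-to-display distance. All names, the linear peripheral compression, and the particular distance modulation below are assumptions; NumPy is assumed to be available.

```python
import numpy as np

def warp_around_fixation(image, gaze_xy, fixation_radius=80,
                         base_gain=0.5, head_distance_mm=600,
                         reference_distance_mm=600):
    """Hypothetical sketch: expand a fixation region and progressively
    compress the periphery, modulated by head-to-display distance."""
    h, w = image.shape[:2]
    gx, gy = gaze_xy

    # Magnification grows as the head moves closer to the display
    # (one possible modulation; the claim only requires *some* modulation).
    magnification = 1.0 + base_gain * (reference_distance_mm / head_distance_mm)

    ys, xs = np.indices((h, w), dtype=np.float64)
    dx, dy = xs - gx, ys - gy
    dst_r = np.hypot(dx, dy)
    r_max = dst_r.max()
    r_edge = fixation_radius * magnification          # expanded fixation boundary

    # Inverse mapping: output radius -> source radius.
    inside = dst_r <= r_edge
    src_r = np.empty_like(dst_r)
    src_r[inside] = dst_r[inside] / magnification      # expanded fixation region
    # Periphery: compress progressively with distance so the image border stays put.
    t = (dst_r[~inside] - r_edge) / max(r_max - r_edge, 1e-9)
    src_r[~inside] = fixation_radius + t * (r_max - fixation_radius)

    scale = np.divide(src_r, dst_r, out=np.ones_like(dst_r), where=dst_r > 0)
    src_x = np.clip(gx + dx * scale, 0, w - 1).astype(int)
    src_y = np.clip(gy + dy * scale, 0, h - 1).astype(int)
    return image[src_y, src_x]                         # nearest-neighbour resampling
```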

2. A method according to claim 1, wherein a size of the detection region can be adjusted.

3. A method according to any of claims 1 to 2, wherein the detection region extends beyond the boundaries of the respective object.

4. A method according to claim 3, wherein a detection sensitivity is defined for the detection region, the detection sensitivity defining an extent to which the respective object is identified as coinciding with the visual fixation point, wherein the detection region has the lowest detection sensitivity proximate the boundaries of the detection region.

5. A method according to claim 4, wherein the detection sensitivity is inversely proportional to a distance from the boundaries of the respective object.
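
Under one possible reading of claims 2 to 5, the detection region could be realised as an axis-aligned margin around each object, with a sensitivity that decays inversely with distance from the object's boundary and is lowest near the outer edge of the detection region. The sketch below illustrates that reading only; the margin value and the 1/(1+d) fall-off are assumptions, not taken from the application.

```python
import numpy as np

def detection_sensitivity(point, obj_min, obj_max, margin=0.3):
    """Hypothetical sketch of claims 2-5: a detection region extending
    `margin` units beyond an axis-aligned object, with sensitivity falling
    off inversely with distance from the object's boundary."""
    p = np.asarray(point, dtype=float)
    lo, hi = np.asarray(obj_min, float), np.asarray(obj_max, float)

    # Outside the enlarged detection region: the object is not identified at all.
    if np.any(p < lo - margin) or np.any(p > hi + margin):
        return 0.0

    # Distance from the point to the object's bounding box (0 if inside the object).
    d = np.linalg.norm(np.maximum(np.maximum(lo - p, p - hi), 0.0))

    # One possible "inversely proportional" fall-off: maximal on the object
    # itself, lowest near the boundary of the detection region.
    return 1.0 / (1.0 + d)

# Example: gaze point just outside a unit cube centred at the origin.
print(detection_sensitivity([0.6, 0.0, 0.0], [-0.5, -0.5, -0.5], [0.5, 0.5, 0.5]))
```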

6. A method according to any preceding claim, wherein the steps of providing the image data and computationally processing the image data are performed before the step of displaying the image on the display device.

7. A method according to any preceding claim, further comprising a step of detecting a motion of the head of the user relative to the display device.

8. A method according to claim 7, further comprising a step of computationally moving the peripheral region relative to the fixation region in accordance with the motion of the head of the user so as to emulate a moving field of view of the human observer while maintaining a position of the visual fixation point.
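
One way to picture claims 7 and 8 is a weighted offset that leaves the fixation region anchored while translating peripheral content in accordance with the detected head motion. The function below is a hypothetical sketch of that idea; the linear weighting ramp and the parameter names are assumptions.

```python
import numpy as np

def parallax_offsets(coords, fixation_xy, head_shift_xy, fixation_radius=80.0):
    """Hypothetical sketch of claims 7-8: shift peripheral content with the
    user's head motion while the visual fixation point stays anchored.

    coords        -- (N, 2) array of image-space points to reposition
    head_shift_xy -- measured head displacement relative to the display
    """
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords - np.asarray(fixation_xy, dtype=float), axis=1)

    # Weight 0 inside the fixation region, ramping up to 1 in the periphery,
    # so the fixation region itself never moves.
    weight = np.clip((d - fixation_radius) / fixation_radius, 0.0, 1.0)
    return coords + weight[:, None] * np.asarray(head_shift_xy, dtype=float)
```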

9. A method according to any preceding claim, further comprising a step of detecting entry of the visual fixation point into the detection region.

10. A method according to any preceding claim, wherein computationally isolating the fixation region comprises predicting an object within the three-dimensional scene upon which the user will fixate based on a velocity value and direction of the movable visual fixation point.

11. A method according to claim 10, wherein the fixation region is moved towards the centre of the display at the same time as the fixation point moves towards the fixation region.
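
Claims 10 and 11 describe predicting the object upon which the user will fixate from the velocity and direction of the moving fixation point. A minimal sketch of one such predictor, using simple linear extrapolation over a short horizon and a nearest-centre criterion, might look as follows; the horizon, the criterion and all names are assumptions.

```python
import numpy as np

def predict_target_object(fixation_pos, fixation_vel, object_centres, horizon_s=0.25):
    """Hypothetical sketch of claim 10: extrapolate the moving fixation point
    along its current velocity and pick the object it is heading towards."""
    p = np.asarray(fixation_pos, float)
    v = np.asarray(fixation_vel, float)
    predicted = p + v * horizon_s                      # simple linear extrapolation

    centres = np.asarray(object_centres, float)
    distances = np.linalg.norm(centres - predicted, axis=1)
    return int(np.argmin(distances))                   # index of the likely target

# Example: fixation point moving along +x towards the second object.
print(predict_target_object([0, 0, 0], [4, 0, 0], [[0, 2, 0], [1.2, 0, 0]]))
```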

12. A method according to any preceding claim, wherein the image data comprises three-dimensional computer-generated data.

13. A computer system configured to implement the steps of the method according to any preceding claim, the system comprising:

user input capturing means configured to capture user input;

a control unit configured to generate processed image data based on the captured user input;

a display device configured to display the processed image data.

14. A system according to claim 13, wherein:

the user input capturing means comprises a user motion sensor configured to capture motion of the user relative to the display device.

15. A system according to any of claims 13 to 14, further comprising:

a graphics processor configured to process the image data so as to generate modified image data.

16. A system according to any of claims 13 to 15, further comprising:

a memory storage configured to store the image data and to communicate the image data to the control unit or, when present, to the graphics processor.
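
The system of claims 13 to 16 can be pictured as the following data flow between the recited components. The sketch is purely illustrative: all class and method names are hypothetical, and the graphics processor stub stands in for the modification of claim 1 (see the sketch after claim 1).

```python
from dataclasses import dataclass

# Hypothetical sketch of the system of claims 13-16; no name below is taken
# from the application itself.

@dataclass
class UserInput:
    gaze_xy: tuple           # point of gaze of the user
    head_distance_mm: float  # distance between the user's head and the display
    head_shift_xy: tuple     # head motion relative to the display

class UserInputCapture:
    """Stands in for the user input capturing means, including the user
    motion sensor of claim 14."""
    def capture(self) -> UserInput:
        return UserInput(gaze_xy=(320, 240), head_distance_mm=600, head_shift_xy=(0, 0))

class MemoryStorage:
    """Stores the image data and hands it to the control unit (claim 16)."""
    def __init__(self, image_data):
        self._image_data = image_data
    def load(self):
        return self._image_data

class GraphicsProcessor:
    """Processes the image data to generate modified image data (claim 15)."""
    def modify(self, image_data, user_input: UserInput):
        # A real implementation would expand the fixation region and
        # compress the periphery here, as in claim 1.
        return image_data

class ControlUnit:
    """Generates processed image data based on the captured user input."""
    def __init__(self, storage, gpu):
        self.storage, self.gpu = storage, gpu
    def process(self, user_input: UserInput):
        return self.gpu.modify(self.storage.load(), user_input)

class DisplayDevice:
    """Displays the processed image data."""
    def show(self, processed_image_data):
        print("displaying", processed_image_data)

# One frame of the pipeline: capture input, process the stored image, display it.
capture, display = UserInputCapture(), DisplayDevice()
control = ControlUnit(MemoryStorage("raw image data"), GraphicsProcessor())
display.show(control.process(capture.capture()))
```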