WO2021061310 - DISPLAYING REPRESENTATIONS OF ENVIRONMENTS

Note: Text based on automatic optical character recognition processes. Only the PDF version has legal value.

What is claimed is:

1. A method comprising:

at an electronic device including one or more processors, a non-transitory memory, one or more input devices, and a display device:

displaying, via the display device, a plurality of diorama-view representations from a corresponding plurality of viewing vectors, wherein the plurality of diorama-view representations corresponds to a plurality of enhanced reality (ER) environments, wherein each of the plurality of diorama-view representations is associated with a respective set of ER world coordinates that characterizes a respective ER environment, wherein the plurality of diorama-view representations includes a first one of the plurality of diorama-view representations displayed from a first viewing vector, and wherein the first one of the plurality of diorama-view representations includes a first one or more ER objects arranged according to a first set of ER world coordinates;

detecting, via the one or more input devices, an input associated with the first one of the plurality of diorama-view representations; and

in response to detecting the input, changing display of the first one of the plurality of diorama-view representations from the first viewing vector to a second viewing vector while maintaining the first one or more ER objects arranged according to the first set of ER world coordinates.
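Claim 1's core operation — re-displaying a diorama from a new viewing vector while leaving its ER objects fixed in their world coordinates — can be sketched as follows. This is an illustrative model only; all names (`ERObject`, `DioramaView`, `change_viewing_vector`) are hypothetical and not part of the claims.

```python
from dataclasses import dataclass

# Illustrative sketch of claim 1: changing a diorama's viewing vector while
# keeping its ER objects fixed in their ER world coordinates. All names here
# are hypothetical, not from the claims.

@dataclass(frozen=True)
class ERObject:
    name: str
    world_position: tuple  # (x, y, z) in the diorama's ER world coordinates

@dataclass(frozen=True)
class DioramaView:
    objects: tuple         # ER objects; never mutated by a view change
    viewing_vector: tuple  # direction from which the diorama is displayed

def change_viewing_vector(view: DioramaView, new_vector: tuple) -> DioramaView:
    """Re-display the diorama from a new viewing vector.

    The ER objects are reused unchanged, mirroring the claim's requirement of
    'maintaining the first one or more ER objects arranged according to the
    first set of ER world coordinates'.
    """
    return DioramaView(objects=view.objects, viewing_vector=new_vector)

# An input (e.g. a hand gesture or device movement) triggers a view change:
scene = DioramaView(
    objects=(ERObject("lamp", (0.1, 0.4, 0.0)), ERObject("chair", (0.3, 0.0, 0.2))),
    viewing_vector=(0.0, 0.0, 1.0),
)
rotated = change_viewing_vector(scene, (1.0, 0.0, 0.0))
assert rotated.objects == scene.objects  # world arrangement preserved
```

The key invariant is that only the `viewing_vector` changes; the object arrangement (world coordinates) is shared between the two views rather than recomputed.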

2. The method of claim 1, wherein the input is directed to the first one of the plurality of diorama-view representations.

3. The method of claim 2, wherein the one or more input devices includes a hand tracking sensor, the method further comprising:

detecting the input via the hand tracking sensor;

obtaining hand tracking data from the hand tracking sensor based on the input; and

determining, from the hand tracking data, that the input is directed to the first one of the plurality of diorama-view representations.

4. The method of claim 2, wherein the one or more input devices includes an eye tracking sensor, the method further comprising:

detecting the input via the eye tracking sensor;

obtaining eye tracking data from the eye tracking sensor based on the input; and

determining, from the eye tracking data, that the input is directed to the first one of the plurality of diorama-view representations.

5. The method of any of claims 1-4, wherein the input corresponds to a change in position of the electronic device from a first pose to a second pose relative to the first one of the plurality of diorama-view representations.

6. The method of claim 5, wherein the one or more input devices includes a positional-change sensor, the method further comprising:

detecting the input via the positional-change sensor;

obtaining positional-change data from the positional-change sensor based on the input; and

determining the change in position of the electronic device based on the positional-change data.

7. The method of any of claims 1-6, wherein the input is also associated with a second one of the plurality of diorama-view representations, and wherein the second one of the plurality of diorama-view representations is displayed from a third viewing vector, and wherein the second one of the plurality of diorama-view representations includes a second one or more ER objects arranged according to a second set of ER world coordinates, the method further comprising, in response to detecting the input, changing display of the second one of the plurality of diorama-view representations from the third viewing vector to a fourth viewing vector while maintaining the second one or more ER objects arranged according to the second set of ER world coordinates.

8. The method of any of claims 1-7, wherein the plurality of diorama-view representations corresponds to reduced-size representations of the corresponding plurality of ER environments.

9. The method of any of claims 1-8, wherein the first one of the plurality of diorama-view representations corresponds to a virtual reality (VR) representation of a first ER environment.

10. The method of any of claims 1-8, wherein the first one of the plurality of diorama-view representations corresponds to an augmented reality (AR) representation of a first ER environment, and wherein the first one of the plurality of diorama-view representations includes AR content overlaid on environmental data that is associated with physical features of the first ER environment.

11. The method of any of claims 1-10, wherein displaying the plurality of diorama-view representations includes animating a portion of the plurality of diorama-view representations.

12. The method of any of claims 1-11, further comprising:

obtaining a plurality of characterization vectors that respectively provide a plurality of spatial characterizations of the corresponding plurality of ER environments, wherein each of the plurality of characterization vectors includes a plurality of object label values that respectively identify one or more ER objects, and wherein each of the plurality of characterization vectors also includes a plurality of relative position values providing respective positions of the one or more ER objects relative to each other; and

generating, from the corresponding plurality of ER environments and the plurality of characterization vectors, the plurality of diorama-view representations of the corresponding plurality of ER environments according to the plurality of relative position values.

13. The method of claim 12, wherein generating the plurality of diorama-view representations includes scaling down the corresponding plurality of ER environments by a scaling factor.

14. The method of claim 13, further comprising obtaining, via the one or more input devices, a scaling request input that specifies the scaling factor.
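The characterization vector of claims 12-13 — object labels paired with relative positions, from which a scaled-down diorama view is generated — can be sketched as a simple data structure. This is an illustrative sketch only; the field and function names (`CharacterizationVector`, `generate_diorama`) are hypothetical, not from the claims.

```python
from dataclasses import dataclass

# Illustrative sketch of claims 12-13: a characterization vector carries
# object label values plus relative position values; a diorama view is
# generated from them and scaled down by a scaling factor. All names here
# are hypothetical.

@dataclass(frozen=True)
class CharacterizationVector:
    object_labels: tuple      # e.g. ("lamp", "chair")
    relative_positions: dict  # (label_a, label_b) -> (dx, dy, dz) offset

def generate_diorama(vector: CharacterizationVector, scaling_factor: float) -> dict:
    """Place objects by chaining relative positions, then scale down (claim 13)."""
    placed = {vector.object_labels[0]: (0.0, 0.0, 0.0)}  # anchor the first object
    for (a, b), offset in vector.relative_positions.items():
        if a in placed and b not in placed:
            placed[b] = tuple(pa + d for pa, d in zip(placed[a], offset))
    return {label: tuple(scaling_factor * c for c in pos)
            for label, pos in placed.items()}

cv = CharacterizationVector(
    object_labels=("lamp", "chair"),
    relative_positions={("lamp", "chair"): (2.0, 0.0, 0.0)},
)
print(generate_diorama(cv, 0.1))  # {'lamp': (0.0, 0.0, 0.0), 'chair': (0.2, 0.0, 0.0)}
```

Note that only the relative positions are stored; the absolute placement is derived, so the same vector can produce dioramas at any scaling factor (per claim 14, the factor could come from a user's scaling request input).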

15. The method of claim 13, wherein the electronic device includes an image sensor, the method further comprising:

obtaining, via the image sensor, pass-through image data bounded by a field-of-view of a physical environment associated with the image sensor;

identifying, within the pass-through image data, one or more physical objects within the physical environment; and

determining the scaling factor based on the one or more physical objects.

16. The method of any of claims 1-15, wherein at least a subset of the corresponding plurality of ER environments is respectively associated with a plurality of ER environments corresponding to a plurality of ER sessions that are distinct from each other, and wherein each of the plurality of ER environments enables graphical representations of individuals to be concurrently within the ER environment.

17. The method of claim 16, wherein each of at least a subset of the plurality of diorama-view representations corresponding to at least the subset of the corresponding plurality of ER environments includes one or more ER representations of one or more corresponding individuals that are associated with a respective ER session.

18. The method of claim 17, wherein each of the one or more corresponding individuals is associated with a respective access level that satisfies an access level criterion that is associated with the respective ER session.

19. The method of claim 16, further comprising:

playing, via a speaker of the electronic device, a first set of speech data while displaying the plurality of diorama-view representations, wherein the first set of speech data is associated with one or more corresponding individuals that are associated with a particular ER session associated with a respective ER environment;

obtaining, via an audio sensor of the electronic device, a second set of speech data from a user associated with the electronic device; and

providing the second set of speech data to the respective ER environment so that the second set of speech data is audible to the one or more corresponding individuals that are associated with the particular ER session.

20. The method of claim 16, wherein each of at least a subset of the plurality of diorama-view representations corresponding to at least the subset of the corresponding plurality of ER environments is associated with a respective ER session, and wherein displaying the plurality of diorama-view representations is based on historical data about the electronic device joining the plurality of ER sessions.

21. The method of any of claims 1-20, further comprising obtaining, via an eye tracking sensor, eye gaze data indicative of an eye gaze location, wherein displaying the plurality of diorama-view representations is based on the eye gaze data.

22. The method of any of claims 1-21, further comprising:

receiving, via the one or more input devices, a diorama-selection input that selects the first one of the plurality of diorama-view representations; and

in response to receiving the diorama-selection input:

displaying, via the display device, an ER environment associated with the first one of the plurality of diorama-view representations; and

ceasing to display the plurality of diorama-view representations.

23. The method of any of claims 1-22, further comprising displaying, within a particular one of the plurality of diorama-view representations, a recording of activity within a respective ER environment associated with the particular one of the plurality of diorama-view representations.

24. The method of any of claims 1-23, wherein displaying the plurality of diorama-view representations is based on control values.

25. An electronic device, comprising:

one or more processors;

a non-transitory memory;

one or more input devices;

a display device; and

one or more programs, wherein the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing or causing performance of any of the methods of claims 1-24.

26. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by an electronic device, cause the electronic device to perform or cause performance of any of the methods of claims 1-24.

27. An electronic device, comprising:

one or more processors;

a non-transitory memory;

one or more input devices;

a display device; and

means for performing or causing performance of any of the methods of claims 1-24.

28. An information processing apparatus for use in an electronic device, comprising: means for performing or causing performance of any of the methods of claims 1-24.

29. A method comprising:

at an electronic device including one or more processors, a non-transitory memory, one or more input devices, and a display device:

displaying, via the display device, a home enhanced reality (ER) environment characterized by home ER world coordinates, including a first diorama-view representation of a first ER environment, wherein the first diorama-view representation includes one or more ER objects arranged in a spatial relationship according to first ER world coordinates;

detecting, via the one or more input devices, a first input that is directed to the first diorama-view representation; and

in response to detecting the first input, transforming the home ER environment by:

ceasing to display the first diorama-view representation within the home ER environment,

transforming the spatial relationship between a subset of the one or more ER objects as a function of the home ER world coordinates and the first ER world coordinates, and

displaying, via the display device, the subset of the one or more ER objects within the home ER environment based on the transformation.
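Claim 29's transformation step — re-expressing the spatial relationship as a function of the home ER world coordinates and the first ER world coordinates — can be sketched as a change of coordinate frame. This is a minimal sketch assuming the two frames differ only by a uniform scale and a translation (a simplification; real frames could differ by a full rigid transform). The names (`Frame`, `to_home`) are hypothetical.

```python
from dataclasses import dataclass

# Minimal sketch of claim 29's transformation: mapping ER object positions
# from the first ER world coordinates into the home ER world coordinates,
# assuming each frame is a uniform scale plus a translation of a shared
# reference frame. All names here are hypothetical.

@dataclass(frozen=True)
class Frame:
    scale: float
    origin: tuple  # (x, y, z) offset of the frame in a shared reference frame

def to_home(point: tuple, first: Frame, home: Frame) -> tuple:
    """Map a point from first ER world coordinates into home ER world coordinates."""
    # Lift the point into the shared reference frame...
    shared = tuple(first.scale * p + o for p, o in zip(point, first.origin))
    # ...then express it in the home frame.
    return tuple((s - o) / home.scale for s, o in zip(shared, home.origin))

first = Frame(scale=2.0, origin=(1.0, 0.0, 0.0))
home = Frame(scale=1.0, origin=(0.0, 0.0, 0.0))
print(to_home((0.5, 0.5, 0.5), first, home))  # (2.0, 1.0, 1.0)
```

The same function would be applied to each ER object in the subset before displaying it within the home ER environment.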

30. The method of claim 29, further comprising:

while displaying the subset of the one or more ER objects within the home ER environment based on the transformation, detecting, via the one or more input devices, a second input; and

in response to detecting the second input, adding the subset of the one or more ER objects to the home ER environment.

31. The method of claim 29, further comprising:

while displaying the subset of the one or more ER objects within the home ER environment based on the transformation, detecting, via the one or more input devices, a second input; and

in response to detecting the second input, replacing the home ER environment with the first ER environment that includes the subset of the one or more ER objects.

32. The method of any of claims 29-31, wherein, in response to detecting the first input, displaying the subset of the one or more ER objects from a first viewing vector, the method further comprising:

while displaying the subset of the one or more ER objects within the home ER environment based on the transformation, detecting, via the one or more input devices, a second input; and

in response to detecting the second input, changing display of the subset of the one or more ER objects from the first viewing vector to a second viewing vector while maintaining the subset of the one or more ER objects arranged according to the first ER world coordinates.

33. The method of claim 32, wherein the second input is directed to the subset of the one or more ER objects.

34. The method of claim 33, wherein the one or more input devices includes a hand tracking sensor, the method further comprising:

detecting the second input via the hand tracking sensor;

obtaining hand tracking data from the hand tracking sensor based on the second input; and

determining, from the hand tracking data, that the second input is directed to the subset of the one or more ER objects.

35. The method of claim 33, wherein the one or more input devices includes an eye tracking sensor, the method further comprising:

detecting the second input via the eye tracking sensor;

obtaining eye tracking data from the eye tracking sensor based on the second input; and

determining, from the eye tracking data, that the second input is directed to the subset of the one or more ER objects.

36. The method of claim 32, wherein the home ER environment includes one or more physical objects that are associated with the home ER world coordinates, and wherein changing display of the subset of the one or more ER objects from the first viewing vector to the second viewing vector includes moving the subset of the one or more ER objects relative to the one or more physical objects.

37. The method of claim 32, wherein the second input corresponds to a change in position of the electronic device from a first pose to a second pose relative to the subset of the one or more ER objects, and wherein changing display of the subset of the one or more ER objects from the first viewing vector to the second viewing vector is based on the change in position of the electronic device from the first pose to the second pose.

38. The method of any of claims 29-37, further comprising:

obtaining, via an image sensor, environmental data bounded by a field-of-view associated with the image sensor, wherein the environmental data is associated with a physical environment including one or more physical objects;

identifying, within the environmental data, a particular one of the one or more physical objects located within a spatial proximity threshold of the subset of the one or more ER objects; and

moving the subset of the one or more ER objects relative to the one or more physical objects based on the particular one of the one or more physical objects.

39. The method of any of claims 29-38, further comprising:

displaying, via the display device, a plurality of diorama-view representations of a corresponding plurality of ER environments within the home ER environment, wherein the plurality of diorama-view representations includes the first diorama-view representation; and

in response to detecting the first input, selecting the first diorama-view representation from the plurality of diorama-view representations.

40. The method of any of claims 29-39, further comprising:

displaying, via the display device, a plurality of diorama-view representations of a corresponding plurality of ER environments within the home ER environment, wherein the plurality of diorama-view representations includes the first diorama-view representation;

detecting, via the one or more input devices, a selection input that selects the first diorama-view representation from the plurality of diorama-view representations; and

in response to detecting the selection input, maintaining display of the first diorama-view representation within the home ER environment and ceasing to display the remainder of the plurality of diorama-view representations.

41. The method of claim 40, further comprising obtaining, via an eye tracking sensor, eye gaze data indicative of an eye gaze location, wherein the selection input is based on the eye gaze location.

42. The method of any of claims 29-41, wherein the first ER environment is associated with an ER session that enables respective graphical representations of individuals to be concurrently within the first ER environment.

43. The method of claim 42, wherein the first ER environment includes one or more ER representations respectively associated with one or more individuals that are connected to the ER session.

44. The method of claim 43, wherein each of the one or more individuals has a respective access level that satisfies an access level criterion that is associated with the ER session.

45. The method of claim 43, further comprising:

while displaying the first diorama-view representation:

playing, via a speaker of the electronic device, a first set of speech data that is associated with the one or more individuals that are connected to the ER session;

obtaining, via an audio sensor of the electronic device, a second set of speech data from a user that is associated with the electronic device; and

providing the second set of speech data to the one or more individuals that are connected to the ER session.

46. The method of any of claims 29-45, further comprising:

obtaining a characterization vector that provides a spatial characterization of the first ER environment, wherein the characterization vector includes a plurality of object label values that respectively identify the one or more ER objects, and wherein the characterization vector also includes a plurality of relative position values providing respective positions of the one or more ER objects relative to each other; and

generating, from the first ER environment and the characterization vector, the first diorama-view representation.

47. The method of any of claims 29-46, wherein displaying the first diorama-view representation includes animating a portion of the one or more ER objects.

48. An electronic device, comprising:

one or more processors;

a non-transitory memory;

one or more input devices;

a display device; and

one or more programs, wherein the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing or causing performance of any of the methods of claims 29-47.

49. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by an electronic device, cause the electronic device to perform or cause performance of any of the methods of claims 29-47.

50. An electronic device, comprising:

one or more processors;

a non-transitory memory;

one or more input devices;

a display device; and

means for performing or causing performance of any of the methods of claims 29-47.

51. An information processing apparatus for use in an electronic device, comprising: means for performing or causing performance of any of the methods of claims 29-47.