WO2020112513 - PERSPECTIVE SHUFFLING IN VIRTUAL CO-EXPERIENCING SYSTEMS

CLAIMS

What is claimed is:

1. A method comprising:

by a first computing device associated with a first user, connecting to a virtual session for co-experiencing digital media content with one or more other users in a virtual reality environment, wherein the virtual reality environment comprises a screen for displaying the digital media content;

by the first computing device, receiving relative-position information indicating relative positions between the first user and the one or more other users in the virtual reality environment;

by the first computing device, rendering the screen based on a first position of the first user in the virtual reality environment, wherein the screen and the first position of the first user have a predefined spatial relationship in the virtual reality environment; and

by the first computing device, rendering, based on the received relative-position information and the first position of the first user, a second avatar representing a second user in the virtual reality environment, wherein the second user is one of the one or more other users;

wherein on a second computing device associated with a second user of the one or more other users:

the screen and a first avatar representing the first user are rendered based on a second position associated with the second user in the virtual reality environment;

the screen rendered by the second computing device and the second position of the second user have the predefined spatial relationship in the virtual reality environment; and

the screen rendered by the second computing device and the first avatar representing the first user have a different spatial relationship in the virtual reality environment than the predefined spatial relationship.
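
For orientation only, the following Python sketch illustrates the rendering arrangement recited in Claim 1: each client places the shared screen in the same predefined relationship to its own (local) user and places the other users' avatars from the received relative-position information, so every participant is rendered front-and-centre in their own view. All identifiers (Pose, render_local_view, SCREEN_DISTANCE) and the 2D coordinate convention are illustrative assumptions, not part of the claims.

from dataclasses import dataclass

SCREEN_DISTANCE = 3.0  # assumed predefined distance from the local user to the screen


@dataclass
class Pose:
    x: float        # lateral offset in the local user's frame of reference
    z: float        # depth in front of the local user
    facing: float   # heading in degrees; 0 means facing the screen


def render_local_view(local_user: str, relative_positions: dict) -> dict:
    """Build the scene as rendered on one client.

    relative_positions maps each remote user to a lateral offset (in metres)
    relative to the local user, standing in for the relative-position
    information received for the virtual session.
    """
    scene = {
        # The screen always has the same predefined spatial relationship to
        # the local user: a fixed distance directly ahead, centred on the
        # local user's sightline.
        "screen": Pose(x=0.0, z=SCREEN_DISTANCE, facing=180.0),
        # The local user is the origin of their own view.
        local_user: Pose(x=0.0, z=0.0, facing=0.0),
    }
    # Remote avatars are placed from the relative-position information, so on
    # this client they do not share the screen's predefined relationship.
    for user, lateral_offset in relative_positions.items():
        scene[user] = Pose(x=lateral_offset, z=0.0, facing=0.0)
    return scene


if __name__ == "__main__":
    # On the first user's device, the second user sits one metre to the right;
    print(render_local_view("first_user", {"second_user": 1.0}))
    # on the second user's device, the first user sits one metre to the left,
    # and the screen is again centred on the (now local) second user.
    print(render_local_view("second_user", {"first_user": -1.0}))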

2. The method of Claim 1, wherein connecting to the virtual session is in response to an invitation from a third computing device.

3. The method of Claim 2, wherein connecting to the virtual session comprises sending a join request to the third computing device, wherein the join request comprises an identifier of the first user and an identifier for the first avatar selected by the first user; wherein the third computing device assigns a relative position to the first user in the virtual reality environment; and wherein the third computing device maintains associations between participating users and their corresponding avatars at respective positions.

4. The method of Claim 3, wherein the third computing device is a server managing the virtual session, and wherein communication messages between the computing devices are routed via the server.

5. The method of Claim 3, wherein the third computing device is associated with a user hosting the virtual session.
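
As a non-limiting illustration of the join handling recited in Claims 3 through 5, the sketch below models the third computing device (whether a managing server or a hosting user's device) assigning a relative position on each join request and maintaining the user/avatar/position associations. The class and method names are assumed for illustration; the claims do not prescribe an API or message format.

from dataclasses import dataclass, field


@dataclass
class Participant:
    user_id: str
    avatar_id: str
    position_index: int  # relative seat assigned by the host


@dataclass
class SessionHost:
    """Stands in for the third computing device: a server managing the
    session (Claim 4) or a hosting user's device (Claim 5)."""
    participants: dict = field(default_factory=dict)

    def handle_join_request(self, user_id: str, avatar_id: str) -> Participant:
        # Assign the next free relative position and record the association
        # between the user, the selected avatar, and that position.
        participant = Participant(user_id, avatar_id, len(self.participants))
        self.participants[user_id] = participant
        # A real host would now send updated relative-position information to
        # the other participating clients (directly or via the server).
        return participant


if __name__ == "__main__":
    host = SessionHost()
    host.handle_join_request("first_user", "avatar_7")
    host.handle_join_request("second_user", "avatar_2")
    print(host.participants)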

6. The method of Claim 1, wherein the predefined spatial relationship between the screen and a position of a user in the virtual reality environment is that the screen is positioned at a predetermined distance from the position and the screen is centered at and perpendicular to a sightline of the user when the user faces forward.
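
One possible realisation of the predefined spatial relationship of Claim 6, sketched in 2D under assumed units: the screen centre is placed a predetermined distance along the user's forward sightline, and the screen normal points back at the user, i.e. the screen is perpendicular to that sightline. The function name and vector convention are assumptions.

import math

SCREEN_DISTANCE = 3.0  # assumed predetermined distance from the user's position


def screen_pose(user_xy, forward_deg):
    """Return (centre, normal) of the screen for a user at user_xy facing forward_deg."""
    fx = math.cos(math.radians(forward_deg))
    fy = math.sin(math.radians(forward_deg))
    # The screen centre lies on the sightline, a predetermined distance ahead.
    centre = (user_xy[0] + SCREEN_DISTANCE * fx,
              user_xy[1] + SCREEN_DISTANCE * fy)
    # The screen faces back toward the user, i.e. it is perpendicular to the sightline.
    normal = (-fx, -fy)
    return centre, normal


if __name__ == "__main__":
    print(screen_pose((0.0, 0.0), 90.0))  # user at the origin, facing the +y direction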

7. The method of Claim 1, further comprising:

by the first computing device, receiving a notification that a facing direction of the second user has changed from a first direction to a second direction; and

by the first computing device, re-rendering the second avatar corresponding to the second user in the virtual reality environment rendered by the first computing device to synchronize a facing direction of the second avatar to the facing direction of the second user.

8. The method of Claim 7, wherein re-rendering the second avatar comprises:

determining that the second direction is within a pre-determined range of directions to the screen within the virtual reality environment rendered by the second computing device; and

rendering, in response to the determination, the second avatar such that the second avatar faces the screen in the virtual reality environment rendered by the first computing device.
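
The sketch below illustrates, under stated assumptions, the facing-direction synchronisation of Claims 7 and 8: if the remote user's new facing direction falls within a pre-determined range of directions toward the screen as rendered on their device, the local client renders that user's avatar as facing the locally rendered screen; otherwise the angular offset from the screen is preserved. The 30-degree threshold and helper names are illustrative, not taken from the application.

SCREEN_FACING_RANGE_DEG = 30.0  # assumed pre-determined range of directions


def synced_avatar_facing(remote_facing_deg, remote_screen_bearing_deg,
                         local_screen_bearing_deg):
    """Map a remote user's facing direction into the local client's scene.

    remote_facing_deg:         facing direction reported for the remote user
    remote_screen_bearing_deg: bearing of the screen in the remote user's scene
    local_screen_bearing_deg:  bearing of the screen in the local user's scene
    """
    # Smallest signed angle between the remote user's facing direction and the
    # screen as rendered on the remote user's own device.
    offset = (remote_facing_deg - remote_screen_bearing_deg + 180.0) % 360.0 - 180.0
    if abs(offset) <= SCREEN_FACING_RANGE_DEG:
        # Roughly facing the screen there: render the avatar facing the screen here.
        return local_screen_bearing_deg
    # Otherwise keep the same angular offset relative to the locally rendered screen.
    return (local_screen_bearing_deg + offset) % 360.0


if __name__ == "__main__":
    print(synced_avatar_facing(10.0, 0.0, 0.0))  # within range -> 0.0 (faces the screen)
    print(synced_avatar_facing(90.0, 0.0, 0.0))  # outside range -> 90.0 (offset preserved)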

9. The method of Claim 2, further comprising:

by the first computing device, receiving, from the third computing device, a notification that the second user has left the virtual session, wherein the notification comprises an identifier of the second user; and

by the first computing device, removing the second avatar corresponding to the second user from the virtual reality environment.

10. The method of Claim 2, further comprising:

by the first computing device, receiving, from the third computing device, a notification that a new user has joined the virtual session, wherein the notification comprises an identifier of the new user and relative-position information comprising a relative position of the new user; and

by the first computing device, rendering, based on the first position and the received relative-position information, a third avatar corresponding to the new user in the virtual reality environment.
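
For illustration of the notification handling in Claims 9 and 10, the following sketch removes an avatar on a leave notification and adds one, at the received relative position, on a join notification. The plain-dictionary message format and field names are assumptions; the claims only require an identifier and, for joins, relative-position information.

from dataclasses import dataclass, field


@dataclass
class LocalScene:
    avatars: dict = field(default_factory=dict)  # user identifier -> lateral offset

    def on_notification(self, message: dict) -> None:
        if message["type"] == "user_left":
            # Claim 9: remove the avatar identified in the notification.
            self.avatars.pop(message["user_id"], None)
        elif message["type"] == "user_joined":
            # Claim 10: render a new avatar from the relative-position
            # information carried by the notification.
            self.avatars[message["user_id"]] = message["relative_offset"]


if __name__ == "__main__":
    scene = LocalScene({"second_user": 1.0})
    scene.on_notification({"type": "user_joined", "user_id": "new_user",
                           "relative_offset": -1.0})
    scene.on_notification({"type": "user_left", "user_id": "second_user"})
    print(scene.avatars)  # {'new_user': -1.0}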

11. One or more computer-readable non-transitory storage media embodying software that is operable, when executed on a first computing device associated with a first user, to:

connect to a virtual session for co-experiencing digital media content with one or more other users in a virtual reality environment, wherein the virtual reality environment comprises a screen for displaying the digital media content;

receive relative-position information indicating relative positions between the first user and the one or more other users in the virtual reality environment;

render the screen based on a first position of the first user in the virtual reality environment, wherein the screen and the first position of the first user have a predefined spatial relationship in the virtual reality environment; and

render, based on the received relative-position information and the first position of the first user, a second avatar representing a second user in the virtual reality environment, wherein the second user is one of the one or more other users;

wherein on a second computing device associated with a second user of the one or more other users:

the screen and a first avatar representing the first user are rendered based on a second position associated with the second user in the virtual reality environment;

the screen rendered by the second computing device and the second position of the second user have the predefined spatial relationship in the virtual reality environment; and

the screen rendered by the second computing device and the first avatar representing the first user have a different spatial relationship in the virtual reality environment than the predefined spatial relationship.

12. The media of Claim 11, wherein connecting to the virtual session is in response to an invitation from a third computing device.

13. The media of Claim 12, wherein connecting to the virtual session comprises sending a join request to the third computing device, wherein the join request comprises an identifier of the first user and an identifier for the first avatar selected by the first user; wherein the third computing device assigns a relative position to the first user in the virtual reality environment; and wherein the third computing device maintains associations between participating users and their corresponding avatars at respective positions.

14. The media of Claim 13, wherein the third computing device is a server managing the virtual session.

15. A first computing device associated with a first user comprising:

one or more processors; and

one or more computer-readable non-transitory storage media coupled to one or more of the processors and comprising instructions operable when executed by one or more of the processors to cause the first computing device to:

connect to a virtual session for co-experiencing digital media content with one or more other users in a virtual reality environment, wherein the virtual reality environment comprises a screen for displaying the digital media content;

receive relative-position information indicating relative positions between the first user and the one or more other users in the virtual reality environment;

render the screen based on a first position of the first user in the virtual reality environment, wherein the screen and the first position of the first user have a predefined spatial relationship in the virtual reality environment; and

render, based on the received relative-position information and the first position of the first user, a second avatar representing a second user in the virtual reality environment, wherein the second user is one of the one or more other users;

wherein on a second computing device associated with a second user of the one or more other users:

the screen and a first avatar representing the first user are rendered based on a second position associated with the second user in the virtual reality environment;

the screen rendered by the second computing device and the second position of the second user have the predefined spatial relationship in the virtual reality environment; and

the screen rendered by the second computing device and the first avatar representing the first user have a different spatial relationship in the virtual reality environment than the predefined spatial relationship.