(WO2019008320) VIRTUAL MEETING PARTICIPANT RESPONSE INDICATION METHOD AND SYSTEM

CLAIMS

1. A system for indicating emotive responses in a virtual meeting, the system comprising:

at least one processor; and

a memory storing instructions, the instructions being executable by the at least one processor to:

create or select avatar data defining one or more avatars to represent one or more corresponding users in response to input from the one or more corresponding users;

receive one or more user selections of meeting data defining one or more virtual meetings, a user selection comprising an indication that the user is attending the virtual meeting;

generate an output for display of a virtual meeting with one or more avatars representing one or more users attending the meeting using the avatar data and the meeting data corresponding to the virtual meeting;

receive emotive input data from one or more users indicative of an emotive response or body language of the one or more users attending the virtual meeting;

process the avatar data using the emotive input data; and

update the output for display of the virtual meeting to render the one or more avatars for the one or more users to display a respective emotive state dependent upon the respective emotive input data.
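
The following is a minimal, hypothetical sketch (not part of the claims) of how the steps recited in claim 1 might map onto code. All class, method, and field names are illustrative assumptions and do not reflect any implementation disclosed in the application.

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    user_id: str
    appearance: dict          # avatar data created or selected by the user
    emotive_state: str = "neutral"

@dataclass
class VirtualMeeting:
    meeting_id: str
    attendees: dict = field(default_factory=dict)  # user_id -> Avatar

class MeetingSystem:
    """Illustrative sketch of the processor/memory system of claim 1."""

    def __init__(self):
        self.avatars = {}    # avatar data, per user
        self.meetings = {}   # meeting data, per virtual meeting

    def create_avatar(self, user_id, appearance):
        # "create or select avatar data ... in response to input from the user"
        self.avatars[user_id] = Avatar(user_id, appearance)

    def join_meeting(self, user_id, meeting_id):
        # "receive one or more user selections of meeting data ...
        #  an indication that the user is attending the virtual meeting"
        meeting = self.meetings.setdefault(meeting_id, VirtualMeeting(meeting_id))
        meeting.attendees[user_id] = self.avatars[user_id]

    def render(self, meeting_id):
        # "generate an output for display of a virtual meeting with one or
        #  more avatars representing one or more users attending the meeting"
        meeting = self.meetings[meeting_id]
        return [(a.user_id, a.emotive_state) for a in meeting.attendees.values()]

    def receive_emotive_input(self, meeting_id, user_id, emotive_state):
        # "receive emotive input data ... process the avatar data ... and
        #  update the output for display" so the avatar shows the emotive state
        self.meetings[meeting_id].attendees[user_id].emotive_state = emotive_state
        return self.render(meeting_id)
```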

2. A system according to claim 1, wherein the instructions comprise instructions executable by the at least one processor to render the one or more avatars to display body language associated with the emotive input data.

3. A system according to claim 1 or claim 2, including instructions executable by the at least one processor to receive video data for a meeting, wherein the video data includes video images of one or more participants in a meeting, and the instructions executable by the at least one processor to generate the output for display comprise instructions executable by the at least one processor to generate the output for display as an augmented reality meeting with one or more avatars representing one or more users overlaid on the video data with the video images of the participants.
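
Claim 3 composites rendered avatars over video images of the participants. The sketch below, assuming the frame is a NumPy RGB array and the avatar is a pre-rendered RGBA sprite, shows one possible alpha-blend overlay; the actual augmented-reality rendering pipeline is not specified by the claim.

```python
import numpy as np

def overlay_avatar(frame: np.ndarray, avatar_rgba: np.ndarray, x: int, y: int) -> np.ndarray:
    """Alpha-blend a rendered avatar sprite onto a video frame (claim 3 sketch).

    frame:       H x W x 3 uint8 video image of the meeting participants
    avatar_rgba: h x w x 4 uint8 rendered avatar with an alpha channel
    x, y:        top-left position of the avatar within the frame
    """
    h, w = avatar_rgba.shape[:2]
    region = frame[y:y + h, x:x + w].astype(np.float32)
    rgb = avatar_rgba[..., :3].astype(np.float32)
    alpha = avatar_rgba[..., 3:4].astype(np.float32) / 255.0
    # Composite the avatar over the live video so the augmented-reality
    # meeting shows both the participants and the overlaid avatars.
    frame[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * region).astype(np.uint8)
    return frame
```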

4. A system according to any preceding claim, including instructions executable by the at least one processor to store a predefined set of emotive states, wherein instructions executable by the at least one processor to receive the emotive input data comprise instructions to receive the emotive input data as a selection of an output for display of a menu of the emotive states.
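
Claim 4 stores a predefined set of emotive states and receives the emotive input as a selection from a displayed menu. A minimal sketch follows; the particular state names are hypothetical examples, not the set defined in the application.

```python
from enum import Enum

class EmotiveState(Enum):
    # A hypothetical predefined set of emotive states (claim 4).
    AGREE = "agree"
    DISAGREE = "disagree"
    CONFUSED = "confused"
    AMUSED = "amused"
    BORED = "bored"

def emotive_state_menu() -> str:
    """Build the menu text that the client displays for selection."""
    return "\n".join(f"{i}: {state.value}" for i, state in enumerate(EmotiveState))

def select_emotive_state(menu_index: int) -> EmotiveState:
    """Map the user's menu selection back to a predefined emotive state."""
    return list(EmotiveState)[menu_index]
```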

5. A system according to any preceding claim, including instructions executable by the at least one processor to receive interaction input from one or more users attending the virtual meeting to cause the avatars to perform required interaction, and to update the output for display of the virtual meeting to render the one or more avatars for the one or more users from which interaction data is received to display the required interaction.
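
Claim 5 adds interaction input that causes an avatar to perform a required interaction (for example, waving or applauding). The hypothetical helper below extends the earlier MeetingSystem sketch; the interaction attribute and names are assumptions for illustration only.

```python
def receive_interaction_input(system, meeting_id, user_id, interaction):
    """Claim 5 sketch: record an interaction (e.g. 'wave', 'applaud') for the
    user's avatar and re-render the meeting so the avatar is displayed
    performing the required interaction."""
    avatar = system.meetings[meeting_id].attendees[user_id]
    avatar.interaction = interaction  # dynamic attribute, for the sketch only
    return [(a.user_id, a.emotive_state, getattr(a, "interaction", None))
            for a in system.meetings[meeting_id].attendees.values()]
```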

6. A method of indicating emotive responses in a virtual meeting, the method comprising:

creating or selecting avatar data defining one or more avatars to represent one or more corresponding users in response to input from the one or more corresponding users;

receiving one or more user selections of meeting data defining one or more virtual meetings, a user selection comprising an indication that the user is attending the virtual meeting;

generating an output for display of a virtual meeting with one or more avatars representing one or more users attending the meeting using the avatar data and the meeting data corresponding to the virtual meeting;

receiving emotive input data from one or more users indicative of an emotive response or body language of the one or more users attending the virtual meeting;

processing the avatar data using the emotive input data; and

updating the output for display of the virtual meeting to render the one or more avatars for the one or more users to display a respective emotive state dependent upon the respective emotive input data.

7. A method according to claim 6, wherein the one or more avatars are rendered to display body language associated with the emotive input data.

8. A method according to claim 6 or claim 7, including receiving video data for a meeting, wherein the video data includes video images of one or more participants in a meeting, and the output is generated for display as an augmented reality meeting with one or more avatars representing one or more users overlaid on the video data with the video images of the participants.

9. A method according to any one of claims 6 to 8, including storing a predefined set of emotive states, wherein the emotive input data is received as a selection of an output for display of a menu of the emotive states.

10. A method according to any one of claims 6 to 9, including receiving interaction input from one or more users attending the virtual meeting to cause the avatars to perform required interaction, and updating the output for display of the virtual meeting to render the one or more avatars for the one or more users from which interaction data is received to display the required interaction.

11. A carrier medium carrying processor executable code for execution by a processor to carry out the method of any one of claims 6 to 10.

12. A non-transient storage medium storing processor executable code for execution by a processor to carry out the method of any one of claims 6 to 10.