
WO2020161701 - TELECONFERENCING DEVICE

Note: Text based on automatic optical character recognition processes. Only the PDF version has legal value.


CLAIMS:

1. A teleconferencing device comprising:

a propulsion system for flying the teleconferencing device having a surface, the propulsion system capable of making the teleconferencing device hover in place and change its position;

a projection unit capable of projecting images on at least a given part of the surface having a fixed position with respect to the projection unit;

at least one sensor capable of obtaining information enabling mapping an environment surrounding the teleconferencing device; and

a processing unit, configured to:

obtain information from the at least one sensor;

map the environment surrounding the teleconferencing device, using the obtained information, the environment including at least one user of the teleconferencing device;

track, within the mapped environment, a position and an orientation of at least one user of the teleconferencing device with respect to the teleconferencing device;

determine a desired position and orientation of the given part of the surface with respect to the at least one user based at least on the tracked position and orientation of the at least one user and on one or more session related parameters;

activate the propulsion system to fly the teleconferencing device to the determined desired position and orientation upon the given part of the surface not being positioned in the determined desired position and orientation;

receive a stream of images captured by a remote device; and

instruct the projection unit to project the received stream of images on the given part of the surface.
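For illustration, a minimal Python sketch of the processing-unit cycle recited in claim 1, under the assumption of hypothetical hardware interfaces; none of the names below (sensors, mapper, tracker, planner, propulsion, projector, video_stream) come from the application itself.

```python
import time

def teleconference_cycle(sensors, mapper, tracker, planner,
                         propulsion, projector, video_stream,
                         session_params, pose_tolerance=0.05):
    """One pass of the claim-1 loop: map, track, reposition, project.
    All collaborating objects are hypothetical placeholders."""
    while video_stream.is_open():
        # Obtain information from the at least one sensor.
        readings = sensors.read()

        # Map the environment surrounding the device, users included.
        env_map = mapper.update(readings)

        # Track user position and orientation relative to the device.
        users = tracker.track(env_map)

        # Determine the desired pose of the given part of the surface
        # from the tracked users and the session-related parameters.
        desired = planner.desired_pose(users, session_params)

        # Activate propulsion only if the surface is not already there.
        if propulsion.current_pose().distance_to(desired) > pose_tolerance:
            propulsion.fly_to(desired)

        # Receive the remote stream and project it on the surface part.
        projector.project(video_stream.receive_frame())

        time.sleep(1 / 30)  # pace the loop at roughly 30 Hz
```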

2. The teleconferencing device of claim 1, wherein the session related parameters include a measured signal strength or signal quality of a network connection through which the stream of images is received.
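Claim 2 folds link quality into the session-related parameters. One plausible normalization, assuming the RSSI has already been read from the wireless driver; the bounds below are illustrative, not from the claim:

```python
def signal_quality(rssi_dbm: float, floor: float = -90.0,
                   ceiling: float = -30.0) -> float:
    """Map a Wi-Fi RSSI reading in dBm onto a 0..1 quality score."""
    return min(1.0, max(0.0, (rssi_dbm - floor) / (ceiling - floor)))

# Example: -60 dBm sits halfway between the assumed bounds.
assert signal_quality(-60.0) == 0.5
```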

3. The teleconferencing device of claim 1, wherein the processing unit is further configured to estimate a viewing quality of the images viewed by the user, and wherein the session related parameters include the estimated viewing quality.
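Claim 3 leaves the viewing-quality estimate open. One simple geometric heuristic, stated purely as an assumption, scores the user's distance and viewing angle relative to the projected image:

```python
import math

def viewing_quality(distance_m: float, viewing_angle_deg: float,
                    ideal_distance_m: float = 2.0,
                    max_angle_deg: float = 60.0) -> float:
    """Score in [0, 1]: highest when the user faces the given part of
    the surface head-on from the ideal distance; decays with distance
    error and with oblique viewing angles. Constants are illustrative."""
    distance_term = math.exp(-abs(distance_m - ideal_distance_m))
    angle_term = max(0.0, 1.0 - viewing_angle_deg / max_angle_deg)
    return distance_term * angle_term
```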

4. The teleconferencing device of claim 1, further comprising at least one speaker, and wherein the processing unit is further configured to:

receive a stream of sound captured by the remote device;

output the sound to the at least one user via the at least one speaker; and

estimate a sound quality of the sound received by the user; and

wherein the session related parameters include the estimated sound quality.
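Claim 4 does not say how sound quality is estimated. A hedged sketch, assuming the device can count audio packets and measure a mean signal-to-noise ratio on the received stream:

```python
def received_sound_quality(packets_expected: int, packets_received: int,
                           mean_snr_db: float) -> float:
    """Blend packet-delivery ratio with a normalized SNR into a 0..1
    sound-quality estimate; the 40 dB ceiling and the 50/50 weighting
    are assumptions, not claim language."""
    delivery = packets_received / max(packets_expected, 1)
    snr_norm = min(1.0, max(0.0, mean_snr_db / 40.0))
    return 0.5 * delivery + 0.5 * snr_norm
```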

5. The teleconferencing device of claim 1, further comprising at least one microphone, and wherein the processing unit is further configured to acquire sound using the microphone and determine an ambient noise level by analyzing the acquired sound, and wherein the session related parameters include the determined ambient noise level.
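The ambient noise level of claim 5 can be read off the microphone buffer directly, for instance as an RMS level in dB relative to full scale (a standard measure, though the claim does not prescribe one):

```python
import math

def ambient_noise_dbfs(samples: list[float]) -> float:
    """RMS level of an audio buffer in dBFS, with samples assumed
    normalized to [-1.0, 1.0]; clamped to avoid log(0) on silence."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-9))
```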

6. The teleconferencing device of claim 1, wherein the processing unit is further configured to determine at least one of (a) amounts of light and (b) directions of light, in a respective plurality of positions in the environment surrounding the teleconferencing device, and wherein the session related parameters include the determined amounts of light or directions of light.
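Claim 6 feeds light measurements into the positioning decision without fixing an algorithm. One plausible use, stated as an assumption: prefer the mapped cell with the least ambient light, where a projected image keeps the most contrast:

```python
def best_projection_cell(lux_by_cell: dict[str, float]) -> str:
    """Pick the mapped cell with the lowest measured illuminance."""
    return min(lux_by_cell, key=lux_by_cell.get)

# Hypothetical lux readings at three mapped positions:
readings = {"near_window": 800.0, "room_center": 300.0, "corner": 120.0}
assert best_projection_cell(readings) == "corner"
```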

7. The teleconferencing device of claim 1, further comprising a mechanical attachment capable of attaching to a balloon for causing air buoyancy of the teleconferencing device, and wherein the surface is a surface of the balloon.

8. The teleconferencing device of claim 7, wherein the propulsion system comprises air jets.

9. The teleconferencing device of claim 7, wherein the hovering is obtained by air buoyancy caused by the balloon.

10. The teleconferencing device of claim 1, wherein the desired position and orientation is determined so that a clear line of sight is maintained between the given part of the surface and the at least one user.
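A clear line of sight, as required by claim 10, can be checked against the mapped environment by sampling the segment between the surface and the user for obstacles; the occupancy predicate below is an assumption:

```python
def clear_line_of_sight(occupied, surface_xy, user_xy, steps=100):
    """True if no sampled point strictly between surface_xy and
    user_xy (both (x, y) tuples) hits an obstacle; occupied(x, y)
    is a hypothetical predicate over the mapped environment."""
    (ax, ay), (bx, by) = surface_xy, user_xy
    for i in range(1, steps):
        t = i / steps
        if occupied(ax + t * (bx - ax), ay + t * (by - ay)):
            return False
    return True
```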

11. The teleconferencing device of claim 1, wherein the sensor is at least one camera.

12. The teleconferencing device of claim 1, wherein the processing unit is further configured to classify a use scenario of the teleconferencing device, utilizing the mapped environment and using a use scenario classifier; and wherein the desired position and orientation is determined using the use scenario.

13. The teleconferencing device of claim 12, wherein the use scenario classifier is configured to classify the mapped environment into a given use scenario of a plurality of pre-classified use scenarios, each pre-classified use scenario of the pre-classified use scenarios simulating a respective distinct behavior of a physically present user had the teleconferencing device been the physically present user.

14. The teleconferencing device of claim 13, wherein the classifier performs the classification based on one or more of: an activity performed by the user, a facial expression of the user, a voice volume of the user, a vocal expression of the user, a change in body movement rate of the user, a change in the user’s body position, or a change in the user’s body behavior.

15. The teleconferencing device of claim 12, wherein the use scenario classifier is a machine learning classifier.
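Claims 12 to 15 describe a machine learning use-scenario classifier over the behavioral cues listed in claim 14. A toy sketch with scikit-learn (the library choice, feature encoding, scenario labels, and training data are all assumptions):

```python
from sklearn.ensemble import RandomForestClassifier

# One numeric feature per cue named in claim 14, encoded in [0, 1]:
# activity, facial expression, voice volume, vocal expression,
# movement-rate change, body-position change, body-behavior change.
X_train = [[0.9, 0.2, 0.7, 0.4, 0.1, 0.0, 0.1],
           [0.1, 0.8, 0.3, 0.6, 0.2, 0.1, 0.2],
           [0.3, 0.4, 0.5, 0.5, 0.9, 0.8, 0.7]]
y_train = ["presentation", "casual_chat", "walk_and_talk"]  # illustrative

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify a freshly mapped environment into a pre-classified scenario.
print(clf.predict([[0.8, 0.3, 0.6, 0.5, 0.2, 0.1, 0.1]])[0])
```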

16. A method of operating a teleconferencing device, the method comprising:

obtaining information from at least one sensor of the teleconferencing device, the sensor capable of obtaining information enabling mapping an environment surrounding the teleconferencing device;

mapping the environment surrounding the teleconferencing device, using the obtained information, the environment including at least one user of the teleconferencing device;

tracking, within the mapped environment, a position and an orientation of at least one user of the teleconferencing device with respect to the teleconferencing device;

determining a desired position and orientation of the given part of a surface of the teleconferencing device with respect to the at least one user based at least on the tracked position and orientation of the at least one user and on one or more session related parameters;

activating a propulsion system of the teleconferencing device to fly the teleconferencing device to the determined desired position and orientation upon the given part of the surface not being positioned in the determined desired position and orientation, wherein the propulsion system is capable of making the teleconferencing device hover in place and change its position;

receiving a stream of images captured by a remote device; and

instructing a projection unit, capable of projecting images on at least a given part of the surface having a fixed position with respect to the projection unit, to project the received stream of images on the given part of the surface.

17. The method of claim 16, wherein the session related parameters include a measured signal strength or signal quality of a network connection through which the stream of images is received.

18. The method of claim 16, further comprising estimating a viewing quality of the images viewed by the user, and wherein the session related parameters include the estimated viewing quality.

19. The method of claim 16, further comprising:

receiving a stream of sound captured by the remote device;

outputting the sound to the at least one user via at least one speaker of the teleconferencing device; and

estimating a sound quality of the sound received by the user; and

wherein the session related parameters include the estimated sound quality.

20. The method of claim 16, further comprising acquiring sound using a microphone of the teleconferencing device and determining an ambient noise level by analyzing the acquired sound, and wherein the session related parameters include the determined ambient noise level.

21. The method of claim 16, further comprising determining at least one of (a) amounts of light and (b) directions of light, in a respective plurality of positions in the environment surrounding the teleconferencing device, and wherein the session related parameters include the determined amounts of light or directions of light.

22. The method of claim 16, wherein the teleconferencing device further comprises a mechanical attachment capable of attaching it to a balloon for causing air buoyancy of the teleconferencing device, and wherein the surface is a surface of the balloon.

23. The method of claim 22, wherein the propulsion system comprises air jets.

24. The method of claim 22, wherein the hovering is obtained by air buoyancy caused by the balloon.

25. The method of claim 16, wherein the desired position and orientation is determined so that a clear line of sight is maintained between the given part of the surface and the at least one user.

26. The method of claim 16, wherein the sensor is at least one camera.

27. The method of claim 16, further comprising classifying a use scenario of the teleconferencing device, utilizing the mapped environment and using a use scenario classifier; and wherein the desired position and orientation is determined using the use scenario.

28. The method of claim 27, wherein the use scenario classifier is configured to classify the mapped environment into a given use scenario of a plurality of pre-classified use scenarios, each pre-classified use scenario of the pre-classified use scenarios simulating a respective distinct behavior of a physically present user had the teleconferencing device been the physically present user.

29. The method of claim 28, wherein the classifier performs the classification based on one or more of: an activity performed by the user, a facial expression of the user, a voice volume of the user, a vocal expression of the user, a change in body movement rate of the user, a change in the user’s body position, or a change in the user’s body behavior.

30. The method of claim 27, wherein the use scenario classifier is a machine learning classifier.