1. (WO2019049083) METHODS AND SYSTEMS FOR GENERATING DETAILED DATASETS OF AN ENVIRONMENT VIA GAMEPLAY
Note: Text based on automatic optical character recognition processes. Only the PDF version has legal value.

CLAIMS

What is claimed is:

1. A computer-implemented method, comprising:

obtaining, from a camera of a computing device, an image of an environment;

determining, based on a first comparison of the image to a stored dataset in a database, that the stored dataset lacks one or more details of the environment;

providing a command by the computing device that indicates a request to obtain additional data of the environment;

in response to the command, obtaining, from one or more sensors of the computing device, additional data of the environment;

determining, based on a second comparison to the stored dataset in the database, that the additional data of the environment differs from data of the environment in the stored dataset; and

based on the additional data of the environment differing from data of the environment in the stored dataset, providing one or more points for gameplay on an interface of the computing device.
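The capture-compare-reward loop recited in claim 1 can be sketched in Python. The dataset representation (a dict mapping each area to a set of observed feature tags), the scoring rule, and all names are illustrative assumptions, not part of the claim:

```python
def process_capture(stored: dict, area: str, observed: set,
                    points_per_update: int = 10) -> int:
    """Award gameplay points when newly captured data adds detail
    that the stored dataset for `area` lacks."""
    known = stored.setdefault(area, set())
    missing = observed - known   # first comparison: dataset lacks these details
    if not missing:
        return 0                 # nothing new: no command needed, no points
    # The "command" step would prompt the player to capture additional
    # sensor data; here the new observations stand in for that capture.
    known |= missing             # second comparison found a difference: merge
    return points_per_update * len(missing)
```

For example, capturing two previously unseen objects in an empty dataset yields two updates' worth of points, and re-capturing the same objects yields none.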

2. The computer-implemented method of claim 1, wherein the request indicates one or more areas of the environment at which to obtain the additional data of the environment.

3. The computer-implemented method of claim 1, wherein the environment comprises a house, and wherein providing the command comprises:

providing information indicating a request to obtain additional data of a specific room in the house.

4. The computer-implemented method of claim 1, wherein the one or more sensors of the computing device include a depth camera, and wherein obtaining, from the one or more sensors of the computing device, the additional data of the environment comprises:

obtaining one or more depth images of the environment.

5. The computer-implemented method of claim 1, wherein the one or more sensors of the computing device include a microphone, and wherein obtaining, from the one or more sensors of the computing device, the additional data of the environment comprises:

obtaining, using the microphone, audio from the environment.

6. The computer-implemented method of claim 1, further comprising:

generating a floorplan of the environment based on the additional data of the environment.

7. The computer-implemented method of claim 1, wherein the image of the environment comprises an image of an object, and the method further comprises:

receiving, from a server, an identification of the object based on the image of the object;

determining, based on the first comparison to the stored dataset in the database, that the stored dataset lacks one or more details of the object;

providing in the command a further request to obtain additional data of the object;

obtaining, from the one or more sensors of the computing device, additional data of the object;

determining, based on the second comparison to the stored dataset in the database, that the additional data of the object differs from data of the object in the stored dataset; and

based on the additional data of the object differing from data of the object in the stored dataset, providing the one or more points for gameplay on the interface.

8. The computer-implemented method of claim 7, further comprising:

labeling the additional data of the object with the identification of the object; and

storing the additional data of the object in the database.
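The label-and-store step of claim 8, together with the difference check of claim 7, might look like the following sketch, where the in-memory dict standing in for the database and all identifiers are hypothetical:

```python
def label_and_store(database: dict, object_id: str, additional_data: dict) -> bool:
    """Store `additional_data` under the server-provided `object_id` only
    if it differs from what the database already holds, mirroring the
    second comparison of claim 7; returns True when detail was added."""
    existing = database.get(object_id, {})
    changed = {k: v for k, v in additional_data.items()
               if existing.get(k) != v}
    if not changed:
        return False     # no new detail: no points would be awarded
    # Claim 8: label the additional data with the identification, then store.
    database[object_id] = {**existing, **changed, "label": object_id}
    return True
```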

9. The computer-implemented method of claim 7, wherein providing the command comprises:

providing information indicating a pose of the object at which to obtain the additional data of the object.

10. The computer-implemented method of claim 7, wherein providing the command comprises:

providing a time of day at which to obtain the additional data of the object.

11. The computer-implemented method of claim 1, further comprising:

receiving, from a server, an identification of a room in the environment based on the image of the environment;

determining a category of objects associated with the room in the environment; and

providing a second command indicating a request to obtain, using the one or more sensors of the computing device, data of at least one object in the category of objects associated with the room in the environment.
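One way to realize the room-to-category lookup of claim 11 is a static table; the rooms, categories, and command wording below are invented for illustration:

```python
# Hypothetical mapping from an identified room to object categories
# worth capturing there (not specified by the claim).
ROOM_CATEGORIES = {
    "kitchen": ["appliance", "cookware"],
    "bedroom": ["furniture", "lighting"],
}

def second_command(room):
    """Return a capture request for the first category associated with
    `room`, or None when the room has no associated categories."""
    categories = ROOM_CATEGORIES.get(room)
    if not categories:
        return None
    return f"Capture data of at least one {categories[0]} in the {room}"
```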

12. The computer-implemented method of claim 1, further comprising:

providing, on a display of the computing device, an augmented reality (AR) graphical character overlaid onto a view of the environment; and

causing the AR graphical character to move on the display of the computing device as an indication to travel into an area of the environment for which the stored dataset in the database lacks the one or more details of the environment.
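The AR-character guidance of claim 12 amounts to steering the character toward the area for which the stored dataset lacks the most detail. A minimal sketch, assuming per-area coverage scores in [0, 1] (a representation not found in the claim):

```python
def next_target(coverage: dict):
    """Pick the area with the lowest detail coverage as the AR character's
    destination, so the character leads the player where the stored
    dataset lacks details; returns None if no areas are known."""
    if not coverage:
        return None
    return min(coverage, key=coverage.get)
```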

13. The computer-implemented method of claim 1,

wherein the image of the environment includes a person performing an action,

wherein determining, based on the first comparison to the stored dataset in the database, that the stored dataset lacks one or more details of the environment comprises determining that the stored dataset lacks an identification of the action associated with the image of the person, and

the method further comprises providing in the command a request to obtain the identification of the action.

14. The computer-implemented method of claim 1,

wherein the image of the environment includes a face of a person,

wherein determining, based on the first comparison to the stored dataset in the database, that the stored dataset lacks one or more details of the environment comprises determining that the stored dataset lacks an identification of an emotion associated with the image of the face of the person, and

the method further comprises providing in the command a request to obtain the identification of the emotion.

15. The computer-implemented method of claim 1,

wherein the image of the environment includes a person,

wherein the command further indicates a request for the person to perform one or more actions, and

the method further comprises obtaining, from the one or more sensors of the computing device, the additional data of the environment including additional images of the person performing the one or more actions.

16. A computing device comprising:

a camera;

one or more sensors;

at least one processor;

memory; and

program instructions, stored in the memory, that upon execution by the at least one processor cause the computing device to perform operations comprising:

obtaining, from the camera, an image of an environment;

determining, based on a first comparison of the image to a stored dataset in a database, that the stored dataset lacks one or more details of the environment;

providing a command that indicates a request to obtain additional data of the environment;

in response to the command, obtaining, from the one or more sensors, additional data of the environment;

determining, based on a second comparison to the stored dataset in the database, that the additional data of the environment differs from data of the environment in the stored dataset; and

based on the additional data of the environment differing from data of the environment in the stored dataset, providing one or more points for gameplay on an interface of the computing device.

17. The computing device of claim 16, wherein the image of the environment comprises an image of an object, and wherein the at least one processor executes the program instructions to further perform operations comprising:

receiving, from a server, an identification of the object based on the image of the object;

determining, based on the first comparison to the stored dataset in the database, that the stored dataset lacks one or more details of the object;

providing in the command a further request to obtain additional data of the object;

obtaining, from the one or more sensors of the computing device, additional data of the object;

determining, based on the second comparison to the stored dataset in the database, that the additional data of the object differs from data of the object in the stored dataset; and

based on the additional data of the object differing from data of the object in the stored dataset, providing the one or more points for gameplay on the interface.

18. The computing device of claim 17, wherein providing the command comprises:

providing information indicating a pose of the object at which to obtain the additional data of the object.

19. A non-transitory computer-readable medium having stored therein instructions, that when executed by a computing device, cause the computing device to perform functions comprising:

obtaining, from a camera of the computing device, an image of an environment;

determining, based on a first comparison of the image to a stored dataset in a database, that the stored dataset lacks one or more details of the environment;

providing a command that indicates a request to obtain additional data of the environment;

in response to the command, obtaining, from one or more sensors of the computing device, additional data of the environment;

determining, based on a second comparison to the stored dataset in the database, that the additional data of the environment differs from data of the environment in the stored dataset; and

based on the additional data of the environment differing from data of the environment in the stored dataset, providing one or more points for gameplay on an interface of the computing device.

20. The non-transitory computer-readable medium of claim 19, wherein the functions further comprise:

providing, on a display of the computing device, an augmented reality (AR) graphical character overlaid onto a view of the environment; and

causing the AR graphical character to move on the display of the computing device as an indication to travel into an area of the environment for which the stored dataset in the database lacks the one or more details of the environment.