
1. WO2020117536 - SPACE MODELS FOR MIXED REALITY

Note: Text produced by automatic optical character recognition. Only the PDF version has legal value.


CLAIMS

1. A method performed by one or more computing devices, the method comprising: providing a mixed reality system, the mixed reality system comprising a display, a camera, a three-dimensional model, and a processor, the mixed reality system rendering graphics on the display according to the three-dimensional model and orientations of the camera and/or the display, the camera capturing video data of a physical space;

providing a space model, the space model comprising a space hierarchy, the space hierarchy comprising space nodes structured as a tree, each node comprising respective node metadata, one or more of the nodes comprising rules or code that operate on the metadata, each node comprising a respective node type that corresponds to a type of physical space, wherein metadata of some nodes comprise aggregations of metadata from child nodes in the space hierarchy, some of the nodes further comprising device representations that represent physical and/or virtual sensors associated therewith;

receiving, by the space model, sensor readings from the physical and/or virtual sensors, updating the node metadata accordingly, and reporting the updated node metadata to the mixed reality system; and

receiving, by the mixed reality system, the updated node metadata from the space model, updating the three-dimensional model according thereto, and rendering, on the display, a view of the updated three-dimensional model.
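The space hierarchy of claim 1 can be sketched as a tree of typed nodes whose metadata is updated by sensor readings and aggregated upward from children to parents. A minimal illustration follows; all identifiers (`SpaceNode`, `update_sensor`, `aggregate`) are hypothetical names chosen for the sketch, not terms from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SpaceNode:
    """One node in the space hierarchy; node_type mirrors a type of physical space."""
    name: str
    node_type: str                      # e.g. "building", "floor", "room"
    metadata: Dict[str, float] = field(default_factory=dict)
    sensors: Dict[str, float] = field(default_factory=dict)   # device representations
    children: List["SpaceNode"] = field(default_factory=list)

    def update_sensor(self, sensor_id: str, reading: float) -> None:
        """Receive a sensor reading and update this node's metadata accordingly."""
        self.sensors[sensor_id] = reading
        self.metadata["temperature"] = reading

    def aggregate(self, key: str) -> Optional[float]:
        """Aggregate a metadata value over this node's children (here, the mean)."""
        values = [c.metadata[key] for c in self.children if key in c.metadata]
        if values:
            self.metadata[key] = sum(values) / len(values)
            return self.metadata[key]
        return None

# Build a tiny hierarchy: a floor node containing two room nodes.
room_a = SpaceNode("room-a", "room")
room_b = SpaceNode("room-b", "room")
floor = SpaceNode("floor-1", "floor", children=[room_a, room_b])

room_a.update_sensor("thermo-1", 20.0)
room_b.update_sensor("thermo-2", 24.0)
floor.aggregate("temperature")   # parent metadata aggregates child readings
```

After the two room updates, the floor node's aggregated temperature is the mean of its children's readings, illustrating how "metadata of some nodes comprise aggregations of metadata from child nodes."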

2. A method according to claim 1, wherein the updated node metadata received by the mixed reality system comprises an identifier of a node in the space hierarchy, the method further comprising, according to the identifier, correlating the updated node metadata received by the mixed reality system with the three-dimensional model, wherein the updating of the three-dimensional model is performed according to the correlating.
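The correlating step of claim 2 amounts to a lookup from a node identifier to a location in the three-dimensional model, followed by an update at that location. A minimal sketch under that assumption (the table contents and function names are illustrative only):

```python
from typing import Dict, Tuple

# Hypothetical correlation table: space-hierarchy node identifier -> location
# in the mixed reality system's three-dimensional model.
node_to_model_location: Dict[str, Tuple[float, float, float]] = {
    "room-a": (0.0, 0.0, 0.0),
    "room-b": (5.0, 0.0, 0.0),
}

def apply_metadata_update(model: Dict[Tuple[float, float, float], Dict[str, float]],
                          node_id: str,
                          metadata: Dict[str, float]) -> None:
    """Correlate updated node metadata with the three-dimensional model via the
    node identifier, then update the model at the correlated location."""
    location = node_to_model_location[node_id]     # the correlating step
    model.setdefault(location, {}).update(metadata)

model: Dict[Tuple[float, float, float], Dict[str, float]] = {}
apply_metadata_update(model, "room-a", {"temperature": 21.5})
```

The identifier carried in the updated node metadata is what ties a space-model update to a specific place in the rendered scene.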

3. A method according to claim 1, wherein the rendering provides a user of the mixed reality system with visual feedback about the physical and/or virtual sensors.

4. A method according to claim 1, wherein the readings are from physical sensors, wherein the physical sensors are located in physical spaces that correspond to respective nodes in the space hierarchy that comprise representations of the physical sensors.

5. A method according to claim 4, wherein the rendering comprises generating graphics that represent measurements from the physical sensors, and wherein a cloud comprises a cloud service, and the cloud service provides the space model.

6. A method according to claim 5, wherein the cloud further comprises a metadata service, the method further comprising querying the metadata service by the mixed reality system and receiving the node metadata from the metadata service in response.

7. A method according to claim 1, wherein the readings are from virtual sensors, wherein the virtual sensors are represented by respective objects in the three-dimensional model of the mixed reality system, wherein the virtual sensors are also represented in the space model, the method further comprising outputting a sensor reading from a virtual sensor to the mixed reality system, and rendering a view of the mixed reality system that includes a graphic representation of the virtual sensor and graphics determined according to node metadata from the space model that corresponds to the sensor reading from the virtual sensor.

8. A method performed by one or more computing devices comprising processing hardware and storage hardware, the storage hardware storing instructions configured to, when executed by the processing hardware, perform the method, the method comprising: providing a space model accessible via a data network, the space model comprising a data structure comprising space nodes representing respective spaces, the space nodes organized as a graph, some of the space nodes comprising device representations, each device representation comprising state of a respective virtual or physical device that is updated by a corresponding physical or virtual device, wherein the state of each device representation is updated to include measures from respective virtual and/or physical sensors;

providing a mixed reality system comprising a camera and a three-dimensional model, the mixed reality system rendering views of the three-dimensional model according to locations and orientations of the camera, the mixed reality system in communication with the space model;

receiving, by the mixed reality system, from the space model, sensor metadata, the sensor metadata provided by the space model and comprising or derived from the measures of the virtual and/or physical sensors; and

rendering, by the mixed reality system, views of the three-dimensional model according to locations and orientations of the camera and according to the sensor metadata received from the space model.
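Claim 8's rendering step depends on two inputs at once: the camera's location/orientation and the sensor metadata received from the space model. The toy function below makes that dependency explicit by producing a textual stand-in for a frame; it is a sketch only, not how an actual renderer works.

```python
import math
from typing import Dict, Tuple

def render_view(camera_location: Tuple[float, float, float],
                camera_yaw_radians: float,
                sensor_metadata: Dict[str, float]) -> str:
    """Produce a textual stand-in for a rendered view: the output varies with
    both the camera pose and the sensor metadata from the space model."""
    overlays = [f"{node}: {value:.1f}" for node, value in sorted(sensor_metadata.items())]
    yaw_degrees = math.degrees(camera_yaw_radians)
    return (f"location={camera_location} yaw={yaw_degrees:.0f}deg | "
            + "; ".join(overlays))

frame = render_view((1.0, 0.0, 2.5), math.pi / 2, {"room-a": 20.0, "room-b": 24.0})
```

A real mixed reality system would generate graphics rather than text, but the signature captures the claim's point: views are rendered "according to locations and orientations of the camera and according to the sensor metadata."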

9. A method according to claim 8, further comprising maintaining information that correlates some of the space nodes with locations in the three-dimensional space of the mixed reality system and based thereon correlating the sensor metadata with a location in the three-dimensional space.

10. A method according to claim 8, wherein the mixed reality system aligns a view of the three-dimensional model with a physical space being captured by the camera, and wherein graphics generated for the view are generated according to the sensor metadata.

11. A method according to claim 8, wherein the three-dimensional model includes a model of the virtual sensor, and a view of the three-dimensional model rendered by the mixed reality system includes the model of the virtual sensor, whereby graphics for the view include graphics portraying the virtual sensor.

12. A method according to claim 11, wherein the mixed reality system responds to user inputs directed to the graphics portraying the virtual sensor by generating a new measure that is provided to the space model and that is stored in a device representation of the virtual sensor.
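Claim 12 describes a write-back path: user input directed at a virtual sensor's graphics generates a new measure, which flows to the space model and is stored in that sensor's device representation. A minimal sketch of that flow, with all class and method names invented for illustration:

```python
from typing import Dict

class VirtualSensor:
    """Device representation of a virtual sensor held in the space model."""
    def __init__(self, sensor_id: str) -> None:
        self.sensor_id = sensor_id
        self.state: Dict[str, float] = {}

class SpaceModel:
    def __init__(self) -> None:
        self.devices: Dict[str, VirtualSensor] = {}

    def store_measure(self, sensor_id: str, measure: float) -> None:
        """Store a measure in the device representation for the given sensor."""
        device = self.devices.setdefault(sensor_id, VirtualSensor(sensor_id))
        device.state["measure"] = measure

def on_user_input(space_model: SpaceModel, sensor_id: str, value: float) -> None:
    """Mixed reality system handler: user input directed at the virtual sensor's
    graphics generates a new measure, provided to and stored in the space model."""
    space_model.store_measure(sensor_id, value)

sm = SpaceModel()
on_user_input(sm, "virtual-dial-1", 42.0)
```

The key point is the direction of data flow: unlike a physical sensor, a virtual sensor's measure can originate from the mixed reality system itself and propagate into the space model.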

13. Computer-readable storage hardware storing information configured to cause one or more computing devices to perform a process, the process comprising:

managing a space model that models the containment relationships among places in a physical space, wherein some physical places contain other physical places and the space model includes place nodes and links therebetween that reflect the containment relationships, the links and place nodes comprising a hierarchy corresponding to the hierarchical containment relationships between the physical places, some nodes comprising representations of physical sensors in respective physical places, wherein each physical sensor in a physical place has a respective representation in a place node corresponding to the physical place, and wherein each representation stores a measure from its respectively represented physical sensor;

establishing a communication link between the space model and a mixed reality system;

receiving, by the mixed reality system, via the communication link, sensor values comprising or derived from the measures in the representations; and

rendering, by the mixed reality system, according to the sensor values, graphics of a three-dimensional model of at least part of the physical place.

14. Computer-readable storage hardware according to claim 13, wherein some of the nodes comprise respective user code and/or user rules, the user code and/or user rules evaluating state of the space model and generating events when the state satisfies conditions in the user code and/or the user rules, the events conveying the sensor values to the mixed reality system.
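The user code and/or user rules of claim 14 can be read as predicates over node state: each rule is evaluated against the space model's state, and a satisfied rule generates an event that conveys the sensor values to the mixed reality system. A sketch under that reading (the `Rule` type and the example rule are hypothetical):

```python
from typing import Callable, Dict, List

# A user rule: a condition evaluated over a node's state.
Rule = Callable[[Dict[str, float]], bool]

def evaluate_rules(node_state: Dict[str, float],
                   rules: List[Rule]) -> List[Dict[str, float]]:
    """Evaluate user rules against node state; each satisfied rule generates
    an event conveying the sensor values to the mixed reality system."""
    events: List[Dict[str, float]] = []
    for rule in rules:
        if rule(node_state):
            events.append(dict(node_state))   # the event carries the sensor values
    return events

# Hypothetical rule: fire when a room's temperature exceeds 23 degrees.
too_hot: Rule = lambda state: state.get("temperature", 0.0) > 23.0

events = evaluate_rules({"temperature": 24.5}, [too_hot])
```

Events are generated only "when the state satisfies conditions in the user code and/or the user rules," so an unsatisfied rule produces no traffic to the mixed reality system.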

15. Computer-readable storage hardware according to claim 13, wherein one of the sensor values is obtained from a parent node having child nodes with corresponding device representations, and wherein that sensor value comprises an aggregation of measures from the child device representations, and wherein the mixed reality system provides a user thereof with visual feedback of the measures from the physical sensors.