1. (WO2019028479) SYSTEMS, METHODS AND APPARATUSES FOR DEPLOYMENT AND TARGETING OF CONTEXT-AWARE VIRTUAL OBJECTS AND BEHAVIOR MODELING OF VIRTUAL OBJECTS BASED ON PHYSICAL PRINCIPLES
Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters

CLAIMS

What is claimed is:

1. A method of an augmented reality environment, the method, comprising:

presenting a depiction of an object in the augmented reality environment, the depiction of the object being observable in the augmented reality environment;

identifying a physical law of the real world, in accordance with which, behavioral characteristics of the object in the augmented reality environment are to be governed;

wherein, the physical law is identified based on one or more of:

real world characteristics of a real world environment associated with the augmented reality environment;

virtual characteristics of a virtual environment in the augmented reality environment.

2. The method of claim 1,

wherein the object is presented in the virtual environment;

further wherein, the virtual environment is observed by a human user to be overlaid or superimposed over a representation of the real world environment, in the augmented reality environment.

3. The method of claim 1, further comprising:

updating the depiction of the object in the augmented reality environment, based on the physical law.

4. The method of claim 1,

wherein the real world characteristics include one or more of,

(i) natural phenomenon of the real world environment, and characteristics of the natural phenomenon;

(ii) physical things of the real world environment, and an action, behavior or characteristics of the physical things;

(iii) a human user in the real world environment, and action or behavior of the human user.

5. The method of claim 1,

wherein the virtual world characteristics of the virtual environment include one or more of,

(i) virtual phenomenon of the virtual environment;

(ii) characteristics of a natural phenomenon which the virtual phenomenon emulates;

(iii) virtual things of the virtual world environment, and action, behavior or characteristics of the virtual things;

(iv) a virtual actor in the virtual world environment, and action or behavior of the virtual actor.

6. The method of claim 1, wherein, the behavioral characteristics includes properties or actions of a real world object which the object depicts or represents.

7. The method of claim 1, wherein, the behavioral characteristics govern, one or more of, proactive behavior, reactive behavior, steady state action of the object in the augmented reality environment.

8. The method of claim 1, further comprising, generating a behavioral profile for the object modeled based on one or more physical laws of the real world, wherein, the behavioral profile includes the behavioral characteristics.

9. The method of claim 8, wherein, the physical laws include, one or more of, laws of nature, a law of gravity, a law of motion, electrical properties, magnetic properties, optical properties, Pascal's principle, laws of reflection or refraction, a law of thermodynamics, Archimedes' principle or a law of buoyancy, mechanical properties of materials; wherein, the mechanical properties of materials include, one or more of: elasticity, stiffness, yield, ultimate tensile strength, ductility, hardness, toughness, fatigue strength, endurance limit.
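The behavior modeling recited in claims 8-9 can be sketched in code as follows. This is a minimal, illustrative Python sketch, not the implementation described in the specification: a behavioral profile is modeled on one physical law of the real world (a constant-gravity law of motion), and the depiction of the object is updated based on that law (claim 3). All class and attribute names are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field

GRAVITY = 9.81  # m/s^2, downward; the physical law governing the object


@dataclass
class BehavioralProfile:
    """Behavioral characteristics modeled on a physical law (claim 8)."""
    mass: float = 1.0
    velocity: float = 0.0  # vertical velocity in m/s (positive = up)


@dataclass
class VirtualObject:
    height: float  # meters above the depicted ground plane
    profile: BehavioralProfile = field(default_factory=BehavioralProfile)

    def step(self, dt: float) -> None:
        """Update the depiction of the object based on the physical law."""
        self.profile.velocity -= GRAVITY * dt
        self.height = max(0.0, self.height + self.profile.velocity * dt)
        if self.height == 0.0:  # object comes to rest on the ground plane
            self.profile.velocity = 0.0


# A virtual object released 10 m above the ground falls under gravity.
obj = VirtualObject(height=10.0)
for _ in range(100):
    obj.step(0.1)
```

The same profile could be swapped for any of the other laws enumerated in claim 9 (buoyancy, reflection, elasticity, and so on) without changing the update loop.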

10. A system, comprising:

means for, generating a depiction of a virtual object in an augmented reality environment, the depiction of the object being detectable by human perception in the augmented reality environment;

means for, using a physical principle of the real world to model behavioral characteristics of the virtual object in the augmented reality environment;

means for, updating the depiction of the object in the augmented reality environment, based on the physical principle.

11. The system of claim 10,

wherein, the physical principle is identified based on one or more of:

real world characteristics of a real world environment associated with the augmented reality environment;

virtual characteristics of a virtual environment in the augmented reality environment;

wherein, the depiction of the object that is updated in the augmented reality environment, includes one or more of, a visual update, an audible update, a sensory update, a haptic update, a tactile update and an olfactory update.

12. The system of claim 10,

wherein, the behavioral characteristics includes properties or actions of a real world object which the virtual object depicts or represents.

13. The system of claim 12,

wherein, the virtual object represents a virtual place;

wherein a human user of the augmented reality environment, is able to enter the virtual place represented by the virtual object;

wherein, on entering the virtual object, the virtual place within the virtual object is accessible by the human user.

14. The system of claim 13,

wherein, the virtual object further comprises interior structure or interior content;

wherein, the interior content is consumable by a human user, on entering the virtual object;

wherein, the internal structure is perceivable by the human user, on entering the virtual object.
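The enterable virtual place of claims 13-14 can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the specification's implementation: the names `VirtualPlace`, `enter`, and `consume` are hypothetical, and the "interior content" is represented as a simple list.

```python
class VirtualPlace:
    """A virtual object representing a place a human user may enter."""

    def __init__(self, interior_structure, interior_content):
        self.interior_structure = interior_structure  # perceivable on entry
        self.interior_content = list(interior_content)  # consumable on entry
        self.occupants = set()

    def enter(self, user: str) -> None:
        """The user enters; the place within the object becomes accessible."""
        self.occupants.add(user)

    def consume(self, user: str):
        """Interior content is consumable by a user who has entered."""
        if user not in self.occupants:
            raise PermissionError("user must enter the virtual place first")
        return self.interior_content.pop(0) if self.interior_content else None


shop = VirtualPlace(interior_structure="two rooms",
                    interior_content=["promotional video"])
shop.enter("alice")
item = shop.consume("alice")
```

The guard in `consume` reflects the claim structure: interior content is only consumable *on entering* the virtual object.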

15. An apparatus to present virtual content in a target environment, the apparatus, comprising:

a processor;

memory having stored thereon instructions, which when executed by the processor, cause the processor to:

detect an indication that a content segment being consumed in the target environment has virtual content associated with it;

present the virtual content for consumption in the target environment;

wherein, the virtual content is contextually relevant to the target environment.

16. The apparatus of claim 15, wherein, the processor is further operable to:

capture contextual information for the target environment;

generate, the virtual content that is presented for consumption, based on contextual metadata in the contextual information;

wherein, the virtual content that is associated with the content segment and presented in the target environment is generated on demand.

17. The apparatus of claim 15, wherein, the processor is further operable to:

capture contextual information for the target environment;

retrieve the virtual content that is presented for consumption, based on contextual metadata in the contextual information.

18. The apparatus of claim 17,

wherein, the virtual content is retrieved at least in part from a remote repository in response to querying the remote repository using the contextual metadata.
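The retrieval path of claims 16-18 can be sketched as follows. This is a hedged, illustrative Python sketch: contextual metadata captured from the target environment is used as a query key against a repository. An in-memory dictionary stands in for the remote repository named in claim 18, and every key, function name, and field is an assumption introduced for illustration.

```python
# Stand-in for the remote repository of claim 18, keyed by contextual metadata.
REPOSITORY = {
    ("print_ad", "en"): "3D product model",
    ("tv_ad", "en"): "interactive overlay",
}


def capture_contextual_metadata(target_environment: dict) -> tuple:
    """Reduce captured contextual information to the metadata used to query."""
    return (target_environment["content_type"], target_environment["language"])


def retrieve_virtual_content(target_environment: dict):
    """Query the repository using the contextual metadata (claim 18)."""
    key = capture_contextual_metadata(target_environment)
    return REPOSITORY.get(key)


content = retrieve_virtual_content({"content_type": "print_ad",
                                    "language": "en"})
```

Claim 16's on-demand *generation* would replace the dictionary lookup with a call that synthesizes content from the same metadata; the capture step is shared.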

19. The apparatus of claim 15, wherein, the processor is further operable to:

capture contextual information for the target environment;

wherein, contextual information includes, one or more of:

identifier of a device used to consume the content segment in the target environment;

timing data associated with consumption of the content segment in the target environment;

software on the device;

cookies on the device;

indications of other virtual objects on the device.

20. The apparatus of claim 15, wherein, the processor is further operable to:

capture contextual information for the target environment;

wherein, contextual information includes, one or more of:

identifier of a human user in the target environment;

timing data associated with consumption of the content segment in the target environment;

interest profile of the human user;

behavior patterns of the human user;

pattern of consumption of the content segment;

attributes of the content segment.

21. The apparatus of claim 15, wherein, the processor is further operable to:

capture contextual information for the target environment;

wherein, contextual information includes, one or more of:

pattern of consumption of the content segment;

attributes of the content segment;

location data associated with the target environment;

timing data associated with the consumption of the content segment.

22. The apparatus of claim 15, wherein,

the content segment includes a segment of one or more of, content in a print magazine, a billboard, a print ad, a board game, a card game, printed text, any printed document.

23. The apparatus of claim 15, wherein,

the content segment includes a segment of one or more of, TV production, TV ad, radio broadcast, a film, a movie, a print image or photograph, a digital image, a video, digitally rendered text, a digital document, any digital production, a digital game, a webpage, any digital publication.

24. The apparatus of claim 15, wherein,

the indication that the content segment being consumed in the target environment has virtual content associated with it includes, one or more of:

a pattern of data embedded in the content segment;

visual markers in the content segment, the visual markers being perceptible or imperceptible to a human user;

sound markers or a pattern of sound embedded in the content segment, the sound markers being perceptible or imperceptible to a human user.

25. The apparatus of claim 15, wherein,

the indication is determined through analysis of content type of the content segment being consumed.
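The indication of claim 24 can be sketched as follows. This is a minimal, illustrative Python sketch under an assumption the claims do not make: the embedded indication is modeled as a fixed byte pattern in the content segment. The marker bytes and function name are hypothetical; the claims equally cover visual and sound markers.

```python
# Hypothetical embedded data pattern marking a segment as having
# associated virtual content (claim 24, first alternative).
MARKER = b"\x00ARVC\x00"


def has_virtual_content(content_segment: bytes) -> bool:
    """Detect the indication that the segment has associated virtual content."""
    return MARKER in content_segment


# A content segment with the pattern embedded mid-stream.
segment = b"page text..." + MARKER + b"...more page text"
```

For the visual and sound markers of claim 24, the same boolean test would be preceded by image or audio analysis; claim 25's alternative replaces the marker test entirely with analysis of the segment's content type.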

26. A machine-readable storage medium, having stored thereon instructions, which when executed by a processor, cause the processor to implement a method to provide an augmented reality workspace in a physical space, the method, comprising:

rendering a virtual object as a user interface element of the augmented reality workspace;

wherein, the virtual object is rendered in a first animation state, in accordance with state information associated with the virtual object;

wherein, the user interface element of the augmented reality workspace is rendered as being present in the physical space and able to be interacted with in the physical space;

responsive to actuation of the virtual object, transitioning the virtual object into a second animation state in the augmented reality workspace in accordance with the state information associated with the virtual object;

further comprising, changing a position or orientation of the virtual object in the augmented reality workspace, responsive to a shift in view perspective of the augmented reality workspace.

27. The method of claim 26, further comprising,

rendering objects contained in the virtual object, or linked objects of the virtual object in the second animation state; or

rendering objects contained in the virtual object, or linked objects of the virtual object in a third animation state responsive to further activation of the virtual object, in accordance with the state information.

28. The method of claim 26, wherein, the shift in the view perspective is triggered by a motion of, one or more of:

a user of the augmented reality workspace;

a device used to access the augmented reality workspace;

further comprising, detecting a speed or acceleration of the motion;

wherein, acceleration or speed of the change of the position or orientation of the virtual object depends on a speed or acceleration of the motion of the user or the device.

29. The method of claim 26, wherein, the actuation is detected from one or more of, an image based sensor, a haptic or tactile sensor, a sound sensor or a depth sensor.

30. The method of claim 26, wherein, the user interface element represented by the virtual object includes one or more of, a folder, a file, a data record, a document, an application, a system file, a trash can, a pointer, a menu, a task bar, a launch pad, a dock, a lasso tool.

31. The method of claim 26, wherein the actuation is detected from input submitted via, one or more of, a virtual laser pointer, a virtual pointer, a lasso tool, a gesture sequence of a human user in the physical space.

32. The method of claim 26, wherein, the augmented reality workspace is depicted in an augmented reality interface via one or more of, a mobile phone, glasses, a smart lens and a headset device; wherein, the augmented reality workspace is depicted in 3D in the physical space and the virtual object is viewable in substantially 360 degrees.
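The animation-state transitions of claims 26-28 can be sketched as follows. This is an illustrative Python sketch, not the specification's implementation: the stored state information is modeled as a transition table, actuation advances the UI element from its first to its second animation state (and on further activation to a third), and a view-perspective shift changes its position. All state names and method names are assumptions.

```python
# Stand-in for the state information associated with the virtual object.
STATE_INFO = {
    "first": "second",   # actuation: first -> second animation state
    "second": "third",   # further activation: second -> third (claim 27)
}


class WorkspaceObject:
    """A virtual object rendered as a UI element of the AR workspace."""

    def __init__(self):
        self.animation_state = "first"   # rendered in the first state
        self.position = (0.0, 0.0, 0.0)  # location in the physical space

    def actuate(self) -> None:
        """Transition per the state information, if a transition exists."""
        self.animation_state = STATE_INFO.get(self.animation_state,
                                              self.animation_state)

    def on_view_shift(self, dx: float, dy: float, dz: float) -> None:
        """Change position responsive to a shift in view perspective."""
        x, y, z = self.position
        self.position = (x + dx, y + dy, z + dz)


ui_element = WorkspaceObject()
ui_element.actuate()                    # first -> second animation state
ui_element.on_view_shift(1.0, 0.0, 0.0)  # viewer moved; object repositioned
```

Claim 28's speed-dependent behavior would scale the `on_view_shift` deltas by the detected speed or acceleration of the user or device motion.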