
WO2020154971 - ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR


Description

Title of Invention: ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR

BACKGROUND

Technical Field

[0001]
The present invention relates to an electronic device and a control method therefor.
[0002]
Related Art
[0003]
Augmented reality (AR) technology can determine the shooting position and angle of a camera in real time and add a graphic object to an image captured by the camera in real time. Electronic devices such as Microsoft’s HoloLens may implement such AR technology. For example, when a user wears an electronic device on his/her body, e.g., wears HoloLens on his/her head, the user may view a graphical operating interface provided by HoloLens and may control a cursor in the graphical operating interface by moving the head. Therefore, when the user desires to select a very small object, such as a button, in the graphical operating interface, he/she may need to accurately control the movement of the head so as to move the cursor to the very small object and keep the cursor on it. In addition, for users who are unable to suppress trembling of the body (including the head) due to disease or other reasons, such accurate control may be difficult.
[0004]
SUMMARY
[0005]
The present invention is intended to address the foregoing and/or other problems and provide an electronic device and a control method therefor.
[0006]
According to an exemplary embodiment, an electronic device includes: a control unit and an operating environment generation unit configured to generate an operating environment for a user to operate the electronic device, where the operating environment generation unit includes: a tag generation unit configured to generate a tag in the operating environment; a graphic object generation unit configured to generate a graphic object in the operating environment; an active area generation unit configured to generate an active area in the operating environment corresponding to the graphic object, where an area in the operating environment occupied by the active area overlaps an area in the operating environment occupied by the graphic object corresponding to the active area, where the control unit is configured to activate the graphic object corresponding to the active area when the tag in the operating environment is positioned in the occupied area of the active area in the operating environment and outside the occupied area of the graphic object corresponding to the active area in the operating environment. Therefore, the graphic object may be activated even if the user is unable to accurately control the tag to be positioned at the graphic object he/she desires to activate.
[0007]
The electronic device further includes a display unit configured to display the operating environment generated by the operating environment generation unit, the tag generated by the tag generation unit, and the graphic object generated by the graphic object generation unit. For example, the display unit may not display the active area. In other words, the active area may be invisible to the user.
[0008]
The active area generation unit is configured to generate the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion, where the main active area portion has a shape the same as that of the corresponding graphic object, and an area in the operating environment occupied by the main active area portion overlaps an area in the operating environment occupied by the corresponding graphic object, and the area in the operating environment occupied by the peripheral active area portion is at the periphery of the area in the operating environment occupied by the main active area portion.
[0009]
The control unit is configured to determine whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and control the active area generation unit to generate the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state. Accordingly, generation of the active area, e.g., the peripheral active area portion may be controlled according to the operating logic. Therefore, it is possible to avoid faulty operations caused by the peripheral active area portion while saving computing power.
[0010]
The electronic device further includes a gesture sensing unit configured to sense a gesture of the user operating the electronic device and send gesture information on the sensed gesture of the user to the control unit. The control unit is configured to determine the gesture of the user according to the gesture information sensed by the gesture sensing unit and determine a position of the tag in the operating environment according to the gesture of the user.
[0011]
In the case that the position of the tag in the operating environment is in an area of the graphic object in the operating environment and thus the graphic object is activated, when the control unit determines that the gesture of the user is a deactivation gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the graphic object, where the deactivation gesture includes a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment. Therefore, the graphic object may be deactivated even if the gesture of the user is such that the tag leaves the graphic object but remains in the peripheral active area portion corresponding to the graphic object.
[0012]
The graphic object generation unit is configured to generate a first graphic object and a second graphic object adjacent to each other in the operating environment. In the case that the first graphic object is activated, when the control unit determines that the gesture of the user is an object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the first graphic object and activates the second graphic object. Here, the object switching gesture includes a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude, and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as that of the second motion. In addition, in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is a gesture other than the object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit maintains the activation of the first graphic object.
[0013]
The electronic device is an augmented reality (AR) device. For example, the electronic device includes a body including the operating environment generation unit and the control unit, and a head-mounted component configured to house the body and to be worn on a head of the user operating the electronic device.
[0014]
According to another exemplary embodiment, a method for controlling an electronic device is provided, the method including: generating, in an operating environment for a user to operate the electronic device, an active area corresponding to a graphic object in the operating environment, where the active area overlaps the graphic object corresponding to the active area; and activating the graphic object corresponding to the active area when a tag is positioned in the active area and outside the graphic object corresponding to the active area.
[0015]
The step of generating the active area includes generating the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion, where the main active area portion has a shape the same as that of the corresponding graphic object, and an area in the operating environment occupied by the main active area portion overlaps an area in the operating environment occupied by the corresponding graphic object, and the area in the operating environment occupied by the peripheral active area portion is at the periphery of the area in the operating environment occupied by the main active area portion.
[0016]
The step of generating the active area includes determining whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and generating the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
[0017]
The method further includes: sensing a gesture of the user operating the electronic device to obtain gesture information on the sensed gesture; and determining the gesture of the user according to the gesture information and determining a position of the tag in the operating environment according to the gesture of the user.
[0018]
The method further includes: in the case that the position of the tag in the operating environment is in an area of the graphic object in the operating environment and thus the graphic object is activated, when the gesture of the user is determined to be a deactivation gesture according to the gesture information sensed by the gesture sensing unit, deactivating the graphic object, where the deactivation gesture includes a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment.
[0019]
The step of generating the graphic object includes generating a first graphic object and a second graphic object adjacent to each other in the operating environment; and the method further includes: in the case that the first graphic object is activated, when the gesture of the user is determined to be an object switching gesture according to the gesture information sensed by the gesture sensing unit, deactivating the first graphic object and activating the second graphic object, where the object switching gesture includes a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
[0020]
The method further includes generating the operating environment for the user to operate the electronic device; generating the tag in the operating environment; and generating the graphic object in the operating environment.
[0021]
According to another exemplary embodiment, an electronic equipment is provided, the electronic equipment including at least one processor; and a memory connected to the at least one processor, the memory having instructions stored therein which when executed by the at least one processor cause the electronic equipment to perform the method as described above.
[0022]
According to another exemplary embodiment, a non-transitory machine readable medium is provided, where the non-transitory machine readable medium has computer executable instructions stored thereon which when executed cause at least one processor to perform the method as described above.
[0023]
According to another exemplary embodiment, a computer program is provided, where the computer program includes computer executable instructions which when executed cause at least one processor to perform the method as described above.
[0024]
According to the exemplary embodiments, therefore, the graphic object may be activated even if the user is unable to accurately control the tag to be positioned at the graphic object he/she desires to activate.

BRIEF DESCRIPTION OF THE DRAWINGS

[0025]
The following figures are intended to give schematic illustrations and explanations of the present invention but are not intended to limit the scope of the present invention. In the drawings:
[0026]
FIG. 1 is a schematic diagram of an electronic device according to an exemplary embodiment;
[0027]
FIG. 2 is a schematic diagram of operations of an electronic device according to an exemplary embodiment;
[0028]
FIG. 3 is a schematic diagram of operations of an electronic device according to an exemplary embodiment;
[0029]
FIG. 4 is a flowchart illustrating a method for controlling an electronic device according to an exemplary embodiment;
[0030]
FIG. 5 is a block diagram of an electronic equipment according to an exemplary embodiment.
[0031]
Reference signs in the drawings:
[0032]
100 control unit;
[0033]
300 operating environment generation unit;
[0034]
500 display unit;
[0035]
700 gesture sensing unit.

DETAILED DESCRIPTION

[0036]
Specific embodiments of the present invention are now described with reference to the drawings for a clearer understanding of the technical features, objectives and effects of the present invention.
[0037]
FIG. 1 is a schematic diagram of an electronic device according to an exemplary embodiment. As shown in FIG. 1, the electronic device according to the exemplary embodiment includes a control unit 100 and an operating environment generation unit 300.
[0038]
The control unit 100 may be implemented to include a general-purpose or special-purpose processing device, e.g., a central processing unit (CPU), a programmable logic controller (PLC), etc. The control unit 100 may implement the functions and operations described in detail below by executing particular programs or code.
[0039]
The operating environment generation unit 300 may generate an operating environment for a user to operate the electronic device, e.g., a virtual operating environment. Such an operating environment may provide a virtual operating space for the user, may include a preset operating logic, and may incorporate functions of the electronic device. Therefore, the user may control and use the electronic device by performing operations in the operating environment. Here, the operating environment generation unit 300 may be implemented to include a general-purpose or special-purpose processing device, e.g., a central processing unit (CPU), a graphics processing unit (GPU), a programmable logic controller (PLC), etc. In one exemplary embodiment, the operating environment generation unit 300 may be implemented together with the control unit 100 by one or more general-purpose or special-purpose processing devices. The operating environment generation unit 300 may implement the functions and operations described in detail below by executing particular programs or code.
[0040]
In one exemplary embodiment, the electronic device may be implemented as an augmented reality (AR) device. As such, the augmented reality device may include a camera (not shown) to capture surroundings around the user and may include a display unit 500 (which will be described in detail below) to display the captured surroundings in real time. Also, the augmented reality device may display the operating environment generated by the operating environment generation unit 300 and the captured surroundings together on the display unit 500.
[0041]
Additionally, in another exemplary embodiment, the electronic device may be implemented as a wearable electronic device. For example, the electronic device may be worn by the user on his/her head. As shown in FIG. 1, the electronic device may include a body 10 and a head-mounted component 30. The body 10 may include the control unit 100, the operating environment generation unit 300, and/or the display unit 500. The head-mounted component 30 may house the body and may be worn on a head of the user operating the electronic device. For example, as shown in FIG. 1, the head-mounted component 30 may include a headband that may encircle the head.
[0042]
The operating environment generated by the operating environment generation unit 300 may include a tag and a graphic object that may be activated by the tag. To this end, the operating environment generation unit 300 may include a tag generation unit 310, a graphic object generation unit 330, and an active area generation unit 350.
[0043]
The tag generation unit 310 may generate a tag in the operating environment, e.g., a cursor as shown in FIG. 1. In one exemplary embodiment, for the tag generated by the tag generation unit 310, its position in the operating environment may be changed according to a gesture of the user, which will be described in more detail below.
[0044]
The graphic object generation unit 330 may generate the graphic object in the operating environment, as shown by A and B in FIG. 1. The graphic object may be defined to incorporate predetermined functions or purposes according to a predetermined operating logic of the operating environment. In one exemplary embodiment, a graphic object A and a graphic object B may be defined as "buttons", and thus triggering (e.g., "pressing") the button A and the button B may be defined to perform different functions. For example, the button B may be defined as a "next step" button. That is, when the button B is "pressed", a next step of an ongoing program in the operating environment is performed. Similarly, the button A may be defined as a "cancel" button. That is, when the button A is "pressed", an ongoing program in the operating environment is canceled or stopped.
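As a hedged illustration of the operating logic just described, the dispatch from a "pressed" button to its defined function can be sketched as follows. The identifiers, the returned messages, and the dictionary-based dispatch are illustrative assumptions, not the patent's implementation.

```python
def cancel_program():
    # Button A: cancel or stop an ongoing program in the operating environment.
    return "program canceled"

def next_step():
    # Button B: perform the next step of the ongoing program.
    return "next step performed"

# The operating logic binds each graphic object to the function it triggers.
BUTTON_ACTIONS = {"A": cancel_program, "B": next_step}

def press(button_id):
    # Triggering ("pressing") a button dispatches to its defined function.
    return BUTTON_ACTIONS[button_id]()

print(press("B"))  # next step performed
```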
[0045]
The active area generation unit 350 may generate an active area in the operating environment corresponding to the graphic object. For example, the active area may overlap the graphic object to which it corresponds. Here, when the user desires to perform a function corresponding to one graphic object (e.g., graphic object B), he/she may control the cursor to move to the graphic object B. At this time, the cursor is in the active area because the active area overlaps the graphic object B. When the control unit 100 determines that the cursor is in the active area, the control unit 100 may activate the graphic object B and may control the electronic device to provide feedback informing the user of the activation of the graphic object B (e.g., its being "selected"). Here, the feedback may include a visual feedback (e.g., a change in a shape and/or color of the graphic object B), an audible feedback (e.g., a warning tone), a tactile feedback (e.g., a vibration), or a combination thereof. In this manner, when the user determines from the feedback that he/she has selected the graphic object B, the user may further trigger (e.g., "press") the graphic object B to perform the function corresponding to the graphic object. In other words, when the tag is in the active area, the graphic object corresponding to the active area may be activated, which will be described in detail below with reference to FIG. 2.
[0046]
In addition, the electronic device according to the exemplary embodiment may further include a display unit 500 to display the operating environment generated by the operating environment generation unit 300, the tag generated by the tag generation unit 310 and the graphic object generated by the graphic object generation unit 330. Moreover, the active area generated by the active area generation unit 350 may be invisible to the user. For example, the active area may not be displayed on the display unit 500.
[0047]
FIG. 2 is a schematic diagram of operations of an electronic device according to an exemplary embodiment. In FIG. 2, the solid line shows a graphic object B in an operating environment, and the dashed line shows an active area in the operating environment corresponding to the graphic object B. In particular, the active area generation unit 350 may generate the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion. The main active area portion may have a shape the same as that of the corresponding graphic object and may overlap the corresponding graphic object. The peripheral active area portion may be located at the periphery of the main active area portion. For example, with reference to FIG. 2, the main active area portion corresponding to the graphic object B may coincide with the graphic object B and is therefore represented by the solid line, while the peripheral active area portion lies between the solid line and the dashed line. Here, while the active area is described as including the main active area portion and the peripheral active area portion, a person skilled in the art will understand that the two portions may be mutually independent active areas each corresponding to the graphic object, or alternatively may be a portion overlapping the graphic object and a portion at the periphery of that overlapped portion within the same active area.
[0048]
As stated above, the active area is a specific area in the operating environment: when the tag is in the active area, the graphic object corresponding to the active area is activated. Therefore, in the case that the active area generated by the active area generation unit 350 includes the main active area portion and the peripheral active area portion, when the tag is at the graphic object, the tag is in the main active area portion and therefore may activate the graphic object; and when the tag is outside the graphic object but remains in the peripheral active area portion, as shown in FIG. 2, the graphic object may also be activated. In other words, when the control unit 100 determines that the tag in the operating environment is positioned in the active area and outside the graphic object corresponding to the active area, the control unit 100 may activate the graphic object corresponding to the active area. Therefore, an active area including the peripheral active area portion may allow the user to activate the graphic object by merely positioning the tag close to the graphic object, without necessarily positioning the tag at the graphic object. Accordingly, when it is difficult for the user to accurately control the tag to arrive at and/or remain at the graphic object, e.g., because of a relatively small size of the graphic object, a disease, or other reasons, the user may still easily activate the graphic object.
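The activation rule described above can be sketched as a simple hit test. The following is a minimal sketch assuming axis-aligned rectangular objects; the `Rect` type, the 10-unit margin, and all other names are illustrative assumptions rather than the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def expanded(self, margin: float) -> "Rect":
        # Main portion plus a peripheral band of the given margin on all sides.
        return Rect(self.x - margin, self.y - margin,
                    self.w + 2 * margin, self.h + 2 * margin)

def is_activated(graphic_object: Rect, tag_x: float, tag_y: float,
                 margin: float = 10.0) -> bool:
    # The graphic object activates when the tag is anywhere in the active
    # area (main portion OR peripheral band), not only on the object itself.
    return graphic_object.expanded(margin).contains(tag_x, tag_y)

button_b = Rect(100, 100, 40, 20)
print(is_activated(button_b, 120, 110))  # tag on the object itself -> True
print(is_activated(button_b, 95, 110))   # tag only in peripheral band -> True
print(is_activated(button_b, 0, 0))      # tag far away -> False
```

A larger margin makes the object easier to reach for users who cannot hold the tag steady, at the cost of possible overlap between neighboring active areas.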
[0049]
In another exemplary embodiment, the control unit 100 may control the active area generation unit 350 to generate the active area. In particular, the control unit 100 may determine whether the graphic object is in a to-be-activated state according to a current operating logic of the operating environment. When the graphic object is determined to be in the to-be-activated state, the control unit 100 may control the active area generation unit 350 to generate the active area, e.g., the peripheral active area portion, for the graphic object in the to-be-activated state. For example, in the case that an installation program is being executed in the operating environment, the control unit 100 may determine that the graphic object B (i.e., the "next step" button) is in the to-be-activated state according to the operating logic of the installation program being executed. In this case, the control unit 100 may control the active area generation unit 350 to generate the active area including the peripheral active area portion for the graphic object B.
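This state-dependent generation can be sketched as follows. The function name, the set-based representation of the to-be-activated state, and the margin value are illustrative assumptions.

```python
def build_active_areas(objects, to_be_activated, margin=10.0):
    """Map each object id to its peripheral-band margin: a band is generated
    only for objects the operating logic marks as to-be-activated, which
    avoids faulty operations and saves computation for the rest."""
    areas = {}
    for obj_id in objects:
        areas[obj_id] = margin if obj_id in to_be_activated else 0.0
    return areas

# During an installation program, only the "next step" button B is
# in the to-be-activated state.
areas = build_active_areas(["A", "B"], to_be_activated={"B"})
print(areas)  # {'A': 0.0, 'B': 10.0}
```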
[0050]
Returning to FIG. 1, the electronic device may further include a gesture sensing unit 700. The gesture sensing unit 700 may sense a gesture of the user operating the electronic device. Here, the gesture of the user may include motions of various body parts of the user. For example, when the electronic device is implemented as a device for wearing on the user’s head, such as Microsoft’s HoloLens, the gesture sensing unit 700 may sense a motion of the electronic equipment worn on the user’s head that moves with the user’s head, as the gesture of the user. To this end, the gesture sensing unit 700 may include various sensors, such as an acceleration sensor, a geomagnetic sensor, a gyroscope, etc.
[0051]
The gesture sensing unit 700 may send gesture information on the sensed gesture of the user to the control unit 100. Upon receiving the gesture information sensed by the gesture sensing unit 700, the control unit 100 may determine the gesture of the user according to the sensed gesture information and may determine the position of the tag in the operating environment according to the determined gesture of the user. For example, the control unit 100 may generate a tag position control command according to the determined gesture of the user and send the tag position control command to the tag generation unit 310. The tag generation unit 310 may change the position of the tag in the operating environment according to the tag position control command. For example, when the electronic device is implemented as a device worn on the user’s head, such as Microsoft’s HoloLens, the gesture sensing unit 700 may sense a motion of the electronic equipment worn on the user’s head that moves as the user turns his/her head to the left, and may send the sensed gesture information on the left turning of the user’s head to the control unit. Here, the gesture information may include an acceleration of the motion of the user’s head, a duration of the motion, etc. The control unit 100 may then determine that the gesture of the user is a left turning of the head according to the received gesture information, and therefore may generate a tag position control command for controlling the tag to move to the left. The control unit 100 may then send the command to the tag generation unit 310. The tag generation unit 310 may control the tag to move to the left in the operating environment according to the command and thus, for example, position the tag in the peripheral active area portion of the active area corresponding to a graphic object.
At this time, the control unit 100 may activate the graphic object corresponding to the active area based on the fact that the tag is positioned in the peripheral active area portion, as shown in FIG. 2.
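The mapping from a sensed head rotation to a new tag position can be sketched as below, assuming a simple linear gain. The sensitivity constant and the sign convention (negative yaw for a left turn) are illustrative assumptions, not values from the patent.

```python
PIXELS_PER_DEGREE = 10.0  # hypothetical cursor sensitivity

def update_tag_position(tag_x, tag_y, yaw_deg, pitch_deg):
    """Map a head rotation (in degrees) to a new tag position in the
    operating environment: a left turn (negative yaw) moves the tag left,
    an upward tilt (positive pitch) moves it up."""
    return (tag_x + yaw_deg * PIXELS_PER_DEGREE,
            tag_y - pitch_deg * PIXELS_PER_DEGREE)

# A 1-degree left turn moves the tag 10 units to the left.
print(update_tag_position(200.0, 150.0, -1.0, 0.0))  # (190.0, 150.0)
```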
[0052]
According to another exemplary embodiment, the control unit 100 may control the operating environment according to the specific gesture of the user sensed by the gesture sensing unit 700. Here, the specific gesture of the user may include a deactivation gesture. That is, when the tag is positioned at the graphic object and thus the graphic object is activated, the user may perform a first motion within a time less than a first predetermined time, such that the gesture sensing unit 700 may sense such gesture of the user and send gesture information on such sensed deactivation gesture to the control unit 100, and the control unit 100 may thus control the tag generation unit 310 to position the tag outside the graphic object. For example, in the exemplary embodiment described above with reference to FIG. 2, the user may move his/her head to the left quickly within a time less than 1 second, such that the tag is moved out of the graphic object with such motion of the user. Here, the first predetermined time may be 1 second, and the first motion may be left turning of the head with an amplitude sufficient to move the tag out of the graphic object. At this time, the control unit 100 may determine that the gesture of the user is the deactivation gesture according to information corresponding to the motion of the user sensed by the gesture sensing unit 700 and information which indicates that the tag generation unit 310 causes the tag to change its position as a function of the gesture. Therefore, the control unit 100 may deactivate the graphic object according to the user’s deactivation gesture even if the tag is positioned in the peripheral active area portion of the active area.
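A minimal sketch of this deactivation check follows, using the 1-second example value above and boolean inside/outside tests of the tag position; the function name and parameterization are illustrative assumptions.

```python
FIRST_PREDETERMINED_TIME = 1.0  # seconds, example value from the text

def is_deactivation_gesture(duration_s, tag_was_inside, tag_now_inside):
    """A motion counts as a deactivation gesture when it completes within
    the first predetermined time and moves the tag from inside the graphic
    object's area to outside it (even if the tag remains in the peripheral
    active area portion)."""
    return (duration_s < FIRST_PREDETERMINED_TIME
            and tag_was_inside and not tag_now_inside)

print(is_deactivation_gesture(0.4, True, False))  # quick exit -> True
print(is_deactivation_gesture(2.0, True, False))  # slow drift -> False
```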
[0053]
FIG. 3 is a schematic diagram of operations of an electronic device according to an exemplary embodiment. According to the exemplary embodiment as shown in FIG. 3, the control unit 100 may switch activation of one graphic object to activation of another graphic object according to a specific object switching gesture of a user. As shown in FIG. 3, a graphic object generation unit 330 may generate a first graphic object A and a second graphic object B adjacent to each other, and the second graphic object B is initially in an activated state. At this time, the user may perform a second motion within a time less than a second predetermined time in a direction towards the first graphic object with an amplitude greater than a predetermined amplitude, and perform a third motion immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion. Here, the second predetermined time and the third predetermined time may be 0.5 seconds, the second motion may be left turning of a head with an amplitude of approximately 30° in a direction towards the first graphic object A, and the third motion may be right turning of the head with an amplitude of approximately 30° in a direction opposed to the direction of the second motion. The control unit 100 may determine that the user’s gesture is the object switching gesture according to information corresponding to the motion of the user sensed by the gesture sensing unit 700, and thus may deactivate the second graphic object B and may activate the first graphic object A. 
In other words, when the user quickly turns his/her head from a currently activated graphic object in a direction towards the graphic object he/she desires to activate and then quickly returns the head to its initial position, the control unit 100 may deactivate the current graphic object and activate the graphic object the user desires to activate. Alternatively, if the control unit 100 determines that the gesture of the user is a gesture other than the object switching gesture according to the gesture information sensed by the gesture sensing unit 700, the control unit 100 may maintain the activated state of the currently activated graphic object.
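The object switching gesture can be sketched as a check over two consecutive motions, using the 0.5-second and 30° example values above; the dictionary representation of a motion, the amplitude threshold, and the tolerance for "the same amplitude" are illustrative assumptions.

```python
SECOND_PREDETERMINED_TIME = 0.5  # s, limit for the outgoing motion
THIRD_PREDETERMINED_TIME = 0.5   # s, limit for the return motion
PREDETERMINED_AMPLITUDE = 15.0   # degrees, hypothetical threshold
AMPLITUDE_TOLERANCE = 5.0        # degrees, hypothetical "same amplitude" band

def is_object_switching_gesture(m2, m3):
    """m2 (towards the target object) and m3 (immediately after, in the
    opposed direction) are dicts with 'duration' (s), 'amplitude' (deg),
    and 'direction' (+1 or -1)."""
    return (m2["duration"] < SECOND_PREDETERMINED_TIME
            and m2["amplitude"] > PREDETERMINED_AMPLITUDE
            and m3["duration"] < THIRD_PREDETERMINED_TIME
            and m3["direction"] == -m2["direction"]
            and abs(m3["amplitude"] - m2["amplitude"]) <= AMPLITUDE_TOLERANCE)

out = {"duration": 0.3, "amplitude": 30.0, "direction": -1}   # quick left turn
back = {"duration": 0.3, "amplitude": 30.0, "direction": +1}  # quick return
print(is_object_switching_gesture(out, back))  # True

slow_back = {"duration": 0.8, "amplitude": 30.0, "direction": +1}
print(is_object_switching_gesture(out, slow_back))  # False: return too slow
```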
[0054]
A method for controlling an electronic device according to an exemplary embodiment will be described below. Such a control method may be performed by the electronic device as described in the above exemplary embodiments, and therefore, repetitive descriptions of the same technical features are omitted here.
[0055]
FIG. 4 is a flowchart illustrating a method for controlling an electronic device according to an exemplary embodiment. As shown in FIG. 4, at operation S410, an active area may be generated for a graphic object in an operating environment of the electronic device. Here, the operating environment may be accessed by a user so that the user may operate the electronic device. For example, the operating environment may be a visualized operating environment, and in such an example, the electronic device may be implemented as an augmented reality (AR) device. As such, the control method may further include steps of generating the operating environment for the user to operate the electronic device, generating a tag (e.g., a cursor) in the operating environment, and generating a graphic object in the operating environment.
[0056]
In particular, the active area may overlap the graphic object corresponding to the active area. For example, the active area corresponding to the graphic object may be generated as one that includes a main active area portion and a peripheral active area portion. In such case, the main active area portion may have a shape the same as that of the corresponding graphic object and may overlap the corresponding graphic object. The peripheral active area portion may be positioned at the periphery of the main active area portion, and therefore does not overlap the corresponding graphic object.
[0057]
Then, at operation S430, the graphic object corresponding to the active area may be activated when the tag (e.g., a cursor) in the operating environment is positioned in the active area and outside the graphic object corresponding to the active area. In other words, when the tag is positioned in the peripheral active area portion of the active area, the graphic object may still be activated even if the tag is not positioned at the graphic object at this time.
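For illustration only, operations S410 and S430 may be sketched for rectangular objects as follows. The uniform 10-unit margin and the `Rect` helper are illustrative assumptions; the embodiment does not prescribe any particular shape or margin size for the peripheral active area portion.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.w
                and self.y <= py <= self.y + self.h)

def make_active_area(obj: Rect, margin: float) -> Rect:
    # Main active area portion: same shape as, and overlapping, the object.
    # Peripheral portion: the surrounding margin (a uniform expansion here).
    return Rect(obj.x - margin, obj.y - margin,
                obj.w + 2 * margin, obj.h + 2 * margin)

def should_activate(tag_x: float, tag_y: float, active_area: Rect) -> bool:
    # Operation S430: the object is activated whenever the tag is inside the
    # active area, even if the tag lies outside the graphic object itself.
    return active_area.contains(tag_x, tag_y)

button = Rect(100, 100, 40, 20)             # the graphic object
area = make_active_area(button, margin=10)  # its active area (S410)
# The tag sits just outside the button but inside the peripheral portion:
assert not button.contains(145, 110)
assert should_activate(145, 110, area)
```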
[0058]
In another exemplary embodiment, the active area including the peripheral active area portion may be generated for the graphic object in a specific situation. For example, whether the graphic object is in a to-be-activated state may be determined according to an operating logic of the operating environment, and the peripheral active area portion may be generated for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
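The conditional generation described above may be sketched, for illustration only, as follows; the tuple representation and the default 10-unit margin are assumptions of this sketch, not limitations of the embodiment.

```python
def expand(rect, margin):
    # Uniformly expand a (x, y, w, h) rectangle by the given margin.
    x, y, w, h = rect
    return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)

def generate_active_area(obj_rect, to_be_activated, margin=10.0):
    # The main portion always matches the object's own rectangle; the
    # peripheral portion is added only when the operating logic marks the
    # object as being in the to-be-activated state.
    return expand(obj_rect, margin) if to_be_activated else obj_rect

# An object not in the to-be-activated state keeps only the main portion:
assert generate_active_area((0, 0, 40, 20), False) == (0, 0, 40, 20)
# A to-be-activated object additionally receives the peripheral portion:
assert generate_active_area((0, 0, 40, 20), True) == (-10.0, -10.0, 60.0, 40.0)
```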
[0059]
In the exemplary embodiment described above, the position of the tag in the operating environment may be changed according to a gesture of the user operating the electronic device. As such, the control method according to the exemplary embodiment may further include sensing the gesture of the user operating the electronic device to obtain gesture information on the sensed gesture, determining the gesture of the user according to the gesture information, and determining the position of the tag in the operating environment according to the gesture of the user.
[0060]
In addition, activation of the graphic object may be controlled according to the gesture of the user. For example, in the case that the graphic object is activated, when the gesture of the user is determined to be a deactivation gesture according to the gesture information, the graphic object may be deactivated. Here, the deactivation gesture may include a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment. In one example, the first predetermined time may be 1 second.
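For illustration only, the deactivation gesture test described above may be sketched as follows, assuming the motion has already been reduced to a duration and to before/after containment of the tag in the graphic object's area; the 1-second value is the example given in the text.

```python
FIRST_PREDETERMINED_TIME_S = 1.0  # example value from the text

def is_deactivation_gesture(motion_duration_s: float,
                            tag_was_inside: bool,
                            tag_now_inside: bool) -> bool:
    # First motion: performed within the first predetermined time, with an
    # amplitude such that the tag moves from inside the graphic object's
    # area to outside it.
    return (motion_duration_s < FIRST_PREDETERMINED_TIME_S
            and tag_was_inside and not tag_now_inside)

assert is_deactivation_gesture(0.4, True, False)       # quick exit: deactivate
assert not is_deactivation_gesture(1.5, True, False)   # too slow
assert not is_deactivation_gesture(0.4, True, True)    # tag still inside
```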
[0061]
In addition, activation of different graphic objects may be switched according to the gesture of the user. For example, in the case that a first graphic object is activated, when the gesture of the user is determined to be an object switching gesture according to the gesture information sensed by the gesture sensing unit, the first graphic object may be deactivated, and a second graphic object may be activated. Here, the object switching gesture may include a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude, and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
[0062]
An electronic device and a control method therefor are described above with reference to FIGS. 1-4, and the method may be implemented in hardware, software, or a combination of hardware and software. FIG. 5 is a block diagram of an electronic equipment according to an exemplary embodiment. According to the current exemplary embodiment, the electronic equipment 1000 may include at least one processor 1010 and a memory 1030. The processor 1010 may execute at least one computer readable instruction (i.e., an element implemented in the form of software as described above) stored or encoded in a computer readable storage medium (i.e., the memory 1030).
[0063]
In one embodiment, computer executable instructions are stored in the memory 1030 which when executed cause the at least one processor 1010 to implement or perform the method described above with reference to FIG. 4.
[0064]
It should be understood that the computer executable instructions stored in the memory 1030 when executed cause the at least one processor 1010 to perform various operations and functions described above in various embodiments with reference to FIGS. 1-4.
[0065]
According to one embodiment, a program product such as a non-transitory machine readable medium is provided. The non-transitory machine readable medium may have instructions (i.e., elements implemented in the form of software as described above) which when executed by a machine cause the machine to execute various operations and functions described above in various embodiments of the present application with reference to FIGS. 1-4.
[0066]
According to one embodiment, a computer program is provided, including computer executable instructions which when executed cause at least one processor to execute various operations and functions as described above in various embodiments of the present application with reference to FIGS. 1-4.
[0067]
It should be understood that although the present Description is set out in terms of various embodiments, this does not mean that each embodiment contains only one independent technical solution; the Description is presented in this manner merely for clarity. A person skilled in the art should consider the Description as a whole, and the technical solutions in the various embodiments may also be combined as appropriate to form other implementations that may be understood by a person skilled in the art.
[0068]
The foregoing describes merely exemplary implementations of the present invention and is not intended to limit the scope of the present invention. All equivalent variations, modifications and combinations made by any person skilled in the art without departing from the concepts and principles of the present invention should fall within the protection scope of the present invention.

Claims

[Claim 1]
An electronic device, comprising a control unit (100) and an operating environment generation unit (300) configured to generate an operating environment for a user to operate the electronic device, wherein the operating environment generation unit comprises: a tag generation unit (310) configured to generate a tag in the operating environment; a graphic object generation unit (330) configured to generate a graphic object in the operating environment; an active area generation unit (350) configured to generate an active area in the operating environment corresponding to the graphic object, wherein the active area overlaps the graphic object corresponding to the active area, wherein the control unit is configured to activate the graphic object corresponding to the active area when the tag is positioned in the active area and outside the graphic object corresponding to the active area.
[Claim 2]
The electronic device according to claim 1, further comprising: a display unit (500) configured to display the operating environment generated by the operating environment generation unit and the graphic object generated by the graphic object generation unit.
[Claim 3]
The electronic device according to claim 1, wherein the active area generation unit is configured to generate the active area corresponding to the graphic object as one that comprises a main active area portion and a peripheral active area portion, wherein the main active area portion has a shape the same as that of the corresponding graphic object and overlaps the corresponding graphic object, and the peripheral active area portion is located at the periphery of the main active area portion.
[Claim 4]
The electronic device according to claim 3, wherein the control unit is configured to determine whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and control the active area generation unit to generate the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
[Claim 5]
The electronic device according to claim 1, further comprising: a gesture sensing unit (700) configured to sense a gesture of the user operating the electronic device and send gesture information on the sensed gesture of the user to the control unit, wherein the control unit is configured to determine the gesture of the user according to the gesture information sensed by the gesture sensing unit and determine a position of the tag in the operating environment according to the gesture of the user.
[Claim 6]
The electronic device according to claim 5, wherein in the case that the tag is in the graphic object and thus the graphic object is activated, when the control unit determines that the gesture of the user is a deactivation gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the graphic object, wherein the deactivation gesture comprises a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the tag is changed from being positioned in the graphic object to being positioned outside the graphic object.
[Claim 7]
The electronic device according to claim 5, wherein the graphic object generation unit is configured to generate a first graphic object and a second graphic object adjacent to each other in the operating environment, wherein in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is an object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the first graphic object and activates the second graphic object, wherein the object switching gesture comprises a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
[Claim 8]
The electronic device according to claim 7, wherein in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is a gesture other than the object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit maintains the activation of the first graphic object.
[Claim 9]
The electronic device according to claim 1, wherein the electronic device is augmented reality equipment.
[Claim 10]
The electronic device according to claim 1, wherein the electronic device comprises: a body (10) comprising the operating environment generation unit and the control unit; and a head-mounted component (30) which is configured to house the body and is capable of being worn on a head of the user operating the electronic device.
[Claim 11]
A method for controlling an electronic device, comprising: generating, in an operating environment for a user to operate the electronic device, an active area corresponding to a graphic object in the operating environment, wherein the active area overlaps the graphic object corresponding to the active area; activating the graphic object corresponding to the active area when a tag is positioned in the active area and outside the graphic object corresponding to the active area.
[Claim 12]
The method according to claim 11, wherein the step of generating the active area comprises generating the active area corresponding to the graphic object as one that comprises a main active area portion and a peripheral active area portion, wherein the main active area portion has a shape the same as that of the corresponding graphic object and overlaps the corresponding graphic object, and the peripheral active area portion is located at the periphery of the main active area portion.
[Claim 13]
The method according to claim 12, wherein the step of generating the active area comprises determining whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and generating the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
[Claim 14]
The method according to claim 11, further comprising: sensing a gesture of the user operating the electronic device to obtain gesture information on the sensed gesture; determining the gesture of the user according to the gesture information and determining a position of the tag in the operating environment according to the gesture of the user.
[Claim 15]
The method according to claim 14, further comprising: in the case that the tag is in the graphic object and thus the graphic object is activated, when the gesture of the user is determined to be a deactivation gesture according to the sensed gesture information, deactivating the graphic object, wherein the deactivation gesture comprises a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the tag is changed from being positioned in the graphic object to being positioned outside the graphic object.
[Claim 16]
The method according to claim 14, wherein the step of generating the graphic object comprises generating a first graphic object and a second graphic object adjacent to each other in the operating environment; the method further comprises: in the case that the first graphic object is activated, when the gesture of the user is determined to be an object switching gesture according to the gesture information sensed by the gesture sensing unit, deactivating the first graphic object and activating the second graphic object, wherein the object switching gesture comprises a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
[Claim 17]
The method according to claim 11, further comprising: generating the operating environment for the user to operate the electronic device; generating the tag in the operating environment; generating the graphic object in the operating environment.
[Claim 18]
An electronic equipment, comprising: at least one processor; and a memory connected to the at least one processor, the memory having instructions stored therein which when executed by the at least one processor cause the electronic equipment to perform the method according to any of claims 11-17.
[Claim 19]
A non-transitory machine readable medium, having computer executable instructions stored thereon which when executed cause at least one processor to perform the method according to any of claims 11-17.
[Claim 20]
A computer program, comprising computer executable instructions which when executed cause at least one processor to perform the method according to any of claims 11-17.

Drawings

[Fig. 1]
[Fig. 2]
[Fig. 3]
[Fig. 4]
[Fig. 5]