
WO2020111346 - INTERACTION METHOD, SYSTEM, AND DEVICE FOR INFORMATION


Description

Title of Invention : INTERACTION METHOD, SYSTEM, AND DEVICE FOR INFORMATION

Technical Field

[1]
The present application relates to computer network technology, and in particular, to an interaction method, system, and device for information.

Background Art

[2]
With the development of computer technology, information interaction and/or information synchronization may be performed between devices in a plurality of manners, and the devices may be wireless terminals or the like. At present, methods for information interaction and/or information synchronization mainly include: Bluetooth transmission between devices, transmission between devices through near field communication (NFC), transmission between devices through a local area network formed by the devices, and transmission between devices by means of WiFi or the Mirror Link technology. The Mirror Link technology combines a plurality of existing technologies to meet various possible usage scenarios inside an automobile, including display of a screen and input of user instructions by means of virtual network computing, searching for a corresponding device and completing a correct pre-set configuration through universal plug and play, and information interaction such as audio streaming by means of Bluetooth and real-time transport protocols. The Mirror Link technology also supports various technologies, such as the Bluetooth HFP and A2DP protocols, that are currently and commonly used in automobiles. That is to say, a device may be connected to an in-vehicle system through a universal serial bus (USB), Bluetooth, or WiFi. The device may transmit an operation interface to an in-vehicle screen to form a simple and clear menu interface on the in-vehicle screen, and the device is operated by physical buttons on the in-vehicle system or by voice commands.
[3]
In order to obtain desired information from other devices, a device first needs to identify information from the other devices. The information may be identified by a camera recognition technology, a text and number recognition technology, or a photosensitive recognition technology. Information of text and images having a particular attribute, for example, a telephone number, a tracking number, a zip code, a website address, or a QR code, may be recognized through the text and number recognition technology. The photosensitive recognition technology requires a sensor implanted under the glass of a screen of a device. This type of sensor may capture changes in light and light waves. Based on the changes in light and light waves, a blocking relationship between screens of devices, or between a terminal device and an object, may be recognized, and information such as the position of the shielded area and the size of a screen pixel of the screen of the device may be learned, so that sharing and processing of information in the shielded area may be performed through the photosensitive recognition technology.
[4]
Compared with the camera recognition technology, the photosensitive recognition technology obtains information of a device by blocking the screen of the device. Its operation is simple and does not need a user to tap into a photographing program step by step to record information and images. The image shot by a camera is regular, whereas the screen area of a device can be blocked by an object of any shape, so that the obtained shielded area may be of an irregular shape. This type of recognition is freer and more interesting. What is obtained through the photosensitive recognition technology is direct information data, without imaging or light interference; what is obtained through the camera recognition technology is data in a physical image format, affected by light, interference of light, and physical space distance.

Disclosure of Invention

Technical Problem

[5]
However, at present, the photosensitive recognition technology has not been applied to information interaction between devices. How to specifically implement information interaction between devices based on the photosensitive recognition technology is a technical problem that urgently needs to be solved.

Solution to Problem

[6]
In view of this, an embodiment of the present application provides an interaction method for information. Information interaction between devices based on a photosensitive recognition technology may be implemented through this method.
[7]
An embodiment of the present application further provides an interaction system for information. Information interaction between devices based on a photosensitive recognition technology may be implemented through this system.
[8]
An embodiment of the present application further provides an interaction device for information. Information interaction between devices based on a photosensitive recognition technology may be implemented through this device.
[9]
According to the foregoing objectives, the present application is implemented as follows.
[10]
An interaction method for information, comprising:
[11]
recognizing, by a first device with a light-sensing screen, a non-transparent shielding object or a second device; and
[12]
transmitting, by the first device with a light-sensing screen, shielded screen information to the second device for processing.
[13]
An interaction system for information, wherein the system comprises a first device with a light-sensing screen and a second device, wherein
[14]
the first device with a light-sensing screen is configured to: recognize a non-transparent shielding object or the second device, and transmit shielded screen information to the second device; and
[15]
the second device is configured to process the screen information.
[16]
An interaction device for information, comprising: a recognition module and a transmission module, wherein
[17]
the recognition module is configured to recognize a non-transparent shielding object or a second device; and
[18]
the transmission module is configured to transmit shielded screen information to the second device.
[19]
It can be learned from the foregoing solutions that, after recognizing a non-transparent shielding object or a second device, a first device with a light-sensing screen provided in the embodiments of the present application transmits shielded screen information to the second device for processing. One example is that the second device sends an operation instruction to the first device with a light-sensing screen according to a usage scenario of a user, to change the screen information stored and/or displayed on the first device with a light-sensing screen. In this way, interaction for information according to user requirements based on a photosensitive recognition technology between different devices may be implemented in the present application.

Advantageous Effects of Invention

[20]
An embodiment of the present application provides an interaction method for information. Information interaction between devices based on a photosensitive recognition technology may be implemented.

Brief Description of Drawings

[21]
FIG. 1 is a flowchart of an interaction method for information according to an embodiment of the present application;
[22]
FIG. 2 is a schematic structural diagram of an interaction system for information according to an embodiment of the present application;
[23]
FIG. 3 is a schematic structural diagram of an interaction device 1 for information according to an embodiment of the present application;
[24]
FIG. 4 is a schematic structural diagram of an interaction device 2 for information according to an embodiment of the present application;
[25]
FIG. 5 is a schematic diagram of a method for processing performed by a connection module in a first device having a light-sensing screen according to an embodiment of the present application;
[26]
FIG. 6 is a flowchart of a method for processing performed by a recognition module in a first device having a light-sensing screen according to an embodiment of the present application;
[27]
FIG. 7 is a flowchart of a method for processing performed by a transmission module according to an embodiment of the present application;
[28]
FIG. 8 is a flowchart of a method for processing performed by a secondary display module according to an embodiment of the present application;
[29]
FIG. 9 is a flowchart of a method for processing performed by a processing module according to an embodiment of the present application;
[30]
FIG. 10 is a schematic diagram of an example 1 in which a first device having a light-sensing screen is blocked according to an embodiment of the present application;
[31]
FIG. 11 is a schematic diagram of an example 2 in which a first device having a light-sensing screen is blocked according to an embodiment of the present application;
[32]
FIG. 12 is a schematic diagram of an example 3 in which a first device having a light-sensing screen is blocked according to an embodiment of the present application;
[33]
FIG. 13 is a schematic diagram of an example in which a first device having a light-sensing screen is blocked by a paper card and screen information is displayed on an intelligent device according to an embodiment of the present application;
[34]
FIG. 14 is a schematic diagram of an example in which a first device having a light-sensing screen is blocked by a pen and screen information is displayed on a smart watch according to an embodiment of the present application;
[35]
FIG. 15 is a schematic diagram of an example in which a first device having a light-sensing screen is blocked by a hand and information on the screen of the device is displayed on an intelligent device according to an embodiment of the present application;
[36]
FIG. 16 is a schematic diagram of an entrance for operation of a smartphone according to an embodiment of the present application;
[37]
FIG. 17 is a schematic diagram of a process of an example in which text information is recognized according to an embodiment of the present application;
[38]
FIG. 18 is a schematic diagram of a process of an example in which a picture is recognized and intercepted according to an embodiment of the present application;
[39]
FIG. 19 is a schematic diagram of a process of an example in which address information is recognized according to an embodiment of the present application;
[40]
FIG. 20 is a schematic diagram of a process of an example in which a QR code is recognized according to an embodiment of the present application;
[41]
FIG. 21 is a schematic diagram of a process of an example in which website address information is recognized according to an embodiment of the present application;
[42]
FIG. 22 is a schematic diagram of a process of an example in which telephone number information is recognized according to an embodiment of the present application;
[43]
FIG. 23 is a schematic diagram of a process of an example in which a cursor is recognized and text is edited according to an embodiment of the present application;
[44]
FIG. 24 is a schematic diagram of a process of an example in which editing is performed in a highlighting area in a text according to an embodiment of the present application;
[45]
FIG. 25 is a schematic diagram of a process of an example in which information is copied and pasted according to an embodiment of the present application;
[46]
FIG. 26 is a schematic diagram of a process of an example in which information in an area is deleted according to an embodiment of the present application;
[47]
FIG. 27 is a schematic diagram of a process of an example of projecting a screen securely according to an embodiment of the present application;
[48]
FIG. 28 is a schematic diagram of a process of an example in which private content is encrypted according to an embodiment of the present application; and
[49]
FIG. 29 is a schematic diagram of a process of an example of securely logging in to a personal account according to an embodiment of the present application.

Mode for the Invention

[50]
In order to make the objectives, technical solutions and advantages of the present application more comprehensible, the present application will be further described in detail below with reference to the accompanying drawings and embodiments.
[51]
As can be learned from the background, although a light-sensing screen is provided, interactive operations based on the photographic properties of the light-sensing screen have not been developed. In the background, portable interaction for quickly obtaining information by means of blocking with an opaque object or overlapping of screens of devices is not implemented. In the background, secondary processing of information after the information has been obtained by one screen from another screen, such as information extraction and display of reconstructed information, is not implemented. Transmission of naturally regionalized screen information is not implemented in the manner of device synchronization in the background, whereas the regionalized information on the screen may be recognized by embodiments of the present application, and a user may naturally select a screen area of a device in which synchronization of information is needed.
[52]
There is no very natural and portable interaction form in the manner of transmission between devices in the background, whereas the embodiments of the present application provide a particularly natural interaction manner in which a user uses a device to touch a screen of another device to establish a pairing or connection, thereby obtaining information of the other device. This manner of obtaining information content greatly meets the visual and psychological expectations of the user.
[53]
Information sharing between devices in the background is usually whole-screen information sharing, and a user cannot naturally select a screen area for information sharing. The interaction manner is limited to operating through a gesture on one device to send an instruction for sharing information or a picture; a technical solution in which information interaction between devices is triggered by means of natural contact with or blocking of a device screen by another device or an object is not implemented.
[54]
In the background, information on a photo cannot be recognized, filtered, and extracted in the manner of memorizing information of a screen of a device through photographing. In the process of photographing, a camera application needs to be opened, shooting needs to be performed by clicking, a picture needs to be stored, and the key information required by a user needs to be searched for. The process is cumbersome.
[55]
A problem in information acquisition or synchronization in the background is information redundancy. For information synchronization between devices, information is not distinguished based on the information requirements of a user and application scenarios, and information is synchronously displayed on the device regardless of the scenario. The user cannot obtain information in a particular part of the screen in a customized manner. The information redundancy increases the cost of information management on each device, and also brings insecurity to information management.
[56]
According to embodiments of the present application, a photosensitive recognition technology of a device screen is used. After a provided first device with a light-sensing screen recognizes a non-transparent shielding object or a second device, the first device transmits shielded screen information to the second device for processing. For example, the second device sends an operation instruction to the first device with a light-sensing screen according to a usage scenario of a user, so that the screen information stored and/or displayed on the first device with a light-sensing screen is modified. In this way, information interaction may be achieved between different devices according to user demands based on a light-sensing recognition technology.
[57]
Specifically, the first device with a light-sensing screen calculates the position and size of a shielded area according to the sensed brightness of its screen, which is shielded by the second device or a non-transparent shielding object. In response to a confirmative operation action of the user, the first device with a light-sensing screen quickly shares the screen information with the second device. After obtaining the screen information of the first device with a light-sensing screen, the second device may perform different processes on the screen information according to user demands.
[58]
In this way, the second device may quickly intercept and process the shielded screen information of the first device with a light-sensing screen, and reconstruct the obtained screen information and display it on the second device as a secondary display. When it is inconvenient for the user to carry or open the first device having a light-sensing screen, the user only needs to view the second device. When the first device having a light-sensing screen is large and inconvenient to move, and the second device is small and convenient to move, the shielded screen information is quickly obtained for the purposes of memos, mobile storage, and portable display. This is an operation and interaction experience that greatly meets the psychological expectations of the user.
[59]
According to a text attribute of the obtained screen information of the first device with a light-sensing screen, the user is provided with different operation prompts, such as highlighting the text on the first device with a light-sensing screen and quickly inserting text. This is convenient for the user to quickly input text information on a device with a smaller screen to the first device with a light-sensing screen. The first device with a light-sensing screen is equipped with a larger screen, which provides convenience in editing screen information.
[60]
In the embodiments of the present application, the shielded screen information on the first device with a light-sensing screen may also be encrypted to hide the shielded screen information. Alternatively, the shielded screen information on the first device with a light-sensing screen is information of an area for inputting a password. After a password is input on the second device, the password is directly input to the shielded screen area of the first device with a light-sensing screen for verification processing.
[61]
In the embodiments of the present application, the first device with a light-sensing screen may include a mobile phone, a tablet computer, an electronic screen, a television, a computer, and the like, and is not limited to the foregoing devices. The second device may include a mobile terminal device such as a mobile phone or a tablet computer, and is not limited to the foregoing devices.
[62]
FIG. 1 shows an interaction method for information according to an embodiment of the present application. The interaction method for information includes the following specific steps.
[63]
In step 101: a first device with a light-sensing screen recognizes a non-transparent shielding object or a second device.
[64]
In step 102: The first device with a light-sensing screen transmits shielded screen information to the second device for processing.
[65]
In this method, before step 101, the first device with a light-sensing screen establishes a communication connection with the second device.
[66]
In this method, the screen information is data, text, or a picture.
[67]
In this method, processing performed by the second device further includes: displaying and/or storing the screen information on the second device.
[68]
When the screen information is a picture, processing performed by the second device includes: intercepting a covered picture, and then performing splicing processing.
[69]
In this method, processing performed by the second device includes: performing addition, insertion, deletion, or an operation according to a pre-set operation instruction on the screen information, and then sending the processed screen information to the first device with a light-sensing screen for storage and/or for display in the shielded area.
[70]
In this method, the screen information is encrypted, and is not displayed, or is displayed in a fuzzy manner, in the shielded area.
[71]
In this method, the screen information is a secure login interface, and processing performed by the second device includes:
[72]
filling a secure account in the secure login interface, and then sending the secure account to the first device with a light-sensing screen for verification and display in the shielded area.
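The two method steps can be sketched in code. This is a minimal illustration only, with hypothetical class and method names and a 1-D list of brightness readings standing in for the light-sensing screen; the application does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SecondDevice:
    """Hypothetical second device: processes (here, stores) received info."""
    stored: list = field(default_factory=list)

    def process(self, info):
        self.stored.append(info)   # display and/or store the screen information
        return info

@dataclass
class FirstDevice:
    """Hypothetical first device with a light-sensing screen."""
    screen_text: str

    def recognize(self, brightness, threshold=0.2):
        # Step 101: the shielded area is the run of darkened screen cells.
        dark = [i for i, b in enumerate(brightness) if b < threshold]
        return (dark[0], dark[-1]) if dark else None

    def transmit(self, span):
        # Step 102: transmit only the shielded screen information for processing.
        start, end = span
        return self.screen_text[start:end + 1]

first = FirstDevice("CALL 555-0100 NOW")
second = SecondDevice()
span = first.recognize([1.0] * 5 + [0.1] * 8 + [1.0] * 4)  # cells 5..12 shielded
info = first.transmit(span)
print(second.process(info))  # → 555-0100
```

Note that only the shielded region is transmitted, which reflects the regionalized selection the embodiments emphasize over whole-screen sharing.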
[73]
FIG. 2 is a schematic structural diagram of an interaction system for information according to an embodiment of the present application. The interaction system for information includes a first device with a light-sensing screen and a second device. Wherein,
[74]
the first device with a light-sensing screen is configured to: recognize a non-transparent shielding object or the second device, and transmit shielded screen information to the second device. And
[75]
the second device is configured to process the screen information.
[76]
In this system, the first device with a light-sensing screen is further configured to establish a connection with the second device.
[77]
In this system, the second device is further configured to store and/or display the screen information.
[78]
In this system, the second device is further configured to: perform addition, insertion, deletion, or an operation according to a pre-set operation instruction on the screen information, and then send the processed screen information to the first device with a light-sensing screen for storage and/or for display in a shielded area.
[79]
The first device with a light-sensing screen is further configured to: after receiving the processed screen information, store and/or display the processed screen information in the shielded area.
[80]
In this system, the screen information is data, text, or a picture.
[81]
When the screen information is a picture, the second device is further configured to: intercept a covered picture, and then perform splicing processing.
[82]
In this system, the second device is further configured to: perform encryption processing on the screen information, and send the processed screen information to the first device with a light-sensing screen; and
[83]
the first device with a light-sensing screen is further configured to: not display the screen information or display the screen information in a fuzzy manner in the shielded area.
[84]
In this system, the screen information is a secure login interface.
[85]
The second device is further configured to: fill a secure account in the secure login interface, and then send the secure account to the first device with a light-sensing screen for verification and display in the shielded area. And
[86]
the first device with a light-sensing screen is further configured to: verify the received secure account and display the secure account in the shielded area.
[87]
FIG. 3 is a schematic structural diagram of an interaction device 1 for information according to an embodiment of the present application. The device 1 is a first device having a light-sensing screen, including a recognition module and a transmission module. Wherein,
[88]
the recognition module is configured to recognize a non-transparent shielding object or a second device. And
[89]
the transmission module is configured to transmit shielded screen information to the second device.
[90]
In this device, a connection module is further included to establish a connection to the second device.
[91]
In this device, the transmission module is further configured to: receive processed screen information on which addition, insertion, deletion, or an operation according to a pre-set operation instruction has been performed by the second device, and then store the processed information and/or display the processed information by a display module in a shielded area.
[92]
In this device, the transmission module is further configured to receive encrypted screen information sent by the second device. And a display module does not display the screen information, or displays the screen information in a fuzzy manner, in the shielded area.
[93]
In this device, the transmission module is further configured to receive a secure account sent by the second device and display the secure account in a shielded area for verification, and the screen information of the shielded area is a secure login interface.
[94]
FIG. 4 is a schematic structural diagram of an interaction device 2 for information according to an embodiment of the present application. The device 2 is a second device, including a secondary display module and a processing module. Wherein,
[95]
the secondary display module is configured to display shielded screen information sent by a first device with a light-sensing screen. And
[96]
the processing module is configured to process the shielded screen information sent by the first device with a light-sensing screen.
[97]
In this device, the processing module is further configured to store and/or display the screen information.
[98]
In this device, the screen information is a picture, text, or data. And
[99]
when the screen information is a picture, the device further includes a scanning module, which is configured to: perform mobile scanning of a screen shielding object above the first device with a light-sensing screen, and perform splicing processing on the scanned information of the shielding object, to obtain the screen information.
[100]
In this device, a connection module is further included, which is configured to establish a connection with the first device with a light-sensing screen.
[101]
In this device, the processing module is further configured to: perform addition, insertion, deletion, or an operation according to a pre-set operation instruction on the screen information, and then send the processed screen information to the first device with a light-sensing screen.
[102]
In this device, the processing module is further configured to: perform encryption processing on the screen information, and send the processed screen information to the first device with a light-sensing screen.
[103]
In this device, the screen information is a secure login interface. And
[104]
the processing module is further configured to: fill a secure account in the secure login interface, and then send the secure account to the first device with a light-sensing screen.
[105]
The essence of this embodiment of the present application is that the user may extract the shielded screen information of the first device with a light-sensing screen by means of shielding, and thereby display the shielded screen information on the second device. In the Mirror Link technology in the background, an operation interface is transplanted to another device, and regionalized filtration and extraction are not performed on content on the interface. In this embodiment of the present application, more differentiated scenario operations may be performed. Further, the shielding device may also perform operation control on the blocked device.
[106]
FIG. 5 is a schematic diagram of a method for processing performed by a connection module in a first device with a light-sensing screen according to an embodiment of the present application. The connection module is configured to detect whether the first device with a light-sensing screen and a second device are in a connected state. The connection is mainly used to determine whether the second device is secure; only when the second device and the first device with a light-sensing screen are under secure connection authorization may subsequent information interaction be performed. The specific steps are as follows.
[107]
In step 501: a distance is detected, and whether the distance between the first device with a light-sensing screen and the second device is within a connection distance is determined.
[108]
In step 502: whether a connection is established is determined. If not, step 503 will be performed; if yes, the process ends.
[109]
In step 503: the user is prompted to establish a connection with the second device.
[110]
In step 504: verification is performed to determine whether the connection is successfully established. If yes, step 505 will be performed; if not, step 503 will be performed again.
[111]
In step 505: the user is prompted that the connection is successfully established.
[112]
In step 506: connection information is stored.
[113]
In this connection module, the connection information is stored, so that verification does not need to be performed on the next connection by the user, which improves user experience.
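The connection flow of steps 501-506 might be sketched as follows. The `Peer` stub, its method names, and the retry count are hypothetical (the application specifies only the flowchart, not an API); the stored-connection set models step 506.

```python
class Peer:
    """Hypothetical device stub for illustrating the connection flow."""
    def __init__(self, device_id, pos, trusted=True):
        self.device_id = device_id
        self.pos = pos            # position used for the distance check
        self.trusted = trusted    # whether verification will succeed

    def distance_to(self, other):
        return abs(self.pos - other.pos)

    def verify(self):
        return self.trusted

def connect(first, second, max_distance=0.3, stored_connections=None):
    """Sketch of steps 501-506 of FIG. 5."""
    stored = stored_connections if stored_connections is not None else set()
    # Step 501: detect whether the devices are within the connection distance.
    if first.distance_to(second) > max_distance:
        return "out of range"
    # Step 502: a stored connection means verification can be skipped.
    if second.device_id in stored:
        return "already connected"
    # Steps 503-504: prompt for a connection and verify; retry on failure.
    for _ in range(3):
        if second.verify():
            # Steps 505-506: report success and store the connection info.
            stored.add(second.device_id)
            return "connected"
    return "failed"

tv = Peer("tv", 0.0)         # first device with a light-sensing screen
phone = Peer("phone", 0.1)   # second device
store = set()
print(connect(tv, phone, stored_connections=store))  # → connected
print(connect(tv, phone, stored_connections=store))  # → already connected
```

The second call returns early because the stored connection information removes the need for re-verification, as noted above.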
[114]
FIG. 6 is a flowchart of a method for processing of a recognition module in a first device with a light-sensing screen according to an embodiment of the present application. The first device with a light-sensing screen recognizes the relative position or area size of a second device or a non-transparent shielding object through its screen. The second device is ready to obtain the screen information of the first device with a light-sensing screen. A shielded screen area of the first device with a light-sensing screen is projected onto the second device. A user may move the second device or the non-transparent shielding object left and right to achieve precise positioning. The specific process includes the following steps.
[115]
In step 601: A photosensitive element of a recognition module performs monitoring.
[116]
In step 602: The recognition module performs brightness detection of a screen area to determine whether there is a dark area. If yes, step 603 will be performed; if not, step 601 will be performed.
[117]
In step 603: information of the photosensitive element is obtained.
[118]
In step 604: position information of a shielded screen is obtained.
[119]
Specifically, because the first device with a light-sensing screen is equipped with a screen having a photosensitive attribute, the first device is configured to recognize the position at which the second device or the non-transparent shielding object shields its screen. When the screen brightness monitored by the photosensitive element built into the recognition module is inconsistent, the size and position of the corresponding screen area to be intercepted may be determined. Locating the position of the shielded screen may be performed using existing technology.
[120]
Herein, locating the photosensitive position may be performed using existing technology, described as follows. An optical sensor transistor in a photosensitive pixel of a photosensitive device is formed by an oxide semiconductor transistor for sensing light. The photosensitive device includes: an array of photosensitive pixels arranged in rows and columns; and a plurality of gate lines arranged along a row direction, supplying gate voltages to the photosensitive pixels, respectively. Each photosensitive pixel includes an optical sensor transistor for sensing light and a switch transistor for outputting a photosensitive signal from the optical sensor transistor. The gate of the optical sensor transistor of a photosensitive pixel arranged in any row is connected to the gate line of the row immediately before or after that row. Through the photosensitive pixel array and the plurality of gate lines, the position of the screen area shielded by the object may be located.
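The localization described above can be illustrated with a minimal sketch: given a grid of per-pixel brightness readings from the photosensitive pixel array, the shielded area is taken as the bounding box of the dark cells. The grid representation and the brightness threshold are assumptions for illustration, not the embodiment's circuitry.

```python
def locate_shielded_area(brightness, threshold=30):
    """Return (row0, col0, row1, col1), the bounding box of cells darker
    than threshold, or None when no dark area exists (step 602 fails).
    `brightness` is a 2D list of photosensitive-pixel readings (assumed)."""
    dark = [(r, c)
            for r, row in enumerate(brightness)
            for c, v in enumerate(row) if v < threshold]
    if not dark:
        return None  # screen is fully lit: keep monitoring (step 601)
    rows = [r for r, _ in dark]
    cols = [c for _, c in dark]
    # Steps 603-604: derive position information of the shielded screen.
    return (min(rows), min(cols), max(rows), max(cols))
```

The bounding box gives both the size and the position of the screen area to be intercepted, as required by the recognition module.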
[121]
A scanning module performs mobile scanning, in which the second device or shielding object above moves across the first device with a light-sensing screen below, and splices the scanned information of the shielding object to obtain screen information. Two steps are mainly included. In the first step, after the scanning mode is started, the photosensitive element in the first device with a light-sensing screen below continuously monitors the shielded area, and stores the current shielded area as a picture m once every k milliseconds. In the second step, after the scanning mode is ended, all the shielded areas collected in the first step are spliced.
[122]
For the first step, the current shielded area is stored every k milliseconds, where the value of k should be as small as possible. In theory, the smaller the value, the better the effect. However, considering the splicing processing in the second step and performance constraints, a value of 100 to 500 milliseconds is suggested.
[123]
The splicing processing in the second step may be performed by existing technology. For example, the Stitcher class of OpenCV implements splicing, and the class provides methods such as createDefault, estimateTransform, and composePanorama, so that splicing may be performed conveniently on multiple images to form a complete scenario picture.
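The two scanning steps can be sketched as follows. This is a deliberately simplified stand-in: frames are modeled as 2D lists, sampling every k milliseconds is approximated by index striding, and plain column-wise concatenation replaces the overlap-aware stitching that OpenCV's Stitcher would perform in practice.

```python
def capture_strips(frames, k_ms, frame_interval_ms=20):
    """Step 1 sketch: keep one shielded-area picture every k milliseconds.
    `frames` is the monitored stream, sampled every frame_interval_ms
    (both representations are assumptions for illustration)."""
    step = max(1, k_ms // frame_interval_ms)
    return frames[::step]

def splice(strips):
    """Step 2 sketch: splice the stored strips into one picture.
    Each strip is a 2D list with the same row count; real systems would
    align overlapping regions instead of concatenating columns."""
    return [sum((strip[r] for strip in strips), [])
            for r in range(len(strips[0]))]
```

With the suggested k of 100 to 500 milliseconds, the first function keeps one frame out of every 5 to 25 monitored frames before splicing.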
[124]
FIG. 7 is a flowchart of a method for processing performed by a transmission module according to an embodiment of the present application. The transmission module sends screen information of an area overlapped with a second device or a non-transparent shielding object to the second device, where the screen information is stored. The main function of the module is similar to that of a lens: the module transmits an image of the shielded area to the second device in real time, which facilitates controlling the second device more clearly and easily. The specific steps are as follows.
[125]
In step 701: a sensing area of a photosensitive element in the first device with a light-sensing screen is intercepted.
[126]
In step 702: after the sensing area is spliced, screen information is obtained and transmitted to a second device in a picture format.
[127]
In step 703: The second device prompts a user whether to intercept the picture. If yes, step 704 will be performed. If not, this process will be ended.
[128]
In step 704: The second device stores the information of the intercepted picture.
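The transmission flow in steps 701 to 704 can be sketched as one function. The `send` and `confirm` callbacks and the picture dictionary are hypothetical stand-ins for the second device's transport channel and confirmation prompt; splicing is assumed to have already produced `sense_area`.

```python
def transmit_shielded_area(sense_area, send, confirm):
    """Sketch of steps 701-704: package the sensed area as a picture,
    send it to the second device, and store it only if the user confirms.
    `send` and `confirm` are assumed callbacks of the second device."""
    picture = {"format": "png", "pixels": sense_area}  # step 702: picture format
    send(picture)                                      # step 702: transmit
    if not confirm():                                  # step 703: user prompt
        return None                                    # process ends
    return picture                                     # step 704: stored picture
```

Returning `None` on a declined prompt mirrors the flowchart branch in which the process simply ends without storage.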
[129]
FIG. 8 is a flowchart of a method for processing of a secondary display module according to an embodiment of the present application. The secondary display module is used by a second device to perform secondary extraction on screen information obtained from a first device with a light-sensing screen, and redisplay the extracted screen information. The specific steps are as follows.
[130]
In step 801: screen information that has been stored is obtained; the screen information is a picture.
[131]
In step 802: prompt information selected by a user for display is received.
[132]
In step 803: whether to display the picture is determined. If yes, step 804 will be performed; and if not, step 805 will be performed.
[133]
In step 804: the screen information is kept stored as the picture.
[134]
In step 805: a type of information selected by the user for extraction is received.
[135]
In step 806: extraction of the selected information is performed on the screen information.
[136]
In the secondary display module, in addition to directly intercepting the picture, the user may also flexibly select the required information from the picture, such as pure text information or numerical information. The selected information undergoes information recognition and processing, interface re-layout, and content attribute definition, and is then stored and displayed on the second device.
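Selecting "pure text" or "numerical" information from recognized screen text, as in steps 805 and 806, can be sketched with simple pattern extraction. The type names and patterns below are assumptions; a real module would combine OCR with the attribute recognition described above.

```python
import re

def extract(screen_text, info_type):
    """Sketch of steps 805-806: pull the user-selected information type
    out of the recognized screen text (patterns are illustrative)."""
    if info_type == "number":
        # Digit runs, optionally hyphenated (phone numbers, tracking numbers).
        return re.findall(r"\d[\d\-]*\d|\d", screen_text)
    if info_type == "text":
        # Pure alphabetic tokens only.
        return re.findall(r"[A-Za-z]+", screen_text)
    raise ValueError("unknown information type")
```

The extracted items would then be re-laid out and displayed on the second device as described above.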
[137]
FIG. 9 is a flowchart of a method for processing of a processing module according to an embodiment of the present application. Screen information is processed on a second device to facilitate user operations. Further, the processing of screen information may be divided into two categories: viewing and modification. By means of viewing, a telephone number, a QR code, and a website address may be identified, an express bill may be obtained, and the like. By means of modification, an intercepted area may be highlighted; a cursor in text and modified text may be recognized and synchronized to a first device with a light-sensing screen; and the intercepted area may be pasted, deleted, and the like. In this method, security-related functions such as hiding information of a special area may be implemented.
[138]
In step 901: whether a hidden mode is detected is determined. If yes, step 902 will be performed; and if not, the process will be ended.
[139]
In step 902: the screen information to be hidden is confirmed.
[140]
In step 903: the determined screen information is encrypted.
[141]
In step 904: the determined screen information is not displayed, or is displayed in a blurred manner.
[142]
Secure login may also be implemented according to this method. The secure login includes the following steps.
[143]
In step 905: whether a secure login mode is detected is determined. If yes, step 906 will be performed; and otherwise, the process will be ended.
[144]
In step 906: prompt information for inputting a secure password is sent to a user.
[145]
In step 907: the secure password input by the user is received.
[146]
In step 908: the secure password input by the user is verified.
[147]
In step 909: a verification result is displayed to the user.
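The verification in steps 907 to 909 can be sketched as a salted-hash password check. The storage format and function names are assumptions; the embodiment does not specify how the secure password is stored or compared.

```python
import hashlib
import hmac
import os

def register(password):
    """Store a salted hash rather than the plain secure password
    (an assumed storage scheme, not specified by the embodiment)."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).digest()

def verify_login(stored, password):
    """Sketch of steps 907-908: hash the received input with the stored
    salt and compare in constant time; the result is shown in step 909."""
    salt, digest = stored
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, digest)
```

The boolean result corresponds to the verification result displayed to the user in step 909.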
[148]
Interaction scenarios for information according to embodiments of the present application are described below. The information interaction scenarios are mainly divided into three types. The first type is interception and conversion display of text information and picture information. The second type is supplementary input of information and covering for area instructions. The third type is secure encryption of device information.
[149]
The first type: interception and conversion display of text information and picture information
[150]
- Text information is intercepted.
[151]
A second device intercepts character data, such as a telephone number, a QR code, a website address, an address, a tracking number, a zip code, date information, and weather information, that is displayed on a first device with a light-sensing screen.
[152]
Manner 1: the second device shields the first device with a light-sensing screen.
[153]
A user places the screen of the second device face up on the first device with a light-sensing screen, and may shift the second device to read and recognize a wider range of text information. At this time, character data information of the overlapping area of the screens of the two devices may be displayed on the screen of the second device. The attributes of the character information are recognized. Important information, such as a telephone number, is filtered and displayed on the screen of the second device. After the user confirms that the information recognized and displayed by the second device meets the user's requirement, the user performs an operation of tapping the device to complete transmission and storage of the information.
[154]
The specific steps are as follows.
[155]
In step 1: A second device is placed above a first device with a light-sensing screen, and the second device recognizes a relative physical position of a screen of the first device with a light-sensing screen.
[156]
In step 2: on a premise that the first device with a light-sensing screen establishes a secure connection to the second device, the first device with a light-sensing screen sends display information of the overlapping area of its screen to the second device.
[157]
In step 3: the second device stores the obtained display information in itself.
[158]
In step 4: the second device converts the manner of displaying the stored display information, redisplays it to the user in an appropriate format and form, and further associates an operation instruction.
[159]
In step 5: the second device presents the information, or further processes and operates on the information according to the operation instruction input by the user.
[160]
Manner 2: any non-transparent shielding object shields the first device with a light-sensing screen
[161]
The user places a non-transparent shielding object of any shape above the first device with a light-sensing screen, and may shift the non-transparent shielding object to read and recognize a wider range of text information. At this time, character data information of the area of the first device with a light-sensing screen that is shielded by the non-transparent shielding object may be displayed on the screen of the second device, and the attributes of the character information are recognized. Important information, such as a telephone number, is filtered and displayed on the screen of the second device. After the user confirms that the information recognized by the second device and displayed on the second device meets the user's requirement, the user performs an operation of tapping the device to complete transmission and storage of the text information.
[162]
The specific steps are as follows.
[163]
In step 1: Any non-transparent shielding object is placed above a first device with a light-sensing screen, and the first device with a light-sensing screen recognizes a relative physical position and shape of the non-transparent shielding object.
[164]
In step 2: A secure connection is established between the first device with a light-sensing screen and a second device. The two do not need to overlap and touch each other, and the first device with a light-sensing screen transmits display information of an area shielded by the non-transparent shielding object to the second device.
[165]
In step 3: the second device stores in itself the display information obtained from the area shielded by the non-transparent shielding object.
[166]
In step 4: the second device converts the manner of displaying the stored display information, redisplays it to the user in an appropriate format and form, and further associates an operation instruction.
[167]
In step 5: The second device presents the information or further processes and operates the information according to the operation instruction input by the user.
[168]
- Interception is performed on an image.
[169]
Interception of an image refers to a second device directly intercepting, in a screenshot form, information displayed on a first device with a light-sensing screen, and storing the information in a picture format. The user may view screen content of the area where the second device overlaps with the first device with a light-sensing screen. The position of the area where the picture is intercepted is determined. The information is thereby stored as a picture, and the picture is directly displayed, independently of the first device with a light-sensing screen.
[170]
Manner 1: the second device shields the first device with a light-sensing screen.
[171]
In step 1: the second device is placed above the first device with a light-sensing screen, and the second device recognizes a relative physical position of a screen of the first device with a light-sensing screen.
[172]
In step 2: On a premise that the first device with a light-sensing screen establishes a secure connection with the second device, the first device with a light-sensing screen sends display information of an overlapping area of the screen to the second device in a screenshot form.
[173]
In step 3: The second device stores the information of the obtained picture in the second device.
[174]
In step 4: The second device displays the information of the stored picture to a user.
[175]
Manner 2: any non-transparent shielding object shields the first device with a light-sensing screen.
[176]
In step 1: Any non-transparent shielding object is placed above a first device with a light-sensing screen, and the first device with a light-sensing screen recognizes a relative physical position and shape of the non-transparent shielding object.
[177]
In step 2: A secure connection is established between the first device with a light-sensing screen and the second device, the two do not need to overlap and touch each other, and the first device with a light-sensing screen transmits display information of an area shielded by the non-transparent shielding object to the second device in a screenshot form.
[178]
In step 3: The second device stores information of the obtained picture in the second device.
[179]
In step 4: The second device displays the information of the stored picture to a user.
[180]
The second type is supplementary input of information and covering for area instructions.
[181]
Supplementary input of information and covering for area instructions refer to the following: after a second device obtains text information on the first device with a light-sensing screen, according to instructions input by the user on the second device, the second device may implement processing instructions such as supplementary inputting, highlighting, deleting, copying, and storing the information on the first device with a light-sensing screen.
[182]
The specific steps are as follows.
[183]
In step 1: A second device or a non-transparent shielding object is placed above a first device with a light-sensing screen, and the first device with a light-sensing screen recognizes a relative physical position of an area of the screen of the first device at which the screen is blocked.
[184]
In step 2: On a premise that the first device with a light-sensing screen establishes a secure connection with the second device, the first device with a light-sensing screen sends display information of an overlapping area of the screen of the first device to the second device.
[185]
In step 3: the second device prompts a user with processing instructions for the information, such as editing, deletion, highlighting, storage, and copying.
[186]
In step 4: the second device applies the user's instructions to the display information of the first device with a light-sensing screen.
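The instruction handling in steps 3 and 4 can be sketched as a dispatch table on the second device. The instruction names and the string-based model of the display information are illustrative assumptions.

```python
def apply_instruction(screen_text, instruction, payload=None):
    """Sketch of steps 3-4: apply one user-selected processing instruction
    to the display information obtained from the first device."""
    handlers = {
        "edit":      lambda: payload,                    # supplementary input
        "delete":    lambda: "",                         # remove the area
        "highlight": lambda: f"<hl>{screen_text}</hl>",  # mark the area
        "copy":      lambda: screen_text,                # copy for pasting
        "store":     lambda: screen_text,                # keep on the device
    }
    if instruction not in handlers:
        raise ValueError("unsupported instruction")
    return handlers[instruction]()
```

In a full implementation the result would be synchronized back to the first device with a light-sensing screen, as described above.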
[187]
The third type is secure encryption for device information.
[188]
Secure encryption of information refers to the following: on the screen of a first device with a light-sensing screen, the user blocks content that is expected to be securely encrypted, so that intelligent hiding or locking of the content of the shielded area is implemented. On the other hand, the first device with a light-sensing screen is shielded by different devices, so that after user accounts of the different devices successfully match an account on the first device with a light-sensing screen, secure login of different accounts is implemented.
[189]
The specific steps include the following steps:
[190]
In step 1: A second device or a non-transparent shielding object is placed above a first device with a light-sensing screen, and the first device with a light-sensing screen recognizes a relative physical position of an area of the screen of the first device at which the screen is blocked.
[191]
In step 2: The first device with a light-sensing screen performs intelligent encryption, hiding, or locking on content of a shielded area.
[192]
In step 2': by matching information attributes of the shielded area with information stored in the second device, different account identities are recognized.
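The matching in step 2' can be sketched as looking up the shielding device's attributes among the accounts known to the first device. The attribute keys and the dictionary representation are assumptions for illustration.

```python
def recognize_account(device_attributes, known_accounts):
    """Sketch of step 2': match the shielding device's attributes against
    the accounts recorded on the first device; return the matching account
    name, or None for an unknown device (no secure login)."""
    for account, attrs in known_accounts.items():
        # An account matches when all of its recorded attributes appear
        # among the attributes reported by the shielding device.
        if attrs.items() <= device_attributes.items():
            return account
    return None
```

A successful match would then trigger secure login of the corresponding account, as described above.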
[193]
Several specific embodiments are listed below.
[194]
Embodiment 1: direct shielding by an intelligent device is performed.
[195]
FIG. 10 is a schematic diagram of an example 1 in which a first device with a light-sensing screen is blocked according to an embodiment of the present application. As shown in the figure, a second device used by a user is a smart watch with a screen. The first device with a light-sensing screen is directly shielded, and obtained screen information is displayed on the smart watch.
[196]
FIG. 11 is a schematic diagram of an example 2 in which a first device with a light-sensing screen is shielded according to an embodiment of the present application. As shown in the figure, a second device used by a user is a mobile terminal with a screen, and the first device with a light-sensing screen is directly shielded, and obtained screen information is displayed on the mobile terminal.
[197]
FIG. 12 is a schematic diagram of an example 3 in which a first device with a light-sensing screen is shielded according to an embodiment of the present application. As shown in the figure, second devices used by a user are a smart watch, a mobile terminal, and an intelligent tablet device, all of which are equipped with a screen. The first device with a light-sensing screen is a device with a large electronic screen. The first device is directly shielded, and the obtained screen information is displayed on the smart watch, the mobile terminal, and the intelligent tablet device.
[198]
Embodiment 2: blocking is performed by a non-transparent shielding object of any shape.
[199]
A user blocks a first device with a light-sensing screen using any nearby non-transparent shielding object. The obtained information is indirectly displayed on the second device, which is an intelligent device. As shown in FIG. 13, FIG. 14, and FIG. 15: FIG. 13 is a schematic diagram of an example in which a first device with a light-sensing screen is shielded by a paper card and screen information is displayed on an intelligent device according to an embodiment of the present application. FIG. 14 is a schematic diagram of an example in which a first device with a light-sensing screen is shielded by a pen and screen information is displayed on a smart watch according to an embodiment of the present application. FIG. 15 is a schematic diagram of an example in which a first device with a light-sensing screen is shielded by a hand and screen information is displayed on an intelligent device according to an embodiment of the present application.
[200]
Embodiment 3: an embodiment of content recognition and the operation effect after recognition is described.
[201]
For example, a smartphone is used as a second device. FIG. 16 is a schematic diagram of an operation entrance of the smartphone according to an embodiment of the present application.
[202]
Embodiment 4: an embodiment of intercepting and processing text information or pictures is described.
[203]
An existing first device with a light-sensing screen located in a secure environment and a second device located in a secure environment are used in this specific implementation.
[204]
In step 1: the first device with a light-sensing screen and the second device are paired and connected. A secure connection between the first device with a light-sensing screen and the second device is established. The first device with a light-sensing screen allows the second device to obtain information of the first device with a light-sensing screen.
[205]
In step 2: the second device or a non-transparent shielding object is placed above the first device with a light-sensing screen in an overlapping manner. In this step, a user places the second device or the non-transparent shielding object above the first device with a light-sensing screen in an overlapping manner.
[206]
In step 3: when text information is recognized, the user may move the second device or the non-transparent shielding object, and the text information on the first device with a light-sensing screen is read piece by piece. When interception of a picture is performed, the second device obtains a picture of the shielded screen of the first device with a light-sensing screen.
[207]
In step 4: the second device pops up a confirmation interface, and the user finally confirms the operation of obtaining the information. The obtained information is stored or run.
[208]
In step 5: the information is viewed. Because the first device with a light-sensing screen is relatively large and inconvenient to move and carry, when the first device with a light-sensing screen is far from the second device, the user only needs to carry the second device and can directly view the intercepted information from the screen of the second device.
[209]
Example 1
[210]
As shown in FIG. 17, FIG. 17 is a schematic diagram of a process of an example in which text information is intercepted according to an embodiment of the present application. In this example, a second device that recognizes a tracking number is a smart watch, and a first device with a light-sensing screen is a smartphone. Information of a tracking number is displayed on the smartphone of a user. A paired smart watch is placed in an overlapping manner on the area of the smartphone screen where the tracking number is displayed. For a text area that cannot be covered by the smart watch, the user may recognize the text information character by character by shifting the smart watch. The information is finally integrated and displayed on the smart watch. The user may complete the task of fetching an express delivery by carrying only the smart watch, without carrying the smartphone.
[211]
Example 2
[212]
As shown in FIG. 18, FIG. 18 is a schematic diagram of a process of an example in which a picture is recognized and intercepted according to an embodiment of the present application. In the figure, a second device is a smartphone, and a first device with a light-sensing screen is a smart tablet computer. The smart tablet computer has a relatively large screen and is not easy to carry. A user may place the smartphone above the screen of the smart tablet computer and move the smartphone to locate a screen area that needs to be intercepted; the smart tablet computer recognizes the screen area of the smartphone above it. After the user touches the smart tablet computer with the back side of the smartphone, picture areas of a photo or an advertisement needed by the user are directly intercepted. The pictures are transferred to the smartphone and stored there. The user only needs to carry the smartphone to keep and view the pictures.
[213]
Example 3
[214]
As shown in FIG. 19, FIG. 19 is a schematic diagram of a process of an example in which address information is recognized according to an embodiment of the present application. In the figure, a second device is an intelligent mobile terminal, and the first device with a light-sensing screen is the display screen of a television. Public text information is displayed in an advertisement area of the display screen of the television. A user holding the second device, that is, the intelligent mobile terminal, may quickly obtain contact information by tapping the area in which an address is displayed on the display screen of the television, and a specific position may be searched for in a map of the intelligent mobile terminal. Thereby, the objective of quick viewing is achieved.
[215]
Example 4
[216]
As shown in FIG. 20, FIG. 20 is a schematic diagram of a process of an example in which a QR code is recognized according to an embodiment of the present application. In the figure, a second device is an intelligent terminal device, and a first device with a light-sensing screen is a smartphone or a tablet computer. The information of a QR code is displayed on the screen of the smartphone or the tablet computer. The second device is used to scan the area in which the QR code is located. The information of the QR code may be read, and a link associated with the QR code may be opened. For example, a page for account transfer and payment may be directly opened, so that the efficiency of payment is improved.
[217]
Example 5
[218]
As shown in FIG. 21, FIG. 21 is a schematic diagram of a process of an example in which website address information is recognized according to an embodiment of the present application. In the figure, a second device is a smartphone, and a first device with a light-sensing screen is a computer monitor. Text, pictures, and website address information are displayed on the screen of the computer monitor. A user holds the smartphone to cover the website address information on the screen of the computer monitor. The smartphone may obtain the website address information, and the website may be directly opened in the smartphone or stored as text.
[219]
Example 6
[220]
As shown in FIG. 22, FIG. 22 is a schematic diagram of a process of an example in which telephone number information is recognized according to an embodiment of the present application. In the figure, a second device is a smartphone, and a first device with a light-sensing screen is a computer monitor. Content including a telephone number is displayed on the screen of the computer monitor. The user holds the second device to cover or scan the area of the computer monitor screen on which the telephone number is displayed. The second device obtains the telephone number in the covered or scanned area, and prompts the user whether to save it as a telephone number or to make a call.
[221]
Embodiment 5: an embodiment of supplementary input and instruction processing of text information is described.
[222]
In this embodiment, a user touches different information areas of a first device with a light-sensing screen using a second device, and the second device prompts different information to implement different information processing functions. The specific implementation process includes the following steps.
[223]
In step 1: a first device with a light-sensing screen and a second device are paired and connected. A secure connection between the first device with a light-sensing screen and the second device is established. The first device with a light-sensing screen allows the second device to obtain and process information of the first device with a light-sensing screen.
[224]
In step 2: the second device or a non-transparent shielding object is placed above the first device with a light-sensing screen in an overlapping manner. In this step, a user places the second device or the non-transparent shielding object above the first device with a light-sensing screen in an overlapping manner.
[225]
In step 3: when text information is recognized, the user may move the second device or the non-transparent shielding object, and the text information on the first device with a light-sensing screen is read piece by piece. When a picture is intercepted, the second device obtains a picture of the shielded screen of the first device with a light-sensing screen.
[226]
In step 4: The user confirms an operation of obtaining the information.
[227]
In step 5: the second device obtains the information on the first device with a light-sensing screen and pops up an operation instruction interface. The user performs a corresponding instruction operation of editing the information, to implement modification of the information on the first device with a light-sensing screen. The modified information may be stored in the first device with a light-sensing screen.
[228]
This method can solve the problem that input is inconvenient because the first device with a light-sensing screen is relatively large. The user may quickly process the information of the first device with a light-sensing screen through the second device.
[229]
Example 1
[230]
As shown in FIG. 23, FIG. 23 is a schematic diagram of a process of an example in which a cursor is recognized and text is edited according to an embodiment of the present application. As shown in the figure, a first device with a light-sensing screen is an intelligent device with a large electronic screen. The screen of the first device with a light-sensing screen has a relatively large area and is inconvenient for input. A second device is a smartphone held by a user. After the large electronic screen is lightly scanned by the smartphone, partial text information on the large electronic screen is obtained and the position of the cursor is recognized; information on the large electronic screen may then be quickly input and edited through the smartphone. Editing also includes deleting text. Specifically, the cursor on the large electronic screen is covered by the smartphone, and a prompt asking whether text needs to be quickly inserted is displayed on the smartphone. The user inputs text on the smartphone, and the input text is inserted into the screen of the first device with a light-sensing screen.
[231]
Example 2
[232]
As shown in FIG. 24, FIG. 24 is a schematic diagram of a process of an example in which a highlighted area in text is edited according to an embodiment of the present application. As shown in the figure, a first device with a light-sensing screen is an intelligent device with a large electronic screen. The screen of the first device with a light-sensing screen has a relatively large area and is inconvenient for input. A second device is a smartphone held by a user. After the smartphone lightly scans the large electronic screen, the area covered by the smartphone is automatically highlighted.
[233]
Example 3
[234]
As shown in FIG. 25, FIG. 25 is a schematic diagram of a process of an example in which information is copied and pasted according to an embodiment of the present application. As shown in the figure, a first device with a light-sensing screen is an intelligent device with a large electronic screen. The screen of the first device with a light-sensing screen has a relatively large area and is inconvenient for input. A second device is a smartphone held by a user. The large electronic screen is lightly scanned by the smartphone, and the area covered by the screen of the smartphone is locked as the picture area to be copied. When the smartphone shields other areas of the large electronic screen or other devices again, the copied picture area may be selected for pasting.
[235]
Example 4
[236]
As shown in FIG. 26, FIG. 26 is a schematic diagram of a process of an example in which area information is deleted according to an embodiment of the present application. As shown in the figure, a first device with a light-sensing screen is an intelligent device with a large electronic screen. The screen of the first device with a light-sensing screen has a relatively large area and is inconvenient for input. A second device is a smartphone held by a user. The user holds the smartphone and lightly scans the large electronic screen with it. The area covered by the smartphone screen is locked as a picture area to be deleted, and the user deletes the content of the area after confirmation.
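The deletion example differs from copy-and-paste mainly in requiring user confirmation before the locked area is cleared. A minimal sketch, again using an assumed grid model and a hypothetical `delete_region` helper:

```python
# Hypothetical sketch: the area locked under the phone is cleared
# only after the user confirms the deletion.

def delete_region(grid, top, left, height, width, confirmed):
    """Clear the locked area once the user confirms the deletion."""
    if confirmed:
        for r in range(top, top + height):
            for c in range(left, left + width):
                grid[r][c] = None  # None marks a deleted pixel
    return grid

picture = [[1, 1], [1, 1]]
delete_region(picture, 0, 0, 1, 2, confirmed=True)
print(picture)  # → [[None, None], [1, 1]]
```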
[237]
Embodiment 6: a method for encryption and interaction is implemented for information security
[238]
A user touches a screen area of a first device with a light-sensing screen with a second device or a non-transparent shielding object, and the information of the screen area is locked or hidden. The first device with a light-sensing screen recognizes an information attribute of the second device, and performs login for an account.
[239]
The specific implementation of the encryption process includes the following steps.
[240]
In step 1: A non-transparent shielding object is placed on a screen of the first device with a light-sensing screen.
[241]
In step 2: The user moves the non-transparent shielding object to shield the information that the user expects to encrypt.
[242]
In step 3: The user confirms an operation of encryption or locking.
[243]
In step 4: Partial information of the first device with a light-sensing screen is hidden or locked.
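Steps 1 to 4 above can be sketched as follows. The masking scheme (replacing the shielded span with `*`) and the function name `lock_region` are illustrative assumptions; the original disclosure does not specify how hidden information is rendered.

```python
# Hypothetical sketch of the encryption steps: the shielded span is
# hidden only after the user confirms the locking operation (step 3).

def lock_region(text, start, end, confirmed):
    """Hide the span [start, end) once the user confirms encryption."""
    if not confirmed:  # step 3: no change without user confirmation
        return text
    hidden = "*" * (end - start)
    # Step 4: the partial information in the shielded area is hidden.
    return text[:start] + hidden + text[end:]

print(lock_region("account: 123456", 9, 15, confirmed=True))
# → "account: ******"
```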
[244]
The specific implementation of secure login for an account includes the following steps.
[245]
In step 1: A first device with a light-sensing screen records different account identities. A plurality of different devices have established secure connections with the first device with a light-sensing screen.
[246]
In step 2: A second device which is securely connected is placed on the first device with a light-sensing screen in an overlapping manner.
[247]
In step 3: A user may move the second device or a non-transparent shielding object to match the password display area on the first device with a light-sensing screen with the password information input on the second device.
[248]
In step 4: The user confirms that the user account has been matched successfully.
[249]
In step 5: Login to the first device with a light-sensing screen with a personal secure account is successfully implemented.
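The account-matching flow of steps 1 to 5 can be sketched as follows. The account table, the hash-based comparison, and the names `FirstDevice` and `try_login` are assumptions for illustration; the original disclosure only states that the password input on the second device must be consistent with the record on the first device.

```python
# Hypothetical sketch of the secure-login steps: the first device
# stores password digests per account (step 1) and verifies the
# password typed on the second device (steps 3-5).

import hashlib

def _digest(password):
    return hashlib.sha256(password.encode()).hexdigest()

class FirstDevice:
    """Records account identities for securely connected devices."""

    def __init__(self, accounts):
        # accounts: mapping of account name -> plaintext password,
        # stored here only as a digest.
        self.accounts = {name: _digest(pw) for name, pw in accounts.items()}

    def try_login(self, account, typed_password):
        # Compare the password typed on the second device with the
        # record for the shielded login-entrance area.
        stored = self.accounts.get(account)
        return stored is not None and stored == _digest(typed_password)

tv = FirstDevice({"alice": "s3cret"})
print(tv.try_login("alice", "s3cret"))  # → True
print(tv.try_login("alice", "wrong"))   # → False
```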
[250]
Example 1
[251]
As shown in FIG. 27, FIG. 27 is a schematic diagram of a process of an example of secure screen projection according to an embodiment of the present application. In the figure, a first device with a light-sensing screen is a mobile phone. A user projects content of the mobile phone onto a television screen. The notification messages on the user's personal mobile phone are private, and the user does not want them to be projected. At this time, the user shields the notification message area of the mobile phone, and that area is not projected.
[252]
Example 2
[253]
As shown in FIG. 28, FIG. 28 is a schematic diagram of a process of an example in which private content is encrypted according to an embodiment of the present application. In the figure, a first device with a light-sensing screen is a mobile phone. A user may directly shield an information area that the user wishes to hide, so that the content of the shielded area is hidden or blurred. Encryption is thus implemented, so that the content is not seen by other people.
[254]
Example 3
[255]
As shown in FIG. 29, FIG. 29 is a schematic diagram of a process of an example of securely logging in to a personal account according to an embodiment of the present application. In the figure, a first device with a light-sensing screen is a television, showing the existing login entrances of different accounts. Different users may use their own mobile phone devices to shield their respective account login entrance areas. When a password input by a user on a personal mobile phone device is consistent with that recorded in the television, secure login is implemented.
[256]
The method, system and apparatus provided in the embodiments of the present application implement particularly natural information transmission and information processing between a large screen and a small screen. The information processing includes text recognition and picture interception, processing of text information and picture information, secure encryption of text information or picture information, secure login for an account, and the like. The interaction manner of information encryption is an interaction implemented by a particularly natural touch between devices. The user only needs to lightly place a device that is convenient to move and carry on the screen of another device that is inconvenient to move, so that the user may quickly obtain the partial fragmented information desired by the user. Information on the large screen may be processed by the small screen, secure encryption of information is implemented, and a quick response is implemented for information exchanged between the two devices. Great convenience is thereby brought to people's lives.
[257]
The foregoing preferred embodiments further describe, in detail, the objectives, technical solutions, and advantages of the present application. It should be understood that the foregoing descriptions are merely preferred embodiments of the present application, and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application should fall within the protection scope of the present application.

Claims

[Claim 1]
An interaction method for information, comprising: recognizing, by a first device with a light-sensing screen, a non-transparent shielding object or a second device; and transmitting, by the first device with a light-sensing screen, shielded screen information to the second device for processing.
[Claim 2]
The method according to claim 1, further comprising: establishing, by the first device with a light-sensing screen, a communication connection with the second device before recognizing the non-transparent shielding object or the second device.
[Claim 3]
The method according to claim 1, wherein, the screen information is data, text, or a picture; and the method further comprises: when the screen information is a picture, intercepting a covered picture, and then performing splicing processing on the covered picture by the second device.
[Claim 4]
The method according to claim 2, wherein, processing performed by the second device comprises: performing addition, insertion, deletion, or an operation according to a pre-set operation instruction on the screen information, and sending the processed screen information to the first device with a light-sensing screen for storage or/and for displaying in the shielded area.
[Claim 5]
The method according to claim 2, further comprising: not displaying the screen information or displaying the screen information in a fuzzy manner in a shielded area by the first device with a light-sensing screen after the screen information is encrypted by the second device.
[Claim 6]
The method according to claim 2, wherein, the screen information is a secure login interface, and processing performed by the second device comprises: filling a secure account in the secure login interface, and sending the secure account to the first device with a light-sensing screen for verification and displaying in the shielded area.
[Claim 7]
An interaction system for information, the system comprises a first device with a light-sensing screen and a second device, wherein the first device with a light-sensing screen is configured to: recognize a non-transparent shielding object or the second device, and transmit shielded screen information to the second device; and the second device is configured to process the screen information.
[Claim 8]
The system according to claim 7, wherein, the first device with a light-sensing screen is further configured to establish a connection with the second device.
[Claim 9]
The system according to claim 8, wherein, the second device is further configured to: perform addition, insertion, deletion, or an operation according to a pre-set operation instruction on the screen information, and then send the processed screen information to the first device with a light-sensing screen for storage or/and for displaying in a shielded area; and the first device with a light-sensing screen is further configured to: after receiving the processed screen information, store or/and display the processed information in the shielded area.
[Claim 10]
The system according to claim 7, wherein, the screen information is data, text, or a picture; and when the screen information is a picture, the second device is further configured to: intercept a covered picture, and then perform splicing processing on the covered picture.
[Claim 11]
The system according to claim 8, wherein, the second device is further configured to: perform encryption processing on the screen information, and send the processed information to the first device with a light-sensing screen; and the first device with a light-sensing screen is further configured to: not display the screen information or display the screen information in a fuzzy manner in a shielded area.
[Claim 12]
The system according to claim 8, wherein, the screen information is a secure login interface; the second device is further configured to: fill a secure account in the secure login interface, and then send the secure account to the first device with a light-sensing screen for verification and displaying in a shielded area; and the first device with a light-sensing screen is further configured to: verify the received secure account and display the account in the shielded area.
[Claim 13]
An interaction device for information, comprising: a recognition module and a transmission module, wherein the recognition module is configured to recognize a non-transparent shielding object or a second device; and the transmission module is configured to transmit shielded screen information to the second device.
[Claim 14]
The device according to claim 13, further comprising a connection module, configured to establish a connection with the second device.
[Claim 15]
The device according to claim 14, wherein, the transmission module is further configured to: receive processed screen information on which addition, insertion, deletion, or an operation according to a pre-set operation instruction are performed by the second device, and store the processed information or/and display the processed information by a display module in a shielded area.
[Claim 16]
The device according to claim 14, wherein, the transmission module is further configured to receive encrypted screen information sent by the second device; the device further comprises a display module; and the display module is configured to not display the screen information or to display the screen information in a fuzzy manner in a shielded area.
[Claim 17]
The device according to claim 14, wherein, the transmission module is further configured to receive a secure account sent by the second device, and display the secure account in a shielded area for verification, and the screen information of the shielded area is about a secure login interface.
[Claim 18]
An interaction device for information, comprising: a secondary display module and a processing module, wherein, the secondary display module is configured to display shielded screen information sent by a first device with a light-sensing screen; and the processing module is configured to process the shielded screen information sent by the first device with a light-sensing screen.
[Claim 19]
The device according to claim 18, wherein, the screen information is a picture, text, or data; and when the screen information is a picture, the device further comprises a scanning module, configured to: perform mobile scanning on a screen shielding object of the first device with a light-sensing screen, and perform splicing processing on scanned information of the shielding object, to obtain screen information.
[Claim 20]
The device according to claim 18, further comprising a connection module, configured to establish a connection with the first device with a light-sensing screen.
[Claim 21]
The device according to claim 20, wherein, the processing module is further configured to: perform addition, insertion, deletion, or an operation according to a pre-set operation instruction on the screen information, and then send the processed screen information to the first device with a light-sensing screen; or the processing module is further configured to: perform encryption processing on the screen information, and send the processed screen information to the first device with a light-sensing screen; or when the screen information is a secure login interface, the processing module is further configured to: fill a secure account in the secure login interface, and then send the secure account to the first device with a light-sensing screen.

Drawings

[ Fig. 1]

[ Fig. 2]

[ Fig. 3]

[ Fig. 4]

[ Fig. 5]

[ Fig. 6]

[ Fig. 7]

[ Fig. 8]

[ Fig. 9]

[ Fig. 10]

[ Fig. 11]

[ Fig. 12]

[ Fig. 13]

[ Fig. 14]

[ Fig. 15]

[ Fig. 16]

[ Fig. 17]

[ Fig. 18]

[ Fig. 19]

[ Fig. 20]

[ Fig. 21]

[ Fig. 22]

[ Fig. 23]

[ Fig. 24]

[ Fig. 25]

[ Fig. 26]

[ Fig. 27]

[ Fig. 28]

[ Fig. 29]