(WO2018219884) MICROPROCESSOR INCLUDING A MODEL OF AN ENTERPRISE

IN THE CLAIMS:

1. A domain module computation unit 100 characterized by: a single board computer;

a central processing unit (CPU) 101 in communication with both a first bus 110 and with a second bus 111, wherein all communication between the first bus 110 and the second bus 111 is through the CPU 101;

the first bus 110 in communication with a plurality of internal modules, said internal modules including:

a kernel non-volatile memory 102;

a working non-volatile memory 106;

a random access memory 108; and

an encryption / decryption unit 109; and

the second bus 111 in communication with an input / output (I/O) unit 113 effective to communicate with devices external to the single board computer.

2. The domain module computation unit 100 of claim 1 characterized in that the working non-volatile memory 106 includes one or more domain models 107, each domain model 107 including a plurality of nodes 10, 16 linked together by one or more paths all converging on one or more goals, wherein the goals of each domain model 107 are different.

3. The domain module computation unit 100 of claim 2 characterized in that the links 12, 18 between nodes 10, 16 contain cause and effect logic 14.
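
By way of a non-limiting illustration (not part of the claims), the node and link structure recited in claims 2 and 3 could be sketched as follows; all class, field and function names are assumptions introduced for this sketch only:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Node:
    """A model node (10, 16): carries an attribute that is propagated."""
    name: str
    attribute: float = 0.0
    is_goal: bool = False  # True for a goal node that paths converge on


@dataclass
class Link:
    """A directed link (12, 18) carrying cause-and-effect logic (14)."""
    source: Node
    target: Node
    effect: Callable[[float], float]  # maps source attribute to target attribute

    def propagate(self) -> None:
        self.target.attribute = self.effect(self.source.attribute)


@dataclass
class DomainModel:
    """A domain model (107): nodes linked by paths converging on goals."""
    nodes: List[Node] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)

    def goals(self) -> List[Node]:
        return [n for n in self.nodes if n.is_goal]
```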

4. The domain module computation unit 100 of claim 3 characterized in that the kernel non-volatile memory 102 includes a microkernel 104 and an operating system 103 effective to communicate 105 with and instruct a domain model 107.

5. The domain module computation unit 100 of claim 4 characterized in that the I/O unit 113 communicates with an external server.

6. The domain module computation unit 100 of claim 5 characterized in that the external server is effective to send upgrade / update instructions to the model kernel 104 and to at least one domain model 107.

7. The domain module computation unit 100 of claim 6 characterized in that the upgrade / update instructions are received in encrypted format and directed to the encryption / decryption unit 109 for decryption.

8. The domain module computation unit 100 of claim 5 characterized in that the random access memory 108 is effective to support the one or more domain models 107 in the event communication with devices external to the single board computer is lost.

9. A system containing a model of an enterprise 198, characterized by: a plurality of domain computation units 201, each having a single board computer 100, a central processing unit (CPU) 101 in communication with both a first bus 110 and with a second bus 111, wherein all communication between the first bus and the second bus is through the CPU 101, the first bus 110 in communication with a plurality of internal modules, said internal modules including: (a) a kernel non-volatile memory 102, (b) a working non-volatile memory 106, (c) a random access memory 108 and (d) an encryption / decryption unit 109, and the second bus 111 in communication with an input / output (I/O) unit 113 effective to communicate with devices external to the single board computer; and

a third bus 202 enabling the plurality of domain computation units 201 to communicate with each other.

10. The system of claim 9 characterized in that the model of the enterprise 198 is split into a plurality of domain models that are distributed among the plurality of domain computation units 201.

11. The system of claim 10 characterized in that each one of said domain models includes a plurality of nodes A1 - A6 linked together by one or more paths all converging on one or more goals G.
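
As a non-limiting illustration of claims 9-11, the sketch below shows one way an enterprise model might be split into named domain models, distributed among domain computation units, and connected over a shared third bus; the round-robin assignment and all identifiers are assumptions of the sketch, not requirements of the claims:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DomainComputationUnit:
    """A domain computation unit (201) holding its share of the model."""
    unit_id: int
    domain_models: List[str] = field(default_factory=list)

    def receive(self, message: str) -> None:
        print(f"unit {self.unit_id} received: {message}")


@dataclass
class ThirdBus:
    """The third bus (202) letting units communicate with each other."""
    units: List[DomainComputationUnit]

    def broadcast(self, sender_id: int, message: str) -> None:
        for unit in self.units:
            if unit.unit_id != sender_id:
                unit.receive(message)


def distribute(enterprise_model: List[str],
               units: List[DomainComputationUnit]) -> None:
    """Split the enterprise model (198) across units, round-robin."""
    for i, model_name in enumerate(enterprise_model):
        units[i % len(units)].domain_models.append(model_name)
```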

12. The system of claim 11 characterized in that the third bus 202 communicates 203 with an external server.

13. The system of claim 12 characterized in that the external server sends encrypted instructions to one or more of the domain computation units 201 and those instructions are decrypted in that domain computation unit's encryption / decryption unit 109.

14. The system of claim 13 characterized in that the random access memory 108 of each domain computation unit 201 is effective to support the one or more domain models 107 in the domain computation unit 201 in the event communication 203 with the server or with others of the domain computation units 201 is lost.

15. The system of claim 13 characterized in that each single board computer 100 is contained within a different device.

16. The system of claim 15 characterized in that the different devices are selected from the group consisting of smart phones, tablets and similar computational devices.

17. The system of claim 16 characterized in that, in the event of an emergency, the server sends instructions to update the function of select domain computation units 201 and coordinates the function of each device.

18. The system of claim 15 characterized in that the same domain model 107 is located on different domain model computation units 100 to provide redundancy.

19. The system of claim 15 characterized in that the third bus 202 is a wireless communication network.

20. A single board computational unit 100 for executing software code modeled in a form embedding data and software instructions in a single model, the single board computational unit characterized by:

a central processing unit 101 configured to process data according to an at least one layer of abstraction 210 model, said model including nodes 215 associated with context 217, each node 215 being connected to at least one other node at a same layer of abstraction, wherein the nodes represent status and state of processes;

a kernel non-volatile memory 102 for storing and limiting access to the central processing unit 101, said kernel 102 having instructions for interpreting the at least one layer of abstraction 210 model and instructions for synchronizing the at least one layer of abstraction model 210 with a version of the model stored at a remote device;

an encryption and decryption unit 109 for encrypting and decrypting data exchanged between the single board computational unit 100 and the remote device; and

a power management unit 116 configured to inform the central processing unit 101 of power status.

21. The single board computational unit 100 of claim 20 characterized in that the power management unit 116 dynamically implements power management priorities, such that these priorities are a safety mechanism for preserving battery capacity or, in cases of battery capacity below a threshold, ensuring that the computational unit will shut down in a consistent and safe manner, and where the power management priorities are implemented according to at least one of:

a remaining available energy of the computational unit;

an amount of processing power and work needed for an operation or series of operations; and

an enterprise priority of operations of processing by the processor 101.
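
A minimal sketch of the power management priorities of claim 21, assuming hypothetical names, an ordering rule and a shut-down threshold that the claim itself does not specify:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Operation:
    name: str
    required_energy: float      # processing power and work needed
    enterprise_priority: float  # higher means more important to the enterprise


def schedule_operations(remaining_energy: float,
                        shutdown_threshold: float,
                        pending: List[Operation]) -> List[Operation]:
    """Order pending operations under the power management priorities."""
    if remaining_energy < shutdown_threshold:
        # Safety mechanism: schedule nothing so the unit can shut down
        # in a consistent and safe manner.
        return []
    # Prefer high enterprise priority, then cheaper operations.
    ranked = sorted(pending,
                    key=lambda op: (-op.enterprise_priority, op.required_energy))
    scheduled, budget = [], remaining_energy - shutdown_threshold
    for op in ranked:
        if op.required_energy <= budget:
            scheduled.append(op)
            budget -= op.required_energy
    return scheduled
```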

22. The single board computational unit 100 of claim 21 characterized in that a dynamic processing performance management facility manages the run time system prioritization dynamically to provide optimal system processing performance, wherein the dynamic processing performance management facility maintains system processing performance, especially in emergencies or high enterprise risk use, according to at least one of:

an available processing power of the processor 101;

an amount of processing power and work needed for an operation or series of operations; and

an enterprise priority of operations of the processing by the processor 101.

23. The single board computational unit 100 of claim 22 characterized in that the enterprise priority of operations is determined by goal proximity of each prospective processing cluster 220, 230 to the goals G based on the enterprise situation at hand.

24. The single board computational unit 100 of claim 22 characterized in that the enterprise priority of operations is determined by an expected processing capacity that will be expended on a prospective node cluster 220, 230 to be processed.

25. The single board computational unit 100 of claim 21 characterized in that the single model of that single board computational unit 100 forms a component of an enterprise model formed by a grid of interconnected single board computational units, said enterprise model being configured as an at least one layer of abstraction model 210.

26. The single board computational unit 100 of claim 22 characterized in that the kernel non-volatile memory 102 includes implementations of nodes 10, 16, links 12, 18 and propagation models 14 in central processing unit 101 instructions, and where the at least one layer of abstraction model 210 is configured to call these implementations for executing the enterprise model.

27. The single board computational unit 100 of claim 26 characterized in that the kernel non-volatile memory 102 stores a plurality of operating systems, with each operating system associated with a different central processing unit 101.

28. The single board computational unit 100 of claim 26 characterized in that the working non-volatile memory 106 stores a plurality of multi-layer abstraction partial models 210 and the plurality of partial models are capable of being executed in parallel.

29. The single board computational unit 100 of claim 25 characterized in that the central processing unit 101 selects the sequence of nodes 225 on a layer 220 of the at least one layer of abstraction model 210 to be executed first based on a prioritization scheme.

30. The single board computational unit 100 of claim 26 characterized in that the dynamic processing prioritization is based on at least one of:

an available processing power of the processor 101;

an amount of processing power and work needed for an operation or series of operations; and

an enterprise priority of operations of the processing by the processor 101.

31. The single board computational unit 100 of claim 30 characterized in that an enterprise priority of operations of processing is determined by goal proximity of each prospective processing cluster A1 - A6 to the goals G based on the enterprise situation at hand.

32. The single board computational unit 100 of claim 30 characterized in that an enterprise priority of operations of processing is determined by an expected processing capacity that will be expended on the prospective node cluster A1 - A6 to be processed.
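
The prioritization recited in claims 29-32 might, purely as an illustration, be sketched as below; the scoring rule and field names are assumptions, not taken from the claims:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class NodeCluster:
    """A prospective processing cluster (e.g. A1 - A6) on one layer."""
    name: str
    stages_to_goal: int    # propagation stages from this cluster to goal G
    expected_cost: float   # processing capacity expected to be expended


def execution_order(clusters: List[NodeCluster],
                    available_power: float) -> List[NodeCluster]:
    """Select the sequence of clusters to be executed first."""
    affordable = [c for c in clusters if c.expected_cost <= available_power]
    # Closer goal proximity first; cheaper clusters break ties.
    return sorted(affordable, key=lambda c: (c.stages_to_goal, c.expected_cost))
```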

33. A grid 198 of computational units 201 configured to execute software code modeled in a form embedding data and software instructions in a single model, the grid 198 of computational units 201 characterized by a plurality of computational units, where each computational unit is characterized by:

a central processing unit 101 for processing data according to an at least one layer of abstraction model 210, the at least one layer of abstraction model 210 having nodes 215 with status and state parameters, where each node 215 is connected to at least one additional node at the same layer of abstraction 210, the nodes 215 configured to propagate to high level goals G with status and state parameters on the same layer of abstraction 210, nodes 215 in the same layer of abstraction associated with nodes 225, 238 with process status and state parameters on a lower level of abstraction 220, 230 or associated with nodes 215 defined by context at a higher level of abstraction 210;

a kernel non-volatile memory 102 for storing and limiting access only to the central processing unit 101, instructions configured to interpret the at least one layer of abstraction 210 model, and instructions configured to synchronize the at least one layer of abstraction 210 model with an enterprise model stored on a remote device;

a working non-volatile memory 106 for storing the at least one layer of abstraction model 210;

a random access memory 108 for storing data and instructions at runtime;

an encryption and decryption unit 109 for encrypting and decrypting data exchanged with external computational units;

an input-output unit 113 for exchanging data with external computational units; and

a power management unit 116 for managing power and for informing the central processing unit 101 of power status.

34. The grid 198 of computational units 201 of claim 33 characterized in that the kernel 104 stored in the kernel non-volatile memory 102 includes implementations of nodes 10, 16, links 12, 18, propagation models 14 and central processing unit 101 instructions, wherein the at least one layer of abstraction model 210 calls these implementations for executing the model.

35. The grid 198 of computational units 201 of claim 34 characterized in that the kernel non-volatile memory 102 stores a plurality of operating systems 103, each said operating system 103 being associated with a different central processing unit 101.

36. The grid 198 of computational units 201 of claim 34 characterized in that the working non-volatile memory 106 stores a plurality of the at least one layer of abstraction 210 partial models and where the plurality of partial models are executed in parallel.

37. A method to provide propagation traceability in a model stored in one or multiple databases, characterized by:

providing a plurality of first nodes 10, each said first node 10 having a respective attribute and a respective parameter state; and

providing a plurality of first links 12, 18 interconnecting said plurality of first nodes 10 in a source to target relationship to form a first node cluster, each said first link 12, 18 containing software code 14 effective to change the respective attribute of a target node 16.
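
As a non-limiting sketch of the propagation traceability of claims 37-40, under assumed names, each link applies its software code to the target node and every change is recorded so that propagating nodes and links can later be displayed at a user interface:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class TracedNode:
    """A first node (10) with a respective attribute and parameter state."""
    name: str
    attribute: float
    state: str = "idle"


@dataclass
class TracedLink:
    """A first link (12, 18) whose software code (14) changes the target."""
    source: TracedNode
    target: TracedNode
    code: Callable[[float], float]


def propagate_with_trace(links: List[TracedLink]) -> List[Tuple[str, str, float]]:
    """Apply each link source-to-target once and record the propagation."""
    trace = []
    for link in links:
        new_value = link.code(link.source.attribute)
        link.target.attribute = new_value
        link.target.state = "propagated"
        trace.append((link.source.name, link.target.name, new_value))
    return trace
```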

38. The method of claim 37 characterized in that each of the plurality of first nodes 10 retrieves an initial respective attribute directly from a location designated by the node 10 and its state, without any other location instructions to retrieve said initial respective attribute.

39. The method of claim 37 characterized in that the links 12, 18 are unidirectional and both the first node 10 attribute and the first node 10 state parameters are part of the model, thereby providing propagation traceability, and are configured to be displayed at a user interface.

40. The method of claim 39 characterized in that first nodes 10 that have a change in either attribute or state parameters are displayed differently, as propagating nodes 10 and logical links 12, 18, at the user interface.

41. The method of claim 38 characterized in that select ones of the first nodes A1, A2, A3, A4 define a first path to a goal G.

42. The method of claim 41 characterized in that other select ones of the first nodes A1, A5 define a second path to the goal G.

43. The method of claim 42 characterized in that goal proximity of a node cluster A1, A2, A3, A4 starting from a first node A4 is dependent on a number of stages of propagation to the goal G and a comparative effect of propagation of said nodes A1, A2, A3, A4 to other nodes A1, A5 propagating to the same goal G.
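
Purely as an illustration of claim 43 (the formula below is an assumption, not taken from the claims), goal proximity could combine the number of propagation stages to goal G with the comparative effect of a path relative to the other paths reaching the same goal:

```python
from typing import List


def goal_proximity(stages_to_goal: int,
                   path_effect: float,
                   effects_of_all_paths: List[float]) -> float:
    """Higher score means closer to the goal G (illustrative formula)."""
    total_effect = sum(effects_of_all_paths) or 1.0
    comparative_effect = path_effect / total_effect   # share of effect on G
    return comparative_effect / (1 + stages_to_goal)  # fewer stages => closer
```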

44. The method of claim 43 characterized in that the first links 12, 18 contain cause and effect software code 14.

45. The method of claim 43 characterized by providing a plurality of second nodes A1, A5 interconnected by a plurality of second links to form a second node cluster, wherein the second node cluster is interconnected to the first node cluster by at least one third link.

46. A non-transitory computer program product that causes a computational unit to execute software code modeled in a form embedding data and software instructions in a single model, the non-transitory computer program product configured to:

cause a central processing unit 101 to process data according to an at least one layer of abstraction model 200, which model includes nodes 215 associated with context 217, wherein each node 215 is connected to at least one additional node at the same layer of abstraction 200 or to other layers of abstraction 220, 230, which nodes 215 represent process status and state, and which nodes 215 are used to associate high level goals with a plurality of levels of process status and state, and where the at least one layer of abstraction model 200 is partitioned into at least a first partial-model 198 stored at a first computational unit and a second partial-model 198 stored at a second computational unit or server;

cause a kernel non-volatile memory 102 to give the central processing unit 101 access to the kernel's content 104, where the kernel's content 104 includes an operating system 103, instructions for interpreting the multi-layer abstraction model, and instructions for synchronizing the multi-layer abstraction model with a version of the model stored at a server or at a third computational unit;

cause a working non-volatile memory 106 to store the multi-layer abstraction model 200;

cause a random access memory 108 to store data and instructions at runtime;

cause an encryption and decryption unit 109 to encrypt and decrypt data exchanged 203 with external computational units;

cause an input-output unit 113 to exchange data with external computational units; and

cause a power management unit 116 to manage power and to inform the central processing unit 101 of power status.

47. The non-transitory computer program product of claim 46 characterized in that the kernel non-volatile memory 102 includes implementations of nodes 10, 16, links 12, 18, and propagation models 14 in the central processing unit 101 instructions, and where the at least one layer of abstraction model 200 calls these implementations for executing the model 198.

48. The non-transitory computer program product of claim 46 characterized in that the kernel non-volatile memory 102 stores a plurality of operating systems 103 and each operating system 103 is associated with a different central processing unit 101.

49. The non-transitory computer program product of claim 46 characterized in that the working non-volatile memory 106 stores a plurality of the at least one layer of abstraction partial models 200 and the plurality of partial models are executed in parallel.

50. The non-transitory computer program product of claim 46 characterized in that the power management unit 116 dynamically implements power management priorities that function as a safety mechanism for preserving battery capacity or, in cases of battery capacity below a threshold, ensuring that the computational unit will shut down in a consistent and safe manner, and where the power management priorities are implemented according to at least one of:

a remaining available energy of the computational unit;

an amount of processing power and work needed for an operation or series of operations; and

an enterprise priority of operations of processing by the processor 101.

51. The non-transitory computer program product of claim 46 characterized in that a dynamic processing performance management facility manages the run time system prioritization dynamically to provide optimal system processing performance, wherein the dynamic processing performance management facility maintains system processing performance, especially in emergencies or high enterprise risk use, by prioritizing at least one of:

an available processing power of the processor 101;

an amount of processing power and work needed for an operation or series of operations; and

an enterprise priority of operations of the processing by the processor 101.

52. The non-transitory computer program of claim 50 characterized in that the enterprise priority of operations of processing by the processor 101 is determined by the goal proximity of each prospective processing cluster A1, A2, A3, A4 to the goals G based on the enterprise situation at hand.

53. The non-transitory computer program of claim 51 characterized in that the enterprise priority of operations of processing by the processor 101 is determined by the expected processing capacity that will be expended on the prospective node cluster A1, A2, A3, A4 to be processed.

54. A more than one level of abstraction system 200 characterized by:

a plurality of first nodes 238, each said first node 238 having a respective attribute and a respective state, and a plurality of first links interconnecting said plurality of first nodes 238 in a source to target relationship to form a first node cluster, each said first link containing software code effective to change the respective attribute of a target node;

a plurality of second nodes 250, each said second node 250 having a respective attribute and a respective state, and a plurality of second links interconnecting said plurality of second nodes in a source to target relationship to form a second node cluster, each said second link containing software code effective to change the respective attribute of a target node; and

one or more third links interconnecting the first node cluster 238 and the second node cluster 250 either in a source target relationship or a direct association of similar relationship wherein the first node cluster 238 is at a higher level of abstraction 230 when compared to the second node cluster 250.
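
A minimal sketch, under assumed names, of the two-level structure of claim 54: a higher-level first node cluster and a lower-level second node cluster joined by third links that directly associate nodes across levels of abstraction:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ClusterNode:
    """A node with a respective attribute and a respective state."""
    name: str
    attribute: float = 0.0
    state: str = "idle"


@dataclass
class Cluster:
    """A node cluster; a larger level means a higher level of abstraction."""
    level: int
    nodes: List[ClusterNode] = field(default_factory=list)


@dataclass
class CrossLevelLink:
    """A third link joining the lower-level and higher-level clusters."""
    source: ClusterNode  # node in the second (lower-level) cluster, e.g. 250
    target: ClusterNode  # node in the first (higher-level) cluster, e.g. 238

    def associate(self) -> None:
        # Direct association: surface the lower-level value one level up.
        self.target.attribute = self.source.attribute
```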

55. The multi-level system 200 of claim 54 characterized in that node clusters 238, 250 are goal oriented and process oriented and executable.

56. The multi-level system 200 of claim 55 characterized in that at least one node 250 of the second node cluster is interconnected to a first predictive pattern module 258 that associates that node 250 with a first external application.

57. The multi-level system 200 of claim 56 characterized in that the first external application 258 is located on a device selected from the group consisting of a desktop computing device, a portable computing device, an Internet of Things device and a smart phone.

58. The multi-level system 200 of claim 56 characterized in that a third node cluster 260 is process oriented and at the same level of abstraction as the second node cluster 250.

59. The multi-level system 200 of claim 58 characterized in that at least one node 265 of the third node cluster 260 is interconnected to a second predictive pattern module 278 that associates that node 265 with a second external application.

60. The multi-level system 200 of claim 59 characterized in that the second external application 278 is located on a device selected from the group consisting of a desktop computing device, a portable computing device, an Internet of Things device and a smart phone.

61. The multi-level system of claim 50 characterized in that additional node clusters 230 are disposed between the abstraction level of the first node cluster 225 and the abstraction level of the second node cluster 250 and third node cluster 260 and have an intermediate level of abstraction.