


[0001] This application relates to computerized systems. More particularly, this application relates to control and management of workflows in computerized systems.


[0002] Existing control framework solutions are hardware oriented rather than software oriented. Differences in control and interoperability among devices make it difficult to operate different workflow solutions seamlessly, because the device, rather than the executing tasks (processes), is the main scope of those frameworks. Because these solutions focus on the device, the user must take extra steps to allow the solution to interoperate with additional, different solutions.

[0003] There are also many vendors and open source tools in the market that offer workflow and process execution management, but they are focused on specific problem domains (e.g., data movement, Extract, Transform and Load (ETL), analytics, etc.). They cannot manage all the phases of an end-to-end workflow. Some workflow platforms also limit the technologies/languages in which each executing node/task can run or should be written (e.g., Java, Python, shell). Other tools provide descriptive workflow notation capabilities that are not tied to any particular execution environment. This is very useful when there is a need for simply describing process functions and executing them within the Business Process Model and Notation (BPMN) engine itself. But the scope of BPMN tools is only to manage simple task execution, without managing external process execution along with its complexity and parameterization.

[0004] As a result, there is currently no delegation/synchronization of these engines with external third-party databases, ETL tools, analytical environments, data warehouses, etc. In other words, the workflow representation remains an abstraction of a complex flow that is not truly runnable. Nowadays, this is often overcome by implementing external custom routines (e.g., glue code), or by relying on solutions that are either limited or prone to fail. Alternative solutions coordinate the execution of independent tasks by, for instance, using OS task scheduling. Another alternative is to rely on a single server or node, which does not scale as the process pipeline grows. The capability of a workflow to orchestrate local or remote processes is underutilized in current solutions, which are not extensible enough to allow any existing platform to be integrated with others, and that extensibility is the very purpose of the disclosed control framework.


[0005] A novel system for management of workflows in a computer system includes a control framework. The control framework comprises a workflow scheduler, a workflow dispatcher, a task dispatcher, a task instance dispatcher and a workflow repository, wherein the control framework is configured to manage one or more workflows using command line commands.

[0006] The control framework may further include an agent module configured to receive commands from a third-party tool and format the received commands to be processed by the control framework. The control framework may further include a

command line interface for converting the received commands into a command line format for processing by the control framework. According to some embodiments the workflow scheduler includes a module to identify triggers for starting a process workflow. The triggers may be an internal event such as a time-bounded event, error-bounded event or a task completion event or the trigger may be an external event such as a task-killed event or a workflow paused event. According to some embodiments the workflow scheduler spawns a workflow dispatcher for each workflow to be executed.

[0007] The workflow dispatcher may identify tasks needed to complete a workflow and to spawn a task instance dispatcher for each task of the workflow, wherein each task instance dispatcher begins a computer process via an operating system command interface, the computer process being executed by the operating system. The task instance dispatcher tracks a process identifier (PID) of each computer process associated with the task instance dispatcher.

[0008] The workflow repository comprises a memory storing process metadata, process execution history, logging information and memory states of task instance dispatchers monitoring currently executing computer processes. The process metadata may include task data split information for parallel execution of a task on a plurality of processors. In some embodiments, the process metadata may comprise parameters associated with a task and/or sequencing of one or more tasks.

[0009] According to some embodiments, a representational state transfer (REST) interface is provided to allow interoperability between the control framework and one or more third-party client tools.

[0010] A system for managing workflows of computer executable tasks includes a plurality of control framework instances, each control framework instance including a workflow scheduler, a workflow dispatcher, a task dispatcher and a task instance dispatcher; and a workflow repository in communication with each of the plurality of control framework instances, wherein each framework instance is configured to manage the computer executable tasks using command line commands. Each control framework is configured to run on an operating system of the computing device that is executing the control framework. In some embodiments, a first control framework operates on a first operating system, and a second control framework operates on an operating system different from the first operating system.


[0011] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:

[0012] FIG. 1 is a block diagram of a prior art workflow management system.

[0013] FIG. 2 is a block diagram of a prior art workflow management system.

[0014] FIG. 3 is a block diagram of a prior art workflow management system.

[0015] FIG. 4 is a block diagram of a workflow management system according to aspects of embodiments of the present disclosure.

[0016] FIG. 5 is a block diagram of an architecture of a workflow management system according to aspects of embodiments of the present disclosure.

[0017] FIG. 6 is a diagram of a cluster configuration of workflow management systems according to aspects of embodiments of the present disclosure.

[0018] FIG. 7 is a process flow diagram for a method of workflow management according to aspects of embodiments of the present disclosure.

[0019] FIG. 8 is a block diagram of a computer system that may be used to practice aspects of embodiments of the present disclosure.


[0020] Embodiments of workflow and task management systems described in the following disclosure provide various improvements to computer workflow and process management. By leveraging low-level command line interfaces (CLIs), embodiments as will be further described provide improved interoperability with third-party client applications, enable parallel processing through data sharding or partitioning, reduce data latency, provide data and process monitoring and tracing, increase processing speed, support concurrency and are highly scalable.

[0021] With regard to interoperability, third party workflow clients are typically written in programming languages such as C++ or Java. Accordingly, controllers including embedded controllers containing embedded code in assembly language can be accessed by these third-party applications using proper code acting as an adapter. The control framework according to the various embodiments of the present invention can

communicate with any number of different controllers by selecting the proper adapter for that controller device. In this regard, the present invention supports existing or multiple vendor controllers, such as multiple controllers used in an industrial automation infrastructure.

[0022] Embodiments of this disclosure also provide improved parallel processing ability by partitioning or sharding data. Sharding is a strategy a distributed system uses for locating its partitioned data. Sharding is often used to support deployment of data sets that require distribution and high throughput operations. This is done through a sharding key definition, which is the criteria used to separate data between controllers. The sharding mapping may be stored in a specific server instance or inside each controller. In either case, the sharding information is accessible to all devices. Each sharding key holder device can coordinate the data transfer process with other peers, since the sharding metadata holds the data/controller location mapping. In this way, the distributed data management system (DDMS) is built, which provides decentralized decision making at the controller level.
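The sharding key lookup described above can be sketched as follows. This is a minimal illustration, assuming a range-based sharding key over tag names; the map contents, controller identifiers and function names are hypothetical and not part of the disclosure.

```python
# Minimal sketch of a sharding map: a sharding key (here, the first letter
# of a tag name) determines which controller holds a given record.
# All names and ranges below are illustrative.

SHARD_MAP = [
    # (key_range_start, key_range_end, controller_id) - half-open ranges
    ("a", "h", "controller-01"),
    ("h", "p", "controller-02"),
    ("p", "{", "controller-03"),  # "{" is the character immediately after "z"
]

def locate_controller(shard_key: str) -> str:
    """Return the controller responsible for the given sharding key."""
    k = shard_key.lower()
    for start, end, controller in SHARD_MAP:
        if start <= k < end:
            return controller
    raise KeyError(f"no shard covers key {shard_key!r}")

print(locate_controller("motor_speed"))  # "m" falls in the h..p range
```

Because every sharding key holder has access to this mapping, any peer can resolve where a piece of data lives without consulting a central coordinator, which is the decentralized behavior the DDMS relies on.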

[0023] Solutions according to aspects of embodiments of the present disclosure utilize a separate processor for data processing of workflow and process management. The data processing takes advantage of cached memory associated with the processor. Cache memory is many times faster than reading data using disk access and provides the following processing capabilities.

• Queries: Queries for data can be issued by any controller, allowing ad-hoc SQL query execution, pre-defined queries and formulas calculation based on controller tags.

• MapReduce tasks: MapReduce jobs using MongoDB are JavaScript based and run within a distributed database that may contain sharded data. Such a job distributes tasks among the nodes (controllers), supporting parallel processing in this way. The aggregated results are then returned and saved for further investigation and processing.
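The map-reduce pattern in the bullet above can be illustrated in plain Python. This is a sketch of the pattern only, not the MongoDB mapReduce API: each "controller" maps over its local partition, and the partial results are reduced into a global aggregate. The partition data and function names are hypothetical.

```python
# Illustrative map-reduce over sharded data: each controller node maps its
# local partition, and the partial results are merged (reduced) centrally.

from functools import reduce

# Hypothetical sharded tag readings, one partition per controller.
partitions = {
    "controller-01": [("temp", 21.0), ("temp", 23.0), ("rpm", 1200)],
    "controller-02": [("temp", 25.0), ("rpm", 1300), ("rpm", 1250)],
}

def map_phase(partition):
    """Runs locally on each node: emit per-key (sum, count) pairs."""
    out = {}
    for key, value in partition:
        s, c = out.get(key, (0.0, 0))
        out[key] = (s + value, c + 1)
    return out

def reduce_phase(a, b):
    """Merge two partial results, summing sums and counts per key."""
    merged = dict(a)
    for key, (s, c) in b.items():
        ps, pc = merged.get(key, (0.0, 0))
        merged[key] = (ps + s, pc + c)
    return merged

partials = [map_phase(p) for p in partitions.values()]
totals = reduce(reduce_phase, partials)
averages = {k: s / c for k, (s, c) in totals.items()}
print(averages)  # per-tag averages across all controllers
```

Only the small per-partition aggregates cross the network, which is also why the latency benefits described below hold.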

[0024] In addition, other processing can also occur on the client side, for example during the aggregation of final results extracted from a string of nodes. By providing all jobs and query results to a client in an intelligible format (e.g. tabular, csv, or image), third party data visualization tools may be freely used on top of workflow management systems according to the present invention.

[0025] This enhanced data processing provides reduced data latency by bringing queries and processing jobs closer to the data. This proximity reduces network traffic dramatically, as only the results, not the raw data, are transferred through the network. Transfer of raw data is only necessary when operations over original values from multiple controllers need to be done, as in data correlation analysis. Even then, data will be processed within the industrial network level, where one controller is usually separated from another by a few layer 2 switches and there is little impact on performance.

[0026] Embodiments provide improved data monitoring. Controller context information can be monitored and used in order to obtain deeper analytic insights. This can be done by detecting changes in process behaviors through routines that expose meta-information about the controller logic and responses, which can be used as input to further control logic enhancements. Furthermore, controller misconfigurations are often found only when the damage is already done. This could be avoided by enabling

anomaly behavior detection based on algorithms that can have access to controller logic and also to remaining process data available on other controllers.

[0027] Concurrency and consistency are achieved for simultaneous data access started by one or more clients. Concurrency is fully supported by the distributed database through multiple-reader, single-writer, and writer-greedy strategies. Embodiments of this disclosure provide means for an unlimited number of simultaneous readers on the distributed database, and write operations block reading until they are finished, assuring consistency.
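The multiple-reader, single-writer, writer-greedy strategy can be sketched as a small lock class. This is an illustrative implementation of the general strategy, not the database's actual internals; the class and method names are hypothetical.

```python
import threading

class ReadWriteLock:
    """Multiple-reader / single-writer lock with writer preference:
    any number of readers may hold the lock together, but a waiting
    writer blocks new readers (writer-greedy) until it has finished."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False
        self._writers_waiting = 0

    def acquire_read(self):
        with self._cond:
            # Writer-greedy: new readers wait while a writer holds or wants the lock.
            while self._writer or self._writers_waiting:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            self._writers_waiting += 1
            while self._writer or self._readers:
                self._cond.wait()
            self._writers_waiting -= 1
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Readers proceed concurrently without blocking each other, while a write blocks all reads until it completes, which matches the consistency guarantee described above.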

[0028] According to aspects of embodiments of the present invention, the control framework works by providing full support for native command line process execution, which enables comprehensive OS process control. On top of that, workflow metadata configurations describe how tasks should be chained and communicate with each other. Three basic components exist within this framework: a Workflow Scheduler, Dispatchers and a Workflow Repository.
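A workflow metadata configuration of the kind described above might look like the following sketch. The field names, commands and the small ordering helper are all illustrative assumptions, not the disclosed format.

```python
# Hypothetical workflow metadata as a plain data structure: it declares how
# tasks chain together and how each is launched as a native command line
# process. All field names and commands are illustrative.

workflow = {
    "name": "nightly-etl",
    "trigger": {"type": "time", "cron": "0 2 * * *"},
    "tasks": [
        {"name": "extract",
         "command": ["python", "extract.py", "--out", "staging.csv"],
         "max_parallel": 2},
        {"name": "transform",
         "command": ["python", "transform.py", "--in", "staging.csv"],
         "depends_on": ["extract"],
         "max_parallel": 3},
        {"name": "load",
         "command": ["python", "load.py"],
         "depends_on": ["transform"],
         "max_parallel": 1},
    ],
}

def execution_order(wf):
    """Resolve the task chain into a runnable order (simple topological sort)."""
    done, order = set(), []
    pending = list(wf["tasks"])
    while pending:
        for task in pending:
            if set(task.get("depends_on", [])) <= done:
                order.append(task["name"])
                done.add(task["name"])
                pending.remove(task)
                break
        else:
            raise ValueError("cyclic dependency in workflow definition")
    return order

print(execution_order(workflow))  # ['extract', 'transform', 'load']
```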

[0029] The novel workflow and process control framework disclosed in this application enables new or existing controllers to join the distributed analytical platform using a simple configuration. In an industrial setting, controllers can be delivered with the distributed database already installed and deployed, or this can be done later by a one-click button operation. This action may remotely upload software to the device and also push configuration to the system through script execution. It is also possible to enable or disable tag monitoring individually, allowing storage volume optimization and other parameterization, including communication timeouts and data thresholds.

[0030] The proposed solution is horizontally scalable because it is applicable to a number of controllers ranging from one up to more than a thousand controllers. In embodiments of this invention, adding nodes to a distributed and sharded database schema is equivalent to adding more data to a common, partitioned table. The newly added data becomes available to other controllers on the network as soon as it is loaded into its own (controller) database.

[0031] The new control framework provides improvements over the state of the art, which could provide only limited control and services.

[0032] FIG. 1 is a first prior art workflow scheduler that includes a third-party management tool 101. The management tool 101 is in communication with a first third-party app 103, a second third-party app 105 and a third third-party app 107. The third-party apps operate as black boxes, not providing details of how their processing is performed. These third-party apps 103, 105, 107 are limited to receiving inputs in the form of parameters 109 and providing outputs in the form of logging data 111. However, the intermediate steps in the processes performed by the third-party apps 103, 105 and 107 remain unknown.

[0033] As a result, this strategy is generally limited because it does not allow parallelization, full control over command line options, or resource management. Parallelization and resource management in this case are hard to achieve, as there is no underlying framework to support them.

[0034] FIG. 2 is another diagram of a prior art scheduling technique. An external scheduler 201 assigns tasks to one or more third-party apps 203, 205, 207. The external scheduler 201 starts tasks in a pre-determined order and assigns tasks independently 209 between the first third-party app 203, the second third-party app 205 and the third third-party app 207. However, in this approach, parameters are configured on the task itself and are not controlled by an overarching framework.

[0035] FIG. 3 is an example of yet another prior art technique for task scheduling where an external scheduler 301 starts the first task 309 and assigns the first task 309 to the first third-party app 303. Each subsequent task is started by the prior task, so that once the first task 309 is performed, third-party app 303 spawns the next task 311 and assigns the second task 311 to the second third-party app 305. The second task 311 is performed and generates the next task 313 that is assigned to the third third-party app 307. This continues until all tasks are completed.

[0036] Now referring to FIG. 4, a workflow management architecture 400 according to aspects of embodiments of the present disclosure is shown. The control framework 401 works using workflow dispatchers and task dispatchers (explained in greater detail below) to provide multiple instances of tasks being performed by their associated client applications. The control framework 401 is responsible for workflow management, scheduling and monitoring. In addition, the control framework 401 provides improvements in allowing efficient resource management, traceability and data lineage identification. The control framework 401 manages a number of tasks, such as tasks being performed by third-party apps 403, 405 and 407. Each task may be associated with a particular instance of one of the third-party apps or another computer process. For example, two tasks may be performed by the first third-party app 403, with each task using a separate instance of the first third-party app 403. As shown in FIG. 4, three tasks are performed on three instances of the second third-party app 405 and one instance of the third third-party app 407 is used. The control framework uses two-way communication with each instance 409 to control and manage the workflow and associated tasks. The control framework 401 may provide the task instances with parameters including process configurations, parallelization parameters and/or memory and CPU settings. In addition, control framework 401 may receive information from each task instance including logging information, information relating to the current progress in completing the task and output metadata.

[0037] FIG. 5 is a block diagram depicting an architecture 500 of a control framework according to aspects of the present disclosure. The control framework instance 501 is executed by the operating system 503 of a computing device. The control framework 501 includes a command line interface (CLI) or REST interface 505 that allows the control framework 501 to communicate and work with third-party client tools 507. Commands 511 carry information between the third-party client tools 507 and the control framework 501. Commands 511 may be configured as command line commands allowing low-level instructions between the control framework 501 and other entities. An agent 509 directs commands 511 throughout the control framework 501. The Workflow Scheduler 513 is responsible for determining the frequency and kind of events 515 that should trigger a particular workflow execution. The trigger 515 may either be an internal event (e.g., time bounded, error bounded, task completion bounded, etc.), or an external event (task killed, workflow paused, etc.). The Workflow Scheduler 513 provides returns 516 containing information necessary to implement the workflow. Additionally, workflows can also trigger other workflows. According to embodiments, the Workflow Scheduler 513 may start a Workflow Dispatcher Instance 517 for each workflow that should be executed. The Workflow Dispatcher 517 will manage the workflow and the execution of tasks associated with the workflow via a Task Dispatcher 519. The Task Dispatcher 519 provides sequencing, input data, input metadata (parameters), and standard output collecting (e.g., logs, errors), among other information. Each Workflow Dispatcher 517 runs in its own separate thread and spawns newly managed threads for each task execution (Task Instance Dispatcher 521). If parallelization is configured, the Task Dispatcher 519 will determine how to break up the inputs and parallelize the execution of the task into multiple task instances 521 that will each receive part of the split input. Every started process 523 receives a token as an input parameter. The token is a unique identifier that should be sent back to the Task Instance Dispatcher 521 in case the task aims to report execution progress. The success or failure of a task is determined by simply reading the process 523 exit code. More detailed information can be automatically retrieved from log output (error/out) if provided. The Task Instance Dispatcher 521 keeps track of the Process Identifier (PID) of the process being executed, which may be used to kill a particular task and/or monitor information such as memory consumption, with implementations for different operating system flavors such as Windows, UNIX, Solaris, etc. The Workflow Repository 525 is responsible for saving workflow/process metadata 527 (task sequencing, task parameters, task data split configuration, etc.), execution history (PIDs, start/end date, duration, user, memory, CPU, etc.), logs and the dispatchers' memory state. The repository 525 is by no means a single point of failure, as it can run with replication and failover enabled. Any change in the control framework (CF) 501 state is kept in memory but is also immediately flushed to the repository 525 to prevent metadata loss in case a CF 501 instance becomes unavailable. The control framework 501 operates a database engine 529 to manage the Workflow Repository 525.
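The token/PID/exit-code mechanics described for the Task Instance Dispatcher can be sketched as follows. This is an illustrative sketch only; the function name and result fields are assumptions, and the child process here is a trivial stand-in for a real task.

```python
# Sketch of what a Task Instance Dispatcher might do: start an OS process
# through the native command line, pass it a unique token, record its PID,
# and read the exit code to decide success or failure.

import subprocess
import sys
import uuid

def run_task_instance(command):
    token = uuid.uuid4().hex  # unique identifier handed to the process
    proc = subprocess.Popen(
        command + ["--token", token],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    pid = proc.pid  # tracked so the task can later be killed or monitored
    stdout, stderr = proc.communicate()
    status = "success" if proc.returncode == 0 else "failure"
    return {"pid": pid, "token": token, "status": status, "log": stdout}

# A trivial child process that just echoes its last argument (the token)
# and exits cleanly with code 0.
result = run_task_instance(
    [sys.executable, "-c", "import sys; print(sys.argv[-1])"])
print(result["status"])
```

In a real deployment the standard output would be read continuously for log capture rather than collected at the end, but the success/failure decision from the exit code is the same.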

[0038] The command line interface (CLI) module 505 is responsible for allowing command line interaction with the Control Framework 501 by default. This allows the user to change configurations, start on-demand workflows, and retrieve monitoring information, among other functions. A REST interface provides the same operations as the CLI interface 505 and enables interoperability with other systems (e.g., workflow and monitoring tools 507).

[0039] FIG. 6 is a diagram illustrating a cluster configuration of multiple control framework instances according to aspects of embodiments described in the present disclosure. Control Framework 500 is designed to scale and run in cluster mode 600. The simplest cluster configuration is the single instance mode, and a more complex cluster configuration 600 can span to several instances. The workflow execution distribution between control framework instances 500 is based on the available resources of the instances within the cluster 600. One workflow will preferably execute completely in a single instance if possible. However, when necessary the workflow can have its tasks and task instances distributed on several nodes. Execution of the workflows may be stored and managed in the Workflow Repository 625.

[0040] An exemplary task distribution algorithm may be described as follows:

[0041] Every control framework instance periodically (e.g., every 10 seconds) polls the workflow repository 625 for tasks to execute. In order for an instance to start a task, the following requirements should be met:

• The control framework instance 500 can successfully insert heartbeats in the repository 625;

• The control framework instance 500 has sufficient resources to execute the task; and

• No other control framework instance 500 has taken the task.
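The three claiming requirements above can be sketched as a small polling routine. The in-memory dictionary stands in for the shared Workflow Repository, and all names and the resource model are illustrative assumptions; a real repository would make the final claim step atomic.

```python
# Sketch of the task-claiming check: an instance takes a task only if it can
# write a heartbeat, has capacity, and no other instance has taken the task.

import time

repository = {
    "heartbeats": {},  # instance_id -> last heartbeat timestamp
    "tasks": {"task-1": {"owner": None, "cost": 2}},
}

def try_claim(instance_id, free_slots, task_id, repo):
    # 1. The instance can successfully insert a heartbeat in the repository.
    repo["heartbeats"][instance_id] = time.time()
    task = repo["tasks"][task_id]
    # 2. The instance has sufficient resources to execute the task.
    if free_slots < task["cost"]:
        return False
    # 3. No other instance has taken the task (atomic in a real repository).
    if task["owner"] is not None:
        return False
    task["owner"] = instance_id
    return True

print(try_claim("cf-a", free_slots=4, task_id="task-1", repo=repository))  # True
print(try_claim("cf-b", free_slots=4, task_id="task-1", repo=repository))  # False: taken
```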

[0042] This algorithm runs just before any task execution, even for tasks that are part of the same workflow. Additional configurations may also be used, which may affect which control framework instance 500 will take a specific task for execution. For example:

• control framework instance priority - This allows instances 500 to be configured as preferable for running particular tasks; typically this preference may be due to better hardware or network infrastructure;

• workflow affinity - this allows a specific workflow to be fully executed in a single instance 500, if possible.

[0043] It is important to note that the control framework cluster 600 has no master nodes. Each control framework instance 500 joined in the cluster 600 has the same role.

[0044] FIG. 7 provides sample code that illustrates how a single control framework instance 500 can coordinate the execution of a workflow that includes three tasks. The pseudo workflow definition shown in FIG. 7 is responsible for defining how the tasks should be executed.

[0045] Control Framework Sample Workflow Configuration

[0046] Combining the code provided in FIG. 7 with the illustration in FIG. 4, a task, based on its input data, could be parallelized into two task instances of the first third-party app 403 (maximum), as defined in the workflow definition. Another task can be parallelized into three task instances of the second third-party app 405, and a third task could run on a single instance of the third third-party app 407. Memory allocation settings can also be provided. In the case of a Java-implemented task, the setting may be passed as the -Xmx option through the command line; similarly, Windows would use Job Objects, and Linux would use ulimit. The OS command line interface component will correctly translate the task configuration into command line parameters according to the task type.
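The per-task-type translation of a memory setting into launch parameters might look like the following sketch. The function name and the task-type/OS dispatch are assumptions for illustration; only the -Xmx and ulimit conventions come from the text above (the Windows Job Object path is omitted, as it cannot be expressed as a simple command line prefix).

```python
# Sketch of how an OS command line interface component might translate a
# task's memory configuration into platform-appropriate launch parameters.

def build_command(task_cmd, task_type, os_name, max_memory_mb):
    if task_type == "java":
        # JVM tasks take the heap limit directly as an -Xmx option.
        return [task_cmd[0], f"-Xmx{max_memory_mb}m"] + task_cmd[1:]
    if os_name == "linux":
        # Non-JVM tasks on Linux: wrap in a shell with a ulimit (KB) prefix.
        limit_kb = max_memory_mb * 1024
        inner = " ".join(task_cmd)
        return ["sh", "-c", f"ulimit -v {limit_kb}; exec {inner}"]
    # On Windows a Job Object would be used instead (not shown here).
    return task_cmd

print(build_command(["java", "-jar", "etl.jar"], "java", "linux", 512))
print(build_command(["python", "etl.py"], "python", "linux", 512))
```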

[0047] The control framework according to embodiments of this disclosure provides many improved features over existing solutions, including:

[0048] Native Command Line Interface

[0049] Faster than any other type of process interface, this feature also requires less CPU and RAM. A command line interface is available even on an OS with a minimal footprint (e.g., DSL, Arch, TinyLinux, etc.). Logging APIs can be used but are not required to configure log capturing, since the default behavior of the control framework is to capture logs by reading the process standard output continuously. The command line interface also enables simple native task status retrieval by utilizing the native process exit code. Other third-party tools can manage the control framework through a REST interface.

[0050] Plug-in Architecture

[0051] The REST interface allows any third-party tool to perform operations or retrieve any monitoring or traceability information from control framework. Any third-party tool that can invoke client requests through a REST interface is able to integrate with the disclosed control framework.

[0052] Scalability

[0053] The disclosed control framework can run in cluster mode (workflow execution load is distributed across different instances), which means it can scale horizontally. The control framework can use as many resources as are available to a particular instance, which means it can also scale vertically.

[0054] Interoperability/Cross-Platform

[0055] The OS command interface (as described in FIG. 5) allows the control framework to run on any operating system by allowing different implementations to be used according to the platform on which the control framework instance is running. In the same cluster, each instance of the control framework can run on a different OS. The control framework is also compatible with software containerization platforms (e.g., Docker), providing a command line interface that allows commands to be executed in new or existing isolated containers.

[0056] Remote Execution

[0057] Ability to start processes on remote machines (e.g., by using WMI/PsExec on Windows and SSH on Linux-based OS).

[0058] BPMN Notation Compatibility

[0059] A task workflow execution can be considered similar to business process realization. BPMN is compatible with the workflow notation utilized by the disclosed control framework, meaning BPMN workflows can be imported into the control framework as a starting point for further customization. They can also be enriched with additional parameters and configurations.

[0060] Workflow and Process Management

[0061] Workflow management operations include start, stop, pause, resume, retry, step-by-step execution, simulation, scheduling, history (including archiving and compression), and traceability (inputs, outputs, time, data identification and lineage). Process management operations include start, kill, retry, input parameters, outputs, prioritization and user impersonation. Process data traceability is achieved by identifying inputs that are passed to the process as parameters, which makes it possible to perform data lineage to trace the transformed/processed data, manage process metadata, and configure and calculate task input parameters. Process monitoring is provided through progress notifications provided by the executing process and may be expanded to include CPU and memory consumption monitoring. Process parallelization for data may be achieved by splitting inputs based on data attributes (creation date, name, range, etc.). For instance, if a process is receiving a list of files as an input for processing, a control framework instance according to this disclosure may manage the parallelization of execution by splitting the file list and submitting the subsets to different processes. Process log management and history may be managed, including log archiving, compression and indexing. Process execution history, including process sequence, date and time, duration, PID, progress, process exit code as well as other information, may be retrieved from the workflow repository at any time. Interoperability is thereby provided between various workflow tools. Further, versioning of process command line properties and workflow configurations may be used to further manage workflow execution.
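The file-list splitting example above can be sketched as a small helper. The round-robin policy and function name are illustrative assumptions; any attribute-based partitioning (creation date, name, range, etc.) would follow the same shape.

```python
# Sketch of input-driven parallelization: a file list is split into subsets,
# and each subset would be submitted to a separate process instance, up to
# the task's configured maximum number of instances.

def split_inputs(files, max_instances):
    """Distribute files round-robin across up to max_instances subsets."""
    n = min(max_instances, len(files)) or 1
    subsets = [[] for _ in range(n)]
    for i, f in enumerate(files):
        subsets[i % n].append(f)
    return subsets

files = ["a.csv", "b.csv", "c.csv", "d.csv", "e.csv"]
print(split_inputs(files, 2))  # [['a.csv', 'c.csv', 'e.csv'], ['b.csv', 'd.csv']]
```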

[0062] Performance Management

[0063] Memory management allows allocating a slice of the available OS memory for control framework instance(s), and further allows defining the reserved memory for each process execution thread as part of a workflow (Windows - JobObjects, Linux - ulimit). CPU throttling is supported by embodiments of the disclosed control framework through internally used OS-specific utilities for this purpose (Windows - Process Affinity, Linux - cpulimit). The workflow scheduler component can also be enabled to work as a capacity scheduler, allowing workflows to be executed based on the available resources on the server.
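The capacity-scheduler idea can be sketched as a simple admission check: a workflow is dispatched only when the server's remaining memory and CPU budget can cover its reservation. The figures and field names below are illustrative assumptions.

```python
# Sketch of capacity scheduling: dispatch a workflow only when the remaining
# server capacity covers the workflow's declared resource reservation.

def can_dispatch(workflow, capacity, running):
    """Check whether remaining capacity covers the workflow's reservation."""
    used_mem = sum(w["mem_mb"] for w in running)
    used_cpu = sum(w["cpu"] for w in running)
    return (used_mem + workflow["mem_mb"] <= capacity["mem_mb"]
            and used_cpu + workflow["cpu"] <= capacity["cpu"])

capacity = {"mem_mb": 8192, "cpu": 4}            # total server budget
running = [{"mem_mb": 4096, "cpu": 2}]           # workflows already executing

print(can_dispatch({"mem_mb": 2048, "cpu": 1}, capacity, running))  # True
print(can_dispatch({"mem_mb": 6144, "cpu": 1}, capacity, running))  # False: memory exceeded
```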

[0064] End-to-End Traceability of Process Execution

[0065] Control framework is a generic platform for process execution chaining (i.e., workflow of processes), with minimal requirements needed to manage tasks. Accordingly, control framework instances described herein provide management of the whole lifecycle of a workflow of processes. These control framework instances also include process information traceability such as inputs, outputs and historical information including, but not limited to CPU and memory consumption.

[0066] Process Parallelization through Command Line

[0067] Native task parallelization mechanism for an executable that can read command line parameters.

[0068] Small Footprint

[0069] Control framework instances are operable on small-footprint hardware, which is enabled by the efficient underlying command line process interface, which in turn requires less RAM and CPU in order to inter-communicate with executing processes.

[0070] Simple Cluster

[0071] Control framework can run in a cluster mode by providing a simple algorithm for task distribution and requiring no master node (every node has the same role), which eliminates complexity.

[0072] Resource Management

[0073] The control framework cluster is responsible for distributing the load between the control framework instances and every control framework instance is responsible for optimizing and limiting the resource usage on its operating system. This allows better resource utilization without running into problems such as Out of Memory, Excessive Pagination, and Swapping.

[0074] The proposed framework can be used with any Siemens software platform in order to manage tasks workflows execution.

[0075] In a different approach for coordinating workflow execution, a third-party workflow management system controls tasks in parallel, providing parameters to each workflow while receiving log information from it. The third-party workflow monitoring tool includes functionality for workflow management, scheduling and monitoring.

FIG. 8 is a block diagram of a computer system that may be used to implement aspects of embodiments of a control framework according to the present disclosure. FIG. 8 illustrates an exemplary computing environment 800 within which embodiments of the invention may be implemented. Computers and computing environments, such as computer system 810 and computing environment 800, are known to those of skill in the art and thus are described briefly here.

As shown in FIG. 8, the computer system 810 may include a communication mechanism such as a system bus 821 or other communication mechanism for communicating information within the computer system 810. The computer system 810 further includes one or more processors 820 coupled with the system bus 821 for processing the information.

The processors 820 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general-purpose computer. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.

Continuing with reference to FIG. 8, the computer system 810 also includes a system memory 830 coupled to the system bus 821 for storing information and instructions to be executed by processors 820. The system memory 830 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 831 and/or random-access memory (RAM) 832. The RAM 832 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 831 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 830 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 820. A basic input/output system 833 (BIOS) containing the basic routines that help to transfer information between elements within computer system 810, such as during start-up, may be stored in the ROM 831. RAM 832 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 820. System memory 830 may additionally include, for example, operating system 834, application programs 835, other program modules 836 and program data 837.

The computer system 810 also includes a disk controller 840 coupled to the system bus 821 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 841 and a removable media drive 842 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid-state drive). Storage devices may be added to the computer system 810 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).

The computer system 810 may also include a display controller 865 coupled to the system bus 821 to control a display or monitor 866, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes input interface 860 and one or more input devices, such as a keyboard 862 and a pointing device 861, for interacting with a computer user and providing information to the processors 820. The pointing device 861, for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processors 820 and for controlling cursor movement on the display 866. The display 866 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 861. In some embodiments, an augmented reality device 867 that is wearable by a user may provide input/output functionality allowing a user to interact with both a physical and virtual world. The augmented reality device 867 is in communication with the display controller 865 and the user input interface 860, allowing a user to interact with virtual items generated in the augmented reality device 867 by the display controller 865. The user may also provide gestures that are detected by the augmented reality device 867 and transmitted to the user input interface 860 as input signals.

The computer system 810 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 820 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 830. Such instructions may be read into the system memory 830 from another computer readable medium, such as a magnetic hard disk 841 or a removable media drive 842. The magnetic hard disk 841 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security. The processors 820 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 830. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

As stated above, the computer system 810 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term "computer readable medium" as used herein refers to any medium that participates in providing instructions to the processors 820 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 841 or removable media drive 842. Non-limiting examples of volatile media include dynamic memory, such as system memory 830. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 821. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

The computing environment 800 may further include the computer system 810 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 880. Remote computing device 880 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 810. When used in a networking environment, computer system 810 may include modem 872 for establishing communications over a network 871, such as the Internet. Modem 872 may be connected to system bus 821 via user network interface 870, or via another appropriate mechanism.

Network 871 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 810 and other computers (e.g., remote computing device 880). The network 871 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 871.

[0076] An executable application, as used herein, comprises code or machine-readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.

[0077] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.

[0078] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.

[0079] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof.