WO2020114570 - DISTRIBUTED COMPUTATION FOR REAL-TIME OBJECT DETECTION AND TRACKING

Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters

CLAIMS

What is claimed is:

1. A method for tracking a location of an object in a series of frames of a video stream, the method comprising:

selecting, from a set of two or more nodes available for object detection, a first node;

sending a first current frame from the series of frames to the first node, for detection of a first object in the frame;

receiving, from the first node, object detection information for the first object;

subsequently to receiving the object detection information for the first object from the first node,

selecting, from the set of two or more nodes available for object detection, a second node, and sending a second current frame from the series of frames to the second node, for an updated detection of the first object;

sending each of two or more frames following the second current frame to respective tracking nodes, wherein sending each frame to a respective tracking node comprises selecting the respective tracking node from a set of two or more nodes available for tracking;

sending object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes; and

receiving, from each of the respective tracking nodes, tracking information for the first object for the frame sent to the respective tracking node.
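
For illustration only, the following Python sketch shows one way a coordinating node might realise the behaviour recited in claim 1: an initial detection, an updated detection of a later frame by a possibly different node, and per-frame tracking of the intervening frames using an object model derived from the detection information. All names (select_node, send_frame, receive_result, ObjectModel) are hypothetical and are not taken from the patent; this is a non-authoritative sketch, not the claimed implementation.

```python
# Hypothetical coordinator sketch for the method of claim 1.
# select_node, send_frame and receive_result are stand-in callables for
# whatever node-selection and transport mechanism an implementation uses.

from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    """Object modelling information derived from detection information."""
    boxes: list = field(default_factory=list)    # detected object locations
    labels: list = field(default_factory=list)   # detected object classes

def track_stream(frames, detection_nodes, tracking_nodes,
                 select_node, send_frame, receive_result):
    frames = iter(frames)

    # Select a first detection node and send it a first current frame.
    first_node = select_node(detection_nodes)
    send_frame(first_node, next(frames), task="detect")

    # Receive object detection information for the first object.
    detection_info = receive_result(first_node)

    # Select a second detection node and send it a second current frame
    # for an updated detection of the first object.
    second_node = select_node(detection_nodes)
    send_frame(second_node, next(frames), task="detect")

    # Derive object modelling information from the detection information.
    model = ObjectModel(boxes=detection_info.get("boxes", []),
                        labels=detection_info.get("labels", []))

    # Send the frames that follow the second current frame to tracking
    # nodes, together with the object modelling information, and collect
    # the tracking information they return.
    tracks = []
    for frame in frames:
        tracker = select_node(tracking_nodes)
        send_frame(tracker, frame, task="track", model=model)
        tracks.append(receive_result(tracker))
    return tracks
```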

2. The method of claim 1, further comprising, subsequent to sending the two or more frames following the second current frame to respective tracking nodes:

receiving updated object detection information for the first object, from the second node;

selecting, from the set of two or more nodes available for object detection, a third node, and sending a third current frame from the series of frames to the third node, for further updated detection of the first object.

3. The method of claim 2, wherein sending each of the two or more frames following the second current frame to respective tracking nodes, comprises sending every frame between the second current frame and the third current frame to tracking nodes.

4. The method of claim 2 or 3, further comprising, subsequent to sending the third current frame to the third node:

sending each of two or more frames following the second current frame to respective tracking nodes, wherein sending each frame to a respective tracking node comprises selecting the respective tracking node from a set of two or more nodes available for tracking;

sending updated object modelling information derived from the updated object detection information to each of the respective tracking nodes; and

receiving, from each of the respective tracking nodes, tracking information for the first object for the frame sent to the respective tracking node.

5. The method of any of claims 1-4, wherein sending object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes comprises sending the object modelling information to each of the respective tracking nodes along with the respective frame.

6. The method of any of claims 1-4, wherein sending object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes comprises sending the object modelling information to all nodes in the set of two or more nodes available for tracking.

7. The method of any of claims 1-6, wherein the method comprises maintaining a single set of nodes available for either detection or tracking, wherein said maintaining comprises removing from the set each node selected for detection or tracking while that node performs the respective detection or tracking and returning to the set each node selected for detection and tracking when the node completes its respective detection or tracking.

8. The method of any of claims 1-6, wherein the method comprises maintaining differing sets of nodes available for detection and tracking, respectively, wherein said maintaining comprises removing from the respective set each node selected for detection or tracking while that node performs the respective detection or tracking and returning to the respective set each node selected for detection and tracking when the node completes its respective detection or tracking.
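
As an informal illustration of the availability bookkeeping recited in claims 7 and 8, the Python sketch below (hypothetical names throughout) removes a node from its availability set when it is assigned a detection or tracking task and returns it when the task completes; using one shared pool or separate detection and tracking pools corresponds to the two alternatives of claims 7 and 8.

```python
# Hypothetical availability bookkeeping for claims 7 and 8.
# A node is removed from its set while it performs a task and returned
# to the set when the task completes.

class NodePool:
    def __init__(self, nodes):
        self._available = set(nodes)

    def acquire(self):
        """Select a node and mark it unavailable while it works."""
        node = self._available.pop()   # selection policy left abstract here
        return node

    def release(self, node):
        """Return the node to the pool once detection/tracking is done."""
        self._available.add(node)

# Claim 7: a single shared pool used for both detection and tracking.
shared_pool = NodePool(["node-a", "node-b", "node-c"])

# Claim 8: differing pools for detection and tracking, respectively.
detection_pool = NodePool(["node-a", "node-b"])
tracking_pool = NodePool(["node-c", "node-d"])
```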

9. The method of any of claims 1-8, wherein each selecting of a node for object detection is based on one or more metrics for each of the nodes in the set of two or more nodes available for detection, wherein the one or more metrics include or are based on any of the following:

a battery status for the respective node;

a measure of processing resources available at the respective node;

a count of previous object detection tasks completed by the respective node;

a ratio of object detection tasks completed by the respective node to a number of object detection tasks assigned to the respective node.

10. The method of any of claims 1-9, wherein each selecting of a node for object tracking is based on one or more metrics for each of the nodes in the set of two or more nodes available for tracking, wherein the one or more metrics include or are based on any of the following:

a battery status for the respective node;

a measure of processing resources available at the respective node;

a count of previous object tracking tasks completed by the respective node;

a ratio of object tracking tasks completed by the respective node to a number of object tracking tasks assigned to the respective node.
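
By way of illustration only, a score combining the metrics listed in claims 9 and 10 might be computed as in the Python sketch below; the field names, weights and scoring formula are hypothetical and are not specified by the claims.

```python
# Hypothetical metric-based node selection for claims 9 and 10.
# The weights and the scoring formula are illustrative only.

def node_score(node):
    """Higher score = better candidate for the next detection/tracking task."""
    completion_ratio = (node["tasks_completed"] / node["tasks_assigned"]
                        if node["tasks_assigned"] else 1.0)
    return (0.4 * node["battery_level"]   # battery status (0..1)
            + 0.4 * node["free_cpu"]      # available processing resources (0..1)
            + 0.2 * completion_ratio)     # completed vs. assigned tasks

def select_node(available_nodes):
    return max(available_nodes, key=node_score)

# Example usage with made-up node records.
nodes = [
    {"id": "n1", "battery_level": 0.9, "free_cpu": 0.5,
     "tasks_completed": 8, "tasks_assigned": 10},
    {"id": "n2", "battery_level": 0.6, "free_cpu": 0.8,
     "tasks_completed": 5, "tasks_assigned": 5},
]
best = select_node(nodes)   # the node with the highest combined score
```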

11. An arrangement of one or more nodes, each of the one or more nodes comprising a processing circuit and an associated memory comprising program instructions for execution by the respective processing circuit, the program instructions being configured to track a location of an object in a series of frames of a video stream by:

selecting, from a set of two or more nodes available for object detection, a first node;

sending a first current frame from the series of frames to the first node, for detection of a first object in the frame;

receiving, from the first node, object detection information for the first object;

subsequently to receiving the object detection information for the first object from the first node,

selecting, from the set of two or more nodes available for object detection, a second node, and sending a second current frame from the series of frames to the second node, for an updated detection of the first object;

sending each of two or more frames following the second current frame to respective tracking nodes, wherein sending each frame to a respective tracking node comprises selecting the respective tracking node from a set of two or more nodes available for tracking;

sending object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes; and

receiving, from each of the respective tracking nodes, tracking information for the first object for the frame sent to the respective tracking node.

12. The arrangement of claim 11, wherein the program instructions are further configured to cause the nodes to, subsequent to sending the two or more frames following the second current frame to respective tracking nodes:

receive updated object detection information for the first object, from the second node; and

select, from the set of two or more nodes available for object detection, a third node, and send a third current frame from the series of frames to the third node, for further updated detection of the first object.

13. The arrangement of claim 12, wherein the program instructions are configured so that sending each of the two or more frames following the second current frame to respective tracking nodes comprises sending every frame between the second current frame and the third current frame to tracking nodes.

14. The arrangement of claim 12 or 13, wherein the program instructions are further configured to cause the nodes to, subsequent to sending the third current frame to the third node:

send each of two or more frames following the second current frame to respective tracking nodes, wherein sending each frame to a respective tracking node comprises selecting the respective tracking node from a set of two or more nodes available for tracking;

send updated object modelling information derived from the updated object detection information to each of the respective tracking nodes; and

receive, from each of the respective tracking nodes, tracking information for the first object for the frame sent to the respective tracking node.

15. The arrangement of any of claims 11-14, wherein the program instructions are configured so that sending object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes comprises sending the object modelling information to each of the respective tracking nodes along with the respective frame.

16. The arrangement of any of claims 11-14, wherein the program instructions are configured so that sending object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes comprises sending the object modelling information to all nodes in the set of two or more nodes available for tracking.

17. The arrangement of any of claims 11-16, wherein the program instructions are further configured to cause the nodes to maintain a single set of nodes available for either detection or tracking, wherein said maintaining comprises removing from the set each node selected for detection or tracking while that node performs the respective detection or tracking and returning to the set each node selected for detection and tracking when the node completes its respective detection or tracking.

18. The arrangement of any of claims 11-16, wherein the program instructions are configured so that the nodes maintain differing sets of nodes available for detection and tracking, respectively, wherein said maintaining comprises removing from the respective set each node selected for detection or tracking while that node performs the respective detection or tracking and returning to the respective set each node selected for detection and tracking when the node completes its respective detection or tracking.

19. The arrangement of any of claims 11-18, wherein the program instructions are configured so that each selecting of a node for object detection is based on one or more metrics for each of the nodes in the set of two or more nodes available for detection, wherein the one or more metrics include or are based on any of the following:

a battery status for the respective node;

a measure of processing resources available at the respective node;

a count of previous object detection tasks completed by the respective node;

a ratio of object detection tasks completed by the respective node to a number of object detection tasks assigned to the respective node.

20. The arrangement of any of claims 11-19, wherein the program instructions are configured so that each selecting of a node for object tracking is based on one or more metrics for each of the nodes in the set of two or more nodes available for tracking, wherein the one or more metrics include or are based on any of the following:

a battery status for the respective node;

a measure of processing resources available at the respective node;

a count of previous object tracking tasks completed by the respective node;

a ratio of object tracking tasks completed by the respective node to a number of object tracking tasks assigned to the respective node.

21. An arrangement of one or more nodes, the one or more nodes being configured to:

select, from a set of two or more nodes available for object detection, a first node;

send a first current frame from the series of frames to the first node, for detection of a first object in the frame;

receive, from the first node, object detection information for the first object;

subsequently to receiving the object detection information for the first object from the first node,

select, from the set of two or more nodes available for object detection, a second node, and send a second current frame from the series of frames to the second node, for an updated detection of the first object;

send each of two or more frames following the second current frame to respective tracking nodes, wherein sending each frame to a respective tracking node comprises selecting the respective tracking node from a set of two or more nodes available for tracking;

send object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes; and

receive, from each of the respective tracking nodes, tracking information for the first object for the frame sent to the respective tracking node.

22. The arrangement of claim 21, wherein the one or more nodes are configured to carry out a method according to any of claims 2-10.

23. A computer program product comprising computer program instructions for tracking a location of an object in a series of frames of a video stream, the computer program instructions comprising instructions for:

selecting, from a set of two or more nodes available for object detection, a first node;

sending a first current frame from the series of frames to the first node, for detection of a first object in the frame;

receiving, from the first node, object detection information for the first object;

subsequently to receiving the object detection information for the first object from the first node,

selecting, from the set of two or more nodes available for object detection, a second node, and sending a second current frame from the series of frames to the second node, for an updated detection of the first object;

sending each of two or more frames following the second current frame to respective tracking nodes, wherein sending each frame to a respective tracking node comprises selecting the respective tracking node from a set of two or more nodes available for tracking;

sending object modelling information indicating location and/or classification of one or more objects derived from the object detection information to each of the respective tracking nodes; and

receiving, from each of the respective tracking nodes, tracking information for the first object for the frame sent to the respective tracking node.

24. A computer-readable medium comprising, stored thereupon, the computer program product of claim 23.