WO2016085717 - SYSTEMS AND METHODS FOR PERFORMING SIMULTANEOUS LOCALIZATION AND MAPPING USING MACHINE VISION SYSTEMS

Note: Text based on automated optical character recognition processes. Only the PDF version has legal value.

WHAT IS CLAIMED IS:

1. A mobile robot (10) configured to navigate an operating environment, comprising:

a body (100) having a top surface (108);

a drive (111) mounted to the body (100);

a recessed structure (130) beneath the plane of the top surface (108) near a geometric center of the body (100);

a controller circuit (605) in communication with the drive (111), wherein the controller circuit (605) directs the drive (111) to navigate the mobile robot (10) through an environment using a camera-based navigation system (120); and

a camera (125) including optics defining a camera field of view and a camera optical axis, wherein:

the camera (125) is positioned within the recessed structure (130) and is tilted so that the camera optical axis is aligned at an acute angle of 30-40 degrees above a horizontal plane in line with the top surface (108) and is aimed in a forward drive direction of the robot body (100),

the field of view of the camera (125) spans a frustum of 45-65 degrees in the vertical direction, and

the camera (125) is configured to capture images of the operating environment of the mobile robot (10).
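
Outside the claim language, the camera geometry of claim 1 reduces to simple arithmetic: a tilt of 30-40 degrees combined with a 45-65 degree vertical frustum places the entire field of view above the horizontal plane. The sketch below is illustrative only; the sample tilt and field-of-view values are assumptions taken from the middle of the claimed ranges.

```python
# Illustrative arithmetic for the camera geometry of claim 1. The sample
# tilt and field-of-view values below are assumptions taken from the middle
# of the claimed 30-40 degree and 45-65 degree ranges.

def vertical_view_span(tilt_deg: float, vfov_deg: float) -> tuple[float, float]:
    """Return the (lower, upper) edge angles of the frustum, measured in
    degrees above the horizontal plane of the top surface."""
    half = vfov_deg / 2.0
    return tilt_deg - half, tilt_deg + half

# A 35-degree tilt with a 50-degree vertical frustum views from 10 to 60
# degrees above horizontal: the camera sees walls and ceiling ahead of the
# robot rather than the floor.
lower, upper = vertical_view_span(35.0, 50.0)
print(f"frustum spans {lower:.0f} to {upper:.0f} degrees above horizontal")
```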

2. The mobile robot (10) of claim 1, wherein the camera (125) is protected by a lens cover (135) aligned at an acute angle with respect to the optical axis of the camera.

3. The mobile robot (10) of claim 2, wherein the lens cover (135) is set back relative to an opening of the recessed structure (130) and is aligned at an acute angle with respect to the optical axis of the camera (125) that is closer to perpendicular than an angle formed between a plane defined by the top surface (108) and the optical axis of the camera (125).

4. The mobile robot (10) of claim 2, wherein the acute angle is between 15 and 70 degrees.

5. The mobile robot (10) of claim 3, wherein the angle formed between a plane defined by the opening in the recessed structure (130) and the optical axis of the camera (125) ranges between 10 and 60 degrees.

6. The mobile robot (10) of claim 1, wherein the camera (125) field of view is aimed at static features located 3 feet to 8 feet above a floor surface, at a distance of 3 feet to 10 feet from the static features.

7. The mobile robot (10) of claim 6, wherein the camera images contain about 6-12 pixels per inch, features at the top of the image move upward between successive images more quickly than the speed at which the mobile robot moves, and features at the bottom of the image move downward between successive images more slowly than the speed at which the mobile robot moves, and wherein the controller is configured to determine the speed of the mobile robot and the location of features in the image in identifying disparity between successive images.

8. The mobile robot (10) of claim 7, wherein the mobile robot (10) moves at a velocity of 220 mm per second to 450 mm per second, and features lower than 45 degrees relative to the horizon will track slower than approximately 306 mm per second and features higher than 45 degrees will track faster than 306 mm per second.
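
Claims 7 and 8 encode a disparity rule a controller can check directly: with the camera tilted upward, static features above 45 degrees of elevation appear to track faster than the robot's own speed, and features below it slower. The following sketch is a hypothetical rendering of that rule; only the 45-degree crossover and the approximate 306 mm per second figure come from claim 8, everything else is assumed.

```python
# Hypothetical rendering of the disparity rule in claims 7-8. Only the
# 45-degree crossover elevation and the approximate 306 mm/s track speed
# come from claim 8; names and structure are assumptions.

ELEVATION_CROSSOVER_DEG = 45.0
CROSSOVER_TRACK_SPEED_MM_S = 306.0   # approximate crossover speed, claim 8

def expected_track_regime(feature_elevation_deg: float) -> str:
    """Classify how fast a static feature should track between successive
    images, relative to the crossover speed, given its elevation."""
    if feature_elevation_deg > ELEVATION_CROSSOVER_DEG:
        return "faster than ~306 mm/s (upper image, moves upward quickly)"
    return "slower than ~306 mm/s (lower image, moves downward slowly)"

# The controller compares measured feature velocities against these
# expectations, using the robot's own speed (220-450 mm/s per claim 8)
# to interpret frame-to-frame disparity.
print(expected_track_regime(60.0))   # ceiling-height feature
print(expected_track_regime(30.0))   # wall feature nearer the horizon
```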

9. The mobile robot (10) of claim 1, wherein the body (100) further contains: a memory (625) in communication with the controller circuit (605); and an odometry sensor system in communication with the drive (111),

wherein the memory further contains a visual measurement application (630), a simultaneous localization and mapping (SLAM) application (635), a landmarks database (650), and a map of landmarks (640),

wherein the controller circuit (605) directs a processor (610) to:

actuate the drive (111) and capture odometry data using the odometry sensor system;

acquire a visual measurement by providing at least the captured odometry data and a captured image to the visual measurement application (630);

determine an updated robot pose within an updated map of landmarks by providing at least the odometry data and the visual measurement as inputs to the SLAM application (635); and

determine robot behavior based upon inputs including the updated robot pose within the updated map of landmarks.
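
Claim 9 describes a sense-measure-update-act loop. A minimal structural sketch, with assumed class and method names standing in for the claimed applications (630, 635), might look like this:

```python
# Hypothetical structural sketch of claim 9's control loop. Class and method
# names are assumed stand-ins for the claimed applications, not the patent's
# actual implementation.

class VisualMeasurementApp:            # stands in for application (630)
    def measure(self, odometry, image):
        """Return a visual measurement (e.g. relative pose to a landmark)."""
        ...

class SlamApp:                         # stands in for application (635)
    def update(self, odometry, visual_measurement):
        """Return (updated_pose, updated_landmark_map)."""
        ...

def control_step(drive, odometry_sensor, camera, vm_app, slam_app, behavior):
    drive.actuate()
    odometry = odometry_sensor.read()       # capture odometry data
    image = camera.capture()                # image from the tilted camera
    measurement = vm_app.measure(odometry, image)
    pose, landmark_map = slam_app.update(odometry, measurement)
    behavior.select(pose, landmark_map)     # robot behavior from updated pose
```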

10. The mobile robot (10) of claim 9, wherein the landmarks database (650) comprises:

descriptions of a plurality of landmarks;

a landmark image of each of the plurality of landmarks and an associated landmark pose from which the landmark image was captured; and

descriptions of a plurality of features associated with a given landmark from the plurality of landmarks including a 3D position for each of the plurality of features associated with the given landmark.
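
The landmark database of claim 10 is essentially a keyed record type: a description, a landmark image with its capture pose, and a feature list carrying 3D positions. A minimal data-structure sketch, with assumed field names, follows.

```python
# Minimal data-structure sketch of the landmark database of claim 10.
# Field names are assumptions; numpy arrays stand in for images and points.

from dataclasses import dataclass, field

import numpy as np

@dataclass
class Feature:
    descriptor: np.ndarray         # appearance descriptor of the feature
    position_3d: np.ndarray        # (x, y, z) position, per claim 10
    image_xy: tuple                # pixel location in the landmark image

@dataclass
class Landmark:
    description: str               # description of the landmark
    image: np.ndarray              # landmark image stored at creation time
    pose: np.ndarray               # robot pose from which the image was taken
    features: list = field(default_factory=list)   # Feature records

# The database itself can be a simple keyed collection of such records.
landmarks_db: dict = {}            # landmark id -> Landmark
```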

11. The mobile robot (10) of claim 10, wherein the visual measurement application (630) directs the processor (610) to:

identify features within an input image;

identify a landmark from the landmark database (650) in the input image based upon the similarity of the features identified in the input image to matching features associated with a landmark image of the identified landmark in the landmark database (650); and

estimate a most likely relative pose by determining a rigid transformation of the 3D structure of the matching features associated with the identified landmark that results in the highest degree of similarity with the identified features in the input image, where the rigid transformation is determined based upon an estimate of relative pose and the acute angle at which the optical axis of the camera (125) is aligned above the direction of motion of the mobile robot (10).
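
The relative-pose estimation of claim 11 corresponds to the classic perspective-n-point (PnP) problem: recover the rigid transformation that best reprojects a landmark's stored 3D structure onto the features matched in the input image. The sketch below uses OpenCV's RANSAC PnP solver as one common stand-in; folding the claimed camera-tilt correction in as a fixed extrinsic rotation is an assumption, not the patent's stated method.

```python
# Hedged sketch of claim 11's pose step as a perspective-n-point problem,
# using OpenCV's RANSAC PnP solver as a stand-in. Folding the camera tilt in
# as a fixed extrinsic rotation is an assumption about how the claimed
# "acute angle" correction is applied, not the patent's stated method.

import cv2
import numpy as np

def estimate_relative_pose(points_3d, points_2d, camera_matrix,
                           tilt_deg=35.0):
    """points_3d: Nx3 stored landmark structure; points_2d: Nx2 matched
    image features. Returns (R, t) in the robot body frame, or None."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, None)
    if not ok:
        return None
    R_cam, _ = cv2.Rodrigues(rvec)         # rotation in the camera frame
    a = np.radians(tilt_deg)               # undo the upward camera tilt
    R_tilt = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(a), -np.sin(a)],
                       [0.0, np.sin(a), np.cos(a)]])
    return R_tilt @ R_cam, R_tilt @ tvec.reshape(3)
```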

12. The mobile robot (10) of claim 11, wherein identifying a landmark in the input image comprises comparing unrectified image patches from the input image to landmark images within the landmark database (650).

13. The mobile robot (10) of claim 11, wherein the SLAM application (635) directs the processor (610) to:

estimate the location of the mobile robot (10) within the map of landmarks (640) based upon a previous location estimate, odometry data and at least one visual measurement; and

update the map of landmarks (640) based upon the estimated location of the mobile robot (10), the odometry data, and the at least one visual measurement.

14. The mobile robot (10) of claim 9, wherein the visual measurement application (630) further directs the processor (610) to generate new landmarks by:

detecting features within images in a sequence of images;

identifying a set of features forming a landmark in multiple images from the sequence of images;

estimating 3D structure of the set of features forming a landmark and relative robot poses at the times each of the multiple images is captured using the identified set of features forming the landmark in each of the multiple images;

recording a new landmark in the landmark database (650), where recording the new landmark comprises storing: an image of the new landmark, at least the set of features forming the new landmark, and the 3D structure of the set of features forming the new landmark; and

notifying the SLAM application (635) of the creation of the new landmark.
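
Claim 14's landmark creation amounts to multi-view feature matching plus triangulation. The sketch below uses ORB features and two-view triangulation from OpenCV as assumed stand-ins for whatever detector and geometry the real system uses; the returned record mirrors the fields stored per claim 14.

```python
# Sketch of claim 14's landmark creation: detect features in two images from
# the sequence, match them, and triangulate their 3D structure. ORB features
# and OpenCV triangulation are assumed stand-ins for the real detector.

import cv2
import numpy as np

def create_landmark(img_a, img_b, proj_a, proj_b):
    """proj_a, proj_b: 3x4 camera projection matrices at the two poses."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)            # features seen in both
    pts_a = np.float64([kp_a[m.queryIdx].pt for m in matches]).T   # 2xN
    pts_b = np.float64([kp_b[m.trainIdx].pt for m in matches]).T
    pts_4d = cv2.triangulatePoints(proj_a, proj_b, pts_a, pts_b)
    structure = (pts_4d[:3] / pts_4d[3]).T           # Nx3 Euclidean points
    # The caller records this in the landmark database (650) and then
    # notifies the SLAM application (635) of the new landmark.
    return {"image": img_a, "features": matches, "structure": structure}
```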

15. The mobile robot (10) of claim 14, wherein the SLAM application (635) directs a processor (610) to:

determine a landmark pose as the pose of the mobile robot (10) at the time the image of the new landmark stored in the landmark database (650) is captured; and

record the landmark pose for the new landmark in the landmark database (650).

16. A mobile robot (10) configured to navigate an operating environment, comprising:

a body (100) containing:

a drive (111) configured to translate the robot (10) in a direction of motion;

a machine vision system (125) comprising a camera (125) that captures images of the operating environment of the mobile robot (10);

a processor (610);

memory (625) containing a simultaneous localization and mapping (SLAM) application (635) and a behavioral control application (630);

wherein the behavioral control application (630) directs the processor (610) to:

capture images using the machine vision system (125);

detect the presence of an occlusion obstructing a portion of the field of view of the camera (125) based on the captured images; and

generate a notification when an occlusion obstructing the portion of the field of view of the camera (125) is detected; and

wherein the behavioral control application (630) further directs the processor (610) to maintain occlusion detection data describing occluded and unobstructed portions of images being used by the SLAM application (635).

17. The mobile robot (10) of claim 16, wherein the occlusion detection data describes portions of images that correspond to portions of the field of view of the camera (125).

18. The mobile robot (10) of claim 16, wherein the occlusion detection data comprises a histogram that identifies different portions of the camera (125) field of view and provides a corresponding frequency with which each portion of the field of view is used by the SLAM application (635) to generate or identify features.

19. The mobile robot (10) of claim 16, wherein detecting the presence of an occlusion further comprises identifying a portion of the camera (125) field of view in the captured images that is not utilized by the SLAM application (635) for a threshold number of images.

20. The mobile robot (10) of claim 19, wherein the threshold number of images is between 1 and 10 images per 300 mm of robot travel for a mobile robot (10) traveling at a speed between 220 mm per second and 450 mm per second.
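
Claims 16 and 18-20 together describe a usage-histogram heuristic: partition the field of view, count how often the SLAM application draws features from each partition, and treat partitions that stay empty past a threshold number of images as occluded. A hypothetical sketch, with assumed grid size and threshold defaults, follows.

```python
# Hypothetical sketch of the usage-histogram heuristic of claims 16-20:
# partition the field of view into a grid, count SLAM feature usage per
# cell (claim 18), and flag cells unused past a threshold number of images
# (claims 19-20). Grid size and threshold defaults are assumptions.

import numpy as np

class OcclusionDetector:
    def __init__(self, grid=(4, 6), unused_threshold=10):
        self.usage = np.zeros(grid, dtype=int)     # per-cell feature counts
        self.images_seen = 0
        self.grid = grid
        self.unused_threshold = unused_threshold   # images before flagging

    def record_features(self, feature_xys, image_shape):
        """Bin the image locations of features the SLAM application used."""
        h, w = image_shape
        for x, y in feature_xys:
            r = min(int(y / h * self.grid[0]), self.grid[0] - 1)
            c = min(int(x / w * self.grid[1]), self.grid[1] - 1)
            self.usage[r, c] += 1
        self.images_seen += 1

    def occluded_fraction(self):
        """Fraction of the field of view that produced no SLAM features;
        a notification can fire once this exceeds a limit such as the
        30-75 percent range recited later in claim 22."""
        if self.images_seen < self.unused_threshold:
            return 0.0
        return float((self.usage == 0).mean())
```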

21. The mobile robot (10) of claim 16, wherein the behavioral control application (630) further directs a processor (610) to determine that a certain percentage of the camera (125) field of view is occluded prior to generating the notification.

22. The mobile robot (10) of claim 21, wherein the occluded percentage of the camera (125) field of view is between 30 and 75 percent.

23. The mobile robot (10) of claim 16, wherein the notification is one of a displayed message, text message, voicemail, or electronic mail message.

24. The mobile robot (10) of claim 16, wherein the notification is one of an alert, sound, or indicator light provided by the mobile robot (10).

25. The mobile robot (10) of claim 16, wherein the behavioral control application (630) further directs a processor (610) to send the notification of the occlusion to a server.

26. The mobile robot (10) of claim 16, wherein the occlusion is detected within a first portion of a set of captured images.

27. The mobile robot (10) of claim 26, wherein the occlusion is a semi-transparent occlusion that blurs the first portion of the set of captured images that corresponds to the obstructed portion of the field of view.

28. The mobile robot (10) of claim 26, wherein the occlusion is an opaque occlusion that completely occludes the first portion of the set of captured images that corresponds to the obstructed portion of the field of view.

29. The mobile robot (10) of claim 16, wherein the camera (125) is positioned within a recessed structure (130) under the top surface (108) of the mobile robot (10) and is tilted so that the camera (125) optical axis is aligned at an acute angle of 30-40 degrees above a horizontal plane in line with the top surface (108) and is aimed in a forward drive direction of the robot body (100), and wherein the field of view of the camera (125) spans a frustum of 45-65 degrees in the vertical direction.

30. The mobile robot (10) of claim 29, wherein the camera (125) is protected by a lens cover (135) aligned at an acute angle with respect to the optical axis of the camera (125).