(WO2019045711) SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) DEVICES WITH SCALE DETERMINATION AND METHODS OF OPERATING THE SAME
Note: This text is based on automatic optical character recognition (OCR). For legal purposes, please use the PDF version.

WHAT IS CLAIMED IS:

1. A device (14) comprising:

a camera (410);

a processor (420); and

a non-volatile memory (430) coupled to the processor (420) and comprising computer readable program code (440) that when executed by the processor (420) causes the processor (420) to perform operations comprising:

receiving (220), corresponding to a plurality of images (16a-16d) of an environment (10) comprising an object (12), a respective plurality of first distances (18a-18d) between the camera (410) and the object (12) in the environment (10);

calculating (230), for the plurality of images (16a-16d), using a simultaneous localization and mapping (SLAM) algorithm, a plurality of second distances between the camera (410) and the object (12) in a digital 3-Dimensional (3D) model of the environment (10);

calculating (240) a plurality of ratios corresponding to the plurality of images

(16a-16d) based on respective ones of the plurality of first distances (18a-18d) and respective ones of the second distances;

determining (250) a scale of the 3D model based on the plurality of ratios; and

creating (260) a scaled digital 3D model based on the 3D model and the determined scale of the 3D model, wherein distances and sizes in the scaled 3D model correspond to actual distances and sizes of the environment (10).
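Claim 1's scale-determination steps (240-260) can be sketched in code. This is a minimal illustration, not the patented implementation; the function names and the point-list representation of the 3D model are assumptions.

```python
def determine_scale(first_distances, second_distances):
    """Steps 240-250: one ratio per image, then a scale from the ratios.

    first_distances:  measured camera-to-object distances (e.g. from
                      autofocus or a TOF sensor), in real-world units
    second_distances: the corresponding camera-to-object distances in
                      the unscaled SLAM model
    """
    # Step 240: one ratio per image from the corresponding distance pair.
    ratios = [real / model
              for real, model in zip(first_distances, second_distances)]
    # Step 250: here the scale is simply the average of the ratios.
    return sum(ratios) / len(ratios)


def scale_model(points, scale):
    # Step 260: multiplying every model coordinate by the determined scale
    # yields a model whose distances and sizes correspond to the actual
    # distances and sizes of the environment.
    return [(x * scale, y * scale, z * scale) for x, y, z in points]
```

For example, if the measured distances are twice the model distances, the scale is 2 and every model point is doubled.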

2. The device (14) of Claim 1,

wherein the operations further comprise controlling (210) the camera (410) to produce the plurality of images (16a-16d) of the environment (10) comprising the object (12) using an autofocus algorithm to control a focus of the camera (410), and

wherein the receiving (220) of the plurality of first distances (18a-18d) comprises receiving a plurality of focus distances from the autofocus algorithm.

3. The device (14) of any one of Claims 1 through 2,

wherein the plurality of images (16a-16d) is a first plurality of images (16a-16d), and wherein the operations further comprise:

determining (270) an actual distance from the camera (410) to the object (12) based on the scaled 3D model; and

controlling (280) the camera (410) to produce a second image using the actual distance to control a focus of the camera (410).

4. The device (14) of any one of Claims 1 through 3,

wherein the operations further comprise determining that at least a predetermined number of the plurality of images (16a-16d) are acceptable and excluding images (16a-16d) from the plurality of images (16a-16d) that are not acceptable, and

wherein the at least a predetermined number of the plurality of images (16a-16d) are determined to be acceptable based on at least one of:

a determination that a location of the camera (410) may be calculated;

a determination that the object (12) is within a view of each of the at least a predetermined number of the plurality of images (16a-16d); and/or

a determination that a difference between the location of the camera (410) for ones of the at least a predetermined number of the plurality of images (16a-16d) and a location of the camera (410) for a previous acceptable image is greater than a threshold.
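The three acceptance tests of claim 4 amount to a filtering pass over the captured images. The sketch below uses assumed data structures; the 'pose' and 'object_seen' fields and the thresholds are illustrative, not taken from the claims.

```python
def filter_acceptable(images, min_count, baseline_threshold):
    """Keep only images that pass the claim-4 style acceptance checks.

    Each image is a dict with hypothetical fields:
      'pose'        -- camera location (x, y, z), or None when the SLAM
                       algorithm could not calculate a location
      'object_seen' -- True when the object is within the image's view
    """
    accepted = []
    for img in images:
        pose = img.get('pose')
        if pose is None:                # location cannot be calculated
            continue
        if not img.get('object_seen'):  # object not within the view
            continue
        if accepted:
            prev = accepted[-1]['pose']
            moved = sum((a - b) ** 2 for a, b in zip(pose, prev)) ** 0.5
            if moved <= baseline_threshold:  # too close to the previous
                continue                     # acceptable image
        accepted.append(img)
    if len(accepted) < min_count:
        raise ValueError("fewer acceptable images than required")
    return accepted
```

Requiring a minimum camera baseline between accepted images keeps the SLAM triangulation well-conditioned, which is a plausible motivation for the third test.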

5. The device (14) of any one of Claims 1 through 4, wherein the determining (250) of the scale of the 3D model comprises calculating an average of the plurality of ratios.

6. The device (14) of Claim 5, wherein the calculating of the average of the plurality of ratios comprises:

calculating (251) a first average of the plurality of ratios;

calculating (252) a deviation from the first average for each of the plurality of ratios; and

calculating (254) a second average of ones of the plurality of ratios that deviate from the first average by less than a threshold value.

7. The device (14) of Claim 6, wherein the threshold value is a predetermined multiple of a standard deviation of the plurality of ratios.
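Claims 5 through 7 describe a two-pass, outlier-rejecting average: take a first mean of all ratios, then re-average only the ratios whose deviation stays under a multiple of the standard deviation. A minimal sketch follows; the multiple `k` stands in for the claim-7 "predetermined multiple" and its default of 1.0 is an illustrative assumption.

```python
import statistics

def robust_scale(ratios, k=1.0):
    # Step 251: first average over all of the ratios.
    first_avg = statistics.mean(ratios)
    # Claim 7: the threshold is a multiple k of the standard deviation.
    threshold = k * statistics.pstdev(ratios)
    # Step 252: deviation of each ratio from the first average;
    # step 254: second average over ratios below the threshold.
    inliers = [r for r in ratios if abs(r - first_avg) < threshold]
    return statistics.mean(inliers) if inliers else first_avg
```

With ratios [2.0, 2.1, 1.9, 10.0], the first average (4.0) is pulled up by the outlier; the second pass discards 10.0 and returns 2.0. When all ratios are identical the standard deviation is zero, so the sketch falls back to the first average.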

8. The device (14) of Claim 1, further comprising a Time of Flight (TOF) sensor (450) that is configured to provide the plurality of first distances (18a-18d).

9. A method comprising:

receiving (220), corresponding to a plurality of images (16a-16d) of an environment (10) comprising an object (12), a respective plurality of first distances (18a-18d) between the object (12) in the environment (10) and a camera (410) that was used for capturing the plurality of images (16a-16d);

calculating (230), for the plurality of images (16a-16d), using a simultaneous localization and mapping (SLAM) algorithm, a plurality of second distances between the camera (410) and the object (12) in a digital 3-Dimensional (3D) model of the environment (10);

calculating (240) a plurality of ratios corresponding to the plurality of images (16a-16d) based on respective ones of the plurality of first distances (18a-18d) and respective ones of the second distances;

determining (250) a scale of the 3D model based on the plurality of ratios; and

creating (260) a scaled digital 3D model based on the 3D model and the determined scale of the 3D model, wherein distances and sizes in the scaled 3D model correspond to actual distances and sizes of the environment (10).

10. The method of Claim 9, further comprising:

controlling (210) the camera (410) to produce the plurality of images (16a-16d) of the environment (10) comprising the object (12) using an autofocus algorithm to control a focus of the camera (410),

wherein the receiving (220) of the plurality of first distances (18a-18d) comprises receiving a plurality of focus distances from the autofocus algorithm.

11. The method of any one of Claims 9 through 10,

wherein the plurality of images (16a-16d) is a first plurality of images (16a-16d), and

wherein the operations further comprise:

determining (270) an actual distance from the camera (410) to the object (12) based on the scaled 3D model; and

controlling (280) the camera (410) to produce a second image using the actual distance to control a focus of the camera (410).

12. The method of any one of Claims 9 through 11, the operations further comprising:

determining that at least a predetermined number of the plurality of images (16a-16d) are acceptable and excluding images (16a-16d) from the plurality of images (16a-16d) that are not acceptable,

wherein the at least a predetermined number of the plurality of images (16a-16d) are determined to be acceptable based on at least one of:

a determination that a location of the camera (410) may be calculated;

a determination that the object (12) is within a view of each of the at least a predetermined number of the plurality of images (16a-16d); and/or

a determination that a difference between the location of the camera (410) for ones of the at least a predetermined number of the plurality of images (16a-16d) and a location of the camera (410) for a previous acceptable image is greater than a threshold.

13. The method of any one of Claims 9 through 12, wherein the determining (250) of the scale of the 3D model comprises calculating an average of the plurality of ratios.

14. The method of Claim 13, wherein the calculating of the average of the plurality of ratios comprises:

calculating (251) a first average of the plurality of ratios;

calculating (252) a deviation from the first average for each of the plurality of ratios; and

calculating (254) a second average of ones of the plurality of ratios that deviate from the first average by less than a threshold value.

15. The method of Claim 14, wherein the threshold value is a predetermined multiple of a standard deviation of the plurality of ratios.

16. The method of Claim 9, wherein the plurality of first distances (18a-18d) are received from a Time of Flight (TOF) sensor (450).

17. A computer program product, the computer program product comprising a non-transitory computer readable storage medium (430) having computer readable program code (440) embodied in the medium (430) that when executed by a processor (420) causes the processor (420) to perform the operations of the method of any of Claims 9 through 16.

18. The computer program product of Claim 17,

wherein the operations further comprise controlling (210) the camera (410) to produce the plurality of images (16a-16d) of the environment (10) comprising the object (12) using an autofocus algorithm to control a focus of the camera (410), and

wherein the receiving (220) of the plurality of first distances (18a-18d) comprises receiving a plurality of focus distances from the autofocus algorithm.

19. The computer program product of any one of Claims 17 through 18,

wherein the plurality of images (16a-16d) is a first plurality of images (16a-16d), and wherein the operations further comprise:

determining (270) an actual distance from the camera (410) to the object (12) based on the scaled 3D model; and

controlling (280) the camera (410) to produce a second image using the actual distance to control a focus of the camera (410).

20. The computer program product of any one of Claims 17 through 19, wherein the determining (250) of the scale of the 3D model comprises:

calculating (251) a first average of the plurality of ratios;

calculating (252) a deviation from the first average for each of the plurality of ratios; and

calculating (254) a second average of ones of the plurality of ratios that deviate from the first average by less than a threshold value.