WO2020142036 - INDOOR DRONE NAVIGATION SUPPORT SYSTEM BASED ON MACHINE LEARNING

TECHNICAL FIELD

The invention relates to systems for providing the positioning of an unmanned aerial vehicle in a setting without GPS access.

BACKGROUND ART

Unmanned aerial vehicles can be used for inventory control within a storage area. Since the position information obtained from GPS is not accurate enough indoors, solutions are needed for positioning unmanned aerial vehicles inside buildings. The patent application CN104035091A discloses a system in which a marker is determined and the position of the unmanned aerial vehicle is determined relative to the marker. The patent application WO18057629A1 discloses an unmanned aerial vehicle whose position is determined based on the signal strength of the data transfer with three wireless communication access points. In such systems, positioning accuracy decreases when there are inner walls and shelves in the setting where positioning is performed.

BRIEF DESCRIPTION OF THE INVENTION

The present invention relates to a positioning system and method in order to eliminate the above-mentioned disadvantages and bring new advantages to the relevant technical field.

An object of the invention is to provide a system and method for positioning the unmanned aerial vehicle with an increased accuracy.

It is an object of the invention to provide an unmanned aerial vehicle positioning system and method that requires reduced storage space and reduced processing capacity for operation.

The object of the invention is to implement a navigation support system based on machine learning on a hardware platform with low processing and memory capacity. A further object of this invention is to develop a system that provides accurate visual navigation under image distortion caused by flight dynamics, sensor noise, and the perspective difference between the flight vector and the navigated plane.

Another object of the invention is to provide a system and method based on machine learning that performs positioning of the unmanned aerial vehicle with increased accuracy even when markers without an error-correction feature are used.

The present invention is a system developed for positioning an unmanned aerial vehicle in a setting, in order to accomplish all of the objects mentioned above and those that will be inferred from the following detailed description. Accordingly, at least one primary camera of the unmanned aerial vehicle is arranged to capture at least one image comprising at least one of the original markers arranged in an order along at least one first axis in the said setting in which the unmanned aerial vehicle flies,

at least one processor unit of the unmanned aerial vehicle

- accesses the first marker dataset stored in a memory unit, comprising at least one marker image for each of the said markers,

- generates the first prediction of which of the images in the first marker dataset the marker in the image belongs to, by applying the first classification algorithm to the image,

- generates the second prediction of which of the images in the first marker dataset the marker in the image belongs to, by applying the second classification algorithm, the type of which is different from that of the said first classification algorithm, to the image,

- generates the third prediction of which of the images in the first marker dataset the marker in the image belongs to, by applying the third classification algorithm, the type of which is different from those of the said first classification algorithm and the second classification algorithm, to the image,

- matches the marker in the image to the marker image receiving the highest number of predictions across the said first prediction, the said second prediction, and the said third prediction,

- is configured to access a map stored in the said memory unit, covering the original markers and their positions in the setting in which the unmanned aerial vehicle flies, to determine the position of the matched marker on the first axis of the map. Thus, by identifying the marker based on the number of matching predictions from three different classification algorithms, identification accuracy is increased over that of the individual algorithms, each of which may generate incorrect predictions in various situations.
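By way of illustration only, the three-classifier majority vote described above might be sketched as follows in Python, assuming scikit-learn and flattened marker images as feature vectors; the classifier choices mirror the preferred embodiment described later, and all names and parameters here are hypothetical rather than part of the claimed system:

```python
# Minimal sketch of the described three-prediction majority vote.
from collections import Counter

from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

def train_classifiers(X_train, y_train):
    """Train three classifiers of different types on the marker dataset.
    X_train: (n_samples, n_features) flattened marker images;
    y_train: marker identification numbers."""
    classifiers = [
        RandomForestClassifier(n_estimators=100),
        KNeighborsClassifier(n_neighbors=5),
        LogisticRegression(max_iter=1000),
    ]
    for clf in classifiers:
        clf.fit(X_train, y_train)
    return classifiers

def match_marker(classifiers, image_vector):
    """Return the marker id predicted by the greatest number of classifiers
    (on a three-way tie, the first algorithm's prediction is kept)."""
    predictions = [clf.predict(image_vector.reshape(1, -1))[0] for clf in classifiers]
    marker_id, _ = Counter(predictions).most_common(1)[0]
    return marker_id
```

The same hard majority vote is available off the shelf as scikit-learn's VotingClassifier with voting="hard"; the manual form above simply mirrors the three separate predictions described here.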

A preferred embodiment of the invention is characterized in that at least a second camera of the unmanned aerial vehicle is arranged to capture at least one image comprising at least one of the original markers arranged in an order along a second axis perpendicular to the first axis, wherein the processor unit

- accesses the second marker dataset stored in the memory unit, comprising at least one marker image for each of the said markers,

- generates the first prediction of which of the marker images in the second marker dataset the marker in the image belongs to, by applying the first classification algorithm to the image,

- generates the second prediction of which of the marker images in the second marker dataset the marker in the image belongs to, by applying the second classification algorithm, the type of which is different from that of the said first classification algorithm, to the image,

- generates the third prediction of which of the marker images in the second marker dataset the marker in the image belongs to, by applying the third classification algorithm, the type of which is different from those of the said first classification algorithm and the second classification algorithm, to the image,

- matches the marker in the image to the marker image receiving the highest number of predictions across the said first prediction, the said second prediction, and the said third prediction,

- is configured to access the map to determine the position of the matched marker on the second axis of the map. Thus, the two-dimensional position of the unmanned aerial vehicle can be determined.

A preferred embodiment of the invention is characterized in that at least a third camera of the unmanned aerial vehicle is arranged to capture at least one image comprising at least one of the original markers arranged in an order along a third axis perpendicular to the first axis and the second axis, wherein the processor unit

- accesses the third marker dataset stored in the memory unit, comprising at least one marker image for each of the said markers,

- generates the first prediction of which of the marker images in the third marker dataset the marker in the image belongs to, by applying the first classification algorithm to the image,

- generates the second prediction of which of the marker images in the third marker dataset the marker in the image belongs to, by applying the second classification algorithm, the type of which is different from that of the said first classification algorithm, to the image,

- generates the third prediction of which of the marker images in the third marker dataset the marker in the image belongs to, by applying the third classification algorithm, the type of which is different from those of the said first classification algorithm and the second classification algorithm, to the image,

- matches the marker in the image to the marker image receiving the highest number of predictions across the said first prediction, the said second prediction, and the said third prediction,

- is configured to access the map to determine the position of the matched marker on the third axis of the map. Thus, the three-dimensional position of the unmanned aerial vehicle can be determined.

Another preferred embodiment of the invention is characterized in that an Ensemble Voting Algorithm is used in the step of “matching the marker in the image to the marker image having the highest number of predictions across the said first prediction, the said second prediction, and the said third prediction”.

Another preferred embodiment of the invention is characterized in that the system comprises a user terminal, and the unmanned aerial vehicle comprises a communication unit that provides wireless data exchange between the processor unit and the user terminal, the processor unit being arranged to transmit position information and other information collected from the surrounding area to the user terminal.

Another preferred embodiment of the invention is characterized in that the processor unit is arranged to support the flight control unit in determining navigation by transferring navigation information to the flight control unit of the unmanned aerial vehicle.

Another preferred embodiment of the invention is characterized in that the said markers comprise a QR code.

Another preferred embodiment of the invention is characterized in that each of the said markers comprises a unique pattern.

Another preferred embodiment of the invention is characterized in that the markers on the first axis are provided in a first color, the markers on the second axis are provided in a second color different from the said first color, and the markers on the third axis are provided in a third color different from the first color and the second color. Thus, it is ensured that the markers on the first axis, on the second axis, and on the third axis are differentiated from each other.

Another preferred embodiment of the invention is characterized in that the first color, the second color, and the third color are selected out of red, blue, and green.
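Purely as an illustration, distinguishing the axis of a detected marker by its dominant color channel might look as follows; the channel-to-axis assignment below is a hypothetical choice, since the embodiment only states that the three colors are selected out of red, blue, and green:

```python
import numpy as np

# Hypothetical assignment of RGB channels (R=0, G=1, B=2) to the three axes.
AXIS_BY_CHANNEL = {0: "first axis (red)", 2: "second axis (blue)", 1: "third axis (green)"}

def axis_of_marker(crop_rgb: np.ndarray) -> str:
    """Return the axis whose assigned color dominates an RGB marker crop."""
    channel_means = crop_rgb.reshape(-1, 3).mean(axis=0)  # mean R, G, B values
    return AXIS_BY_CHANNEL[int(np.argmax(channel_means))]
```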

Another preferred embodiment of the invention is characterized in that the first classification algorithm is the Random Forest Algorithm, the second classification algorithm is the K-nearest Neighbors Algorithm, and the third classification algorithm is the Logistic Regression Algorithm.

Another preferred embodiment of the invention is characterized in that a plurality of marker images is provided for each marker and the said marker images are obtained by subjecting the original marker image to at least one distortion process.

Another preferred embodiment of the invention is characterized in that the marker images in the marker data set are compressed.

Another preferred embodiment of the invention is characterized in that the marker images in the marker dataset are compressed by using the Principal Component Analysis (PCA) method. Thus, the system is able to operate on embedded systems with lower memory and storage capacity.
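A minimal sketch of this compression step, assuming scikit-learn's PCA over flattened marker images (the component count is a hypothetical choice):

```python
import numpy as np
from sklearn.decomposition import PCA

def compress_marker_images(images: np.ndarray, n_components: int = 50):
    """images: array of shape (n_samples, height * width), flattened marker images.
    Returns the fitted PCA model and the compressed low-dimensional representation,
    which can be stored in place of the raw pixels."""
    pca = PCA(n_components=n_components)
    compressed = pca.fit_transform(images)
    return pca, compressed

# A captured marker image is projected into the same space before classification:
# features = pca.transform(captured.reshape(1, -1))
```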

BRIEF DESCRIPTION OF THE DRAWING

Figure 1 shows the representative view of the unmanned aerial vehicle and the setting in which the unmanned aerial vehicle is positioned.

Figure 2 shows a schematic view of the system.

Figure 3 shows a representative view of the memory unit.

DETAILED DESCRIPTION OF THE INVENTION

In this detailed description, the subject matter of the invention is described only by way of examples for a better understanding, and these examples have no limiting effect.

With reference to Figure 1, the invention is a system and method for positioning unmanned aerial vehicles (100) (drones) in a setting (400). The subject setting (400) may be an indoor storage space.

Original markers (300) are arranged in an order in the said setting (400). Each marker (300) has a different pattern. In this possible embodiment, the markers (300) are arranged in an order along a first axis (401) on the ceiling (430) of the setting (400) and along axes parallel to this first axis (401). The markers (300) are also arranged in an order along a second axis (402) perpendicular to the first axis (401) on the ceiling (430) and along axes parallel to this axis. In this possible embodiment, markers (300) are likewise arranged in an order along a third axis (403) and along the axes parallel to it on the sidewalls (420). Thus, the markers (300) are provided on three mutually perpendicular axes.

In a possible embodiment, the markers (300) on the first axis (401) and those on the second axis (402) are located on the floor (410).

In a possible embodiment, storage devices (500) are provided in the setting (400) for placing objects on the shelves (510) therein. In addition to the sidewalls (420), the markers (300) are also arranged in an order on the surfaces of the said storage devices (500) along axes parallel to the third axis (403).

The patterns of the said markers (300) may be a QR code or a different type of pattern. The patterns on the first axis (401) are in a first color, the patterns on the second axis (402) are in a second color, and the patterns on the third axis (403) are in a third color. The first color, the second color, and the third color are different from each other and can be distinguished by the human eye. The first color, the second color, and the third color are preferably selected out of red, blue, and green.

With reference to Figure 2, the unmanned aerial vehicle (100) comprises a flight control unit (120). The flight control unit (120) mentioned herein refers to all components (not shown in the figure) required for flying the unmanned aerial vehicle (100). The said components of the flight control unit (120) comprise, but are not limited to, blades, propellers, motors, driver circuits, various sensors, and processors for controlling the driver circuits.

The unmanned aerial vehicle (100) comprises a processor unit (110) configured to control the flight control unit (120). The said processor unit (110) is a microprocessor. The unmanned aerial vehicle (100) may also comprise a memory unit (140), which stores the functional modules comprising the command lines executed by the processor unit (110). The memory unit (140) may comprise a permanent-type and/or a temporary-type memory, or a suitable combination thereof. Preferably, a Raspberry Pi device is provided as the processor unit (110) and the memory unit (140).

The unmanned aerial vehicle (100) comprises a first camera (131). The said first camera (131) is placed so as to see the markers (300) on the first axis (401) when the unmanned aerial vehicle (100) is in the flying state. The first camera (131) captures an image containing at least one of the markers (300) on the first axis (401) and the axes extending parallel to it, and transmits it to the processor unit (110). In a possible embodiment of the invention, the unmanned aerial vehicle (100) also comprises a second camera (132). The second camera (132) is placed so as to see the markers (300) arranged in an order on the second axis (402), which is in the same plane as the first axis (401) but perpendicular to it. The third camera (133) is placed so as to see the markers (300) arranged in an order on the sidewalls (420) along the third axis (403), which is perpendicular to the first axis (401) and the second axis (402); in other words, it extends between the ceiling (430) and the floor (410). To see all of the sidewalls (420), a plurality of third cameras (133), one for each side of the unmanned aerial vehicle (100), may be provided. In a possible embodiment, only the first camera (131) and the third camera (133) are provided, since the markers (300) on the first axis (401) and the second axis (402) are located on the same plane. Thus, both the markers (300) on the first axis (401) and its parallel axes and the markers (300) on the second axis (402) and its parallel axes can be captured using the first camera (131).

The memory unit (140) comprises a setting map (143). The setting map (143) comprises the unique identification numbers representing the markers (300) and the coordinates at which the marker (300) with each identification number is located in the setting (400). For example, one point in the setting (400) may be taken as the origin for the three axes, and the coordinates may be determined relative to this origin. In a case where a lower corner of the room is assumed to be O (0, 0, 0), the coordinate of a marker C1 on the third axis (403) at a height of 3 units from the ground can be expressed as C1 (x, y, 3).
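Purely as an illustration, such a map might be represented as a simple lookup from identification number to coordinates; the entries below are hypothetical:

```python
# Hypothetical setting map: marker identification number -> (x, y, z)
# coordinates relative to the chosen origin O (0, 0, 0).
SETTING_MAP = {
    "C1": (4.0, 7.0, 3.0),  # a marker on the third axis, 3 units above the floor
    "A1": (2.0, 0.0, 5.0),
}

def position_of(marker_id: str):
    """Return the stored coordinates of a matched marker, or None if unknown."""
    return SETTING_MAP.get(marker_id)
```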

The memory unit (140) also comprises a marker dataset (142). For each marker (300), the marker library (142) comprises at least one marker image associated with the unique identification number of that marker (300). In more detail, the marker dataset (142) comprises a plurality of marker images provided for each marker (300). The plurality of marker images provided for a marker (300) are versions of an original marker image subjected to various distortion operations. The distortion processes mentioned herein may be blurring and obtaining views of the image from different angles.
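A minimal sketch of generating such distorted variants, assuming OpenCV; the blur kernel and warp corners are hypothetical parameters:

```python
import cv2
import numpy as np

def augment_marker(original: np.ndarray) -> list:
    """Return the original marker image plus blurred and perspective-warped variants."""
    h, w = original.shape[:2]
    blurred = cv2.GaussianBlur(original, (5, 5), 0)
    # Simulate viewing the marker from a slightly different angle.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[10, 5], [w - 5, 0], [w, h], [0, h - 10]])
    warped = cv2.warpPerspective(original, cv2.getPerspectiveTransform(src, dst), (w, h))
    return [original, blurred, warped]
```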

The marker images are stored in the memory unit (140) in a compressed format. More specifically, the marker images are compressed using the Principal Component Analysis (PCA) method before being stored in the memory unit (140). This allows the library to be used on systems with lower processing power and storage capacity.

In a possible embodiment, the marker dataset (142) comprises the first marker dataset (1421), comprising marker images of the markers (300) arranged in an order on the first axis (401) and along the axes parallel to the said axis; the second marker dataset (1422), comprising marker images of the markers (300) arranged in an order on the second axis (402) and along the axes parallel to the said axis; and the third marker dataset (1423), comprising marker images of the markers (300) arranged in an order on the third axis (403) and along the axes parallel to the third axis (403).

The memory unit (140) comprises a positioning module. The processor unit (110) determines the current position of the unmanned aerial vehicle (100) by running this module, which comprises functional submodules consisting of command lines.

The navigation module (141), which is run by the processor unit (110), enables the first camera (131) to capture a first image showing a marker (300) as an input. By applying the first classification algorithm to the said first image, it generates the first prediction of which of the images in the marker dataset (142), particularly the first marker dataset (1421), the marker (300) in the image belongs to. The first prediction may be, but is not limited to, an expression such as "it is marker A1 with a probability of 90%" or a numerical output with that meaning. The navigation module (141) then applies the second classification algorithm to the first image to generate the second prediction of which of the images in the first marker dataset (1421) the marker (300) in the image belongs to. The second prediction may be, for example, an expression such as "it is marker A1 with a probability of 80%".

The navigation module also applies the third classification algorithm to the first image to generate the third prediction of which of the images in the first marker dataset (1421) the marker in the image belongs to. The third prediction may be, for example, an expression such as "it is marker A2 with a probability of 50%". An Ensemble Voting Algorithm then selects the most likely classification out of these predictions. Thus, the navigation module (141) evaluates the first prediction, the second prediction, and the third prediction, and selects the marker image predicted by the greatest number of algorithms. According to the above examples, the selected marker image will be the marker image of the marker A1, since two of the three algorithms predict it. By determining the coordinate of the marker in the selected marker image from the setting map (143), it determines the position of the unmanned aerial vehicle on the first axis (401) or on the axes parallel to the first axis (401).

Similar to the above operations, the navigation module (141) then applies the first classification algorithm, the second classification algorithm, and the third classification algorithm to the second image captured by the second camera (132) to generate the first prediction, the second prediction, and the third prediction of which of the images in the second marker dataset (1422) the marker in the image belongs to. By selecting the marker image predicted by the greatest number of algorithms using the Ensemble Voting Algorithm, the coordinate of the marker in the selected marker image is determined on the setting map (143). Thus, the coordinate of the marker in the second image on the second axis (402) or the axes parallel to the second axis (402) is determined.

Similar to the above operations, the navigation module (141) then applies the first classification algorithm, the second classification algorithm, and the third classification algorithm to the third image captured by the third camera (133) to generate the first prediction, the second prediction, and the third prediction of which of the images in the third marker dataset (1423) the marker in the image belongs to. By selecting the marker image predicted by the greatest number of algorithms using the Ensemble Voting Algorithm, the coordinate of the marker in the selected marker image is determined on the setting map (143). Thus, the coordinate of the marker in the third image on the third axis (403) or the axes parallel to the third axis (403) is determined.

When the identities and coordinates of the markers in the first image, the second image, and the third image are determined, the position of the unmanned aerial vehicle (100) in the three-dimensional coordinate system within the setting (400) is determined.
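Purely as an illustration, combining the three per-axis matches into one position estimate might look as follows; the helpers reuse the hypothetical sketches above, with one set of classifiers and one PCA model per axis:

```python
def estimate_position(images, per_axis_classifiers, per_axis_pca, setting_map):
    """images: (first_image, second_image, third_image), one per camera.
    Matches one marker per image and combines the mapped coordinates into
    an (x, y, z) estimate, taking the relevant component from each axis."""
    coords = []
    for axis, image in enumerate(images):
        features = per_axis_pca[axis].transform(image.reshape(1, -1))
        marker_id = match_marker(per_axis_classifiers[axis], features)
        coords.append(setting_map[marker_id])
    return (coords[0][0], coords[1][1], coords[2][2])
```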

The first classification algorithm, the second classification algorithm, and the third classification algorithm are selected out of the Random Forest Algorithm, the K-nearest Neighbors Algorithm, and the Logistic Regression Algorithm, and the voting among their predictions is performed with the Ensemble Voting Algorithm.

In a possible embodiment, the memory unit (140) comprises an object library (1424). The object library (1424) comprises images of the object markers (350) on the objects (600) in the storage device (500), together with the specific identification numbers matched to these images. The processor unit (110) determines the matching object marker (350) in the object library (1424) by applying the first classification algorithm, the second classification algorithm, and the third classification algorithm to an image comprising an object marker (350), which is captured by a predetermined camera (130).

The objects (600) mentioned herein may be products, boxes, files, etc.

The unmanned aerial vehicle (100) also comprises a communication unit (150) arranged to allow data exchange. The communication unit (150) provides wireless data transfer between the unmanned aerial vehicle (100) and a user terminal (200).

The commands entered via a user interface (210) of the user terminal (200) are transmitted to the processor unit (110) via the communication unit (150). The processor unit (110) then executes the entered commands and sends information about the location or inventory. The user terminal (200) may be a general or special purpose computer, mobile phone, tablet computer, etc.

Additionally, the processor unit (110) can also determine the orientation of the unmanned aerial vehicle (100) relative to the marker from the image it captures.

In a possible embodiment of the invention, the unmanned aerial vehicle (100) may comprise a barcode reader (not shown in the figure) associated with the processor unit (110) for reading the object markers (350).

The scope of protection of the invention is set forth in the annexed claims and cannot be limited to the exemplary explanations in this detailed description. It is evident that a person skilled in the art can make similar configurations in light of the explanations above without departing from the main concept of the invention.

REFERENCE NUMBERS IN THE FIGURE

100: Unmanned Aerial Vehicle

110: Processor unit

120: Flight control unit

130: Camera

131: First camera

132: Second camera

133: Third camera

140: Memory unit

141: Navigation module

142: Marker library

1421: First marker dataset

1422: Second marker dataset

1423: Third marker dataset

1424: Object library

143: Setting map

150: Communication unit

200: User terminal

210: User interface

300: Marker

350: Object marker

400: Setting

401: First axis

402: Second axis

403: Third axis

410: Floor

420: Sidewall

430: Ceiling

500: Storage device

510: Shelf

600: Object