WO2020139391 - VEHICLE-BASED VIRTUAL STOP AND YIELD LINE DETECTION

What is claimed is:

1. A vehicle comprising:

a plurality of sensors, wherein a first sensor in the plurality of sensors is configured to generate velocity data, and wherein a second sensor in the plurality of sensors is configured to generate location data; and

a processor configured with computer-executable instructions, wherein the computer-executable instructions, when executed, cause the processor to:

detect a vehicle stop at a first time instant using the generated velocity data;

determine a location of the vehicle at the first time instant using the generated location data;

determine, using a deep neural network stored on the vehicle, a situation of the vehicle at the determined location;

determine, based on at least one of the determined situation or map data, that a cause of the detected vehicle stop is the vehicle arriving at an unmarked intersection;

generate virtual stop line data in response to determining that the cause of the detected vehicle stop is the vehicle arriving at the unmarked intersection; and

transmit the virtual stop line data to a server over a network via a communication array.
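
The sketch below illustrates, in Python, the stop-detection and reporting flow recited in Claim 1: detect a stop from velocity data, package virtual stop line data for the stop location, and transmit it to a server over a network. The function names, the 0.1 m/s stop threshold, the JSON payload layout, and the server URL are all illustrative assumptions; none of them are specified by the claim.

```python
import json
import time
import urllib.request

# Assumed cutoff below which the vehicle is treated as stopped (not specified in the claim).
STOP_SPEED_THRESHOLD_MPS = 0.1


def detect_vehicle_stop(velocity_mps: float) -> bool:
    """Detect a vehicle stop from velocity data generated by the first sensor."""
    return abs(velocity_mps) < STOP_SPEED_THRESHOLD_MPS


def build_virtual_stop_line(lat: float, lon: float, lane_id: str, timestamp: float) -> dict:
    """Assemble virtual stop line data for the stop location (hypothetical payload layout)."""
    return {
        "type": "virtual_stop_line",
        "latitude": lat,
        "longitude": lon,
        "lane_id": lane_id,
        "detected_at": timestamp,
    }


def transmit_virtual_stop_line(payload: dict, server_url: str) -> None:
    """Send the virtual stop line data to a server over the network."""
    request = urllib.request.Request(
        server_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)


if __name__ == "__main__":
    # Example: the vehicle has stopped and the on-board model (see Claims 3-4) has
    # attributed the stop to an unmarked intersection, so a virtual stop line is reported.
    if detect_vehicle_stop(velocity_mps=0.05):
        payload = build_virtual_stop_line(37.7749, -122.4194, lane_id="lane_2", timestamp=time.time())
        # The endpoint below is hypothetical.
        transmit_virtual_stop_line(payload, server_url="https://example.com/virtual-stop-lines")
```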

2. The vehicle of Claim 1, wherein the location of the vehicle comprises geographical coordinates of the vehicle at the first time instant and a lane on a road in which the vehicle was positioned at the first time instant.

3. The vehicle of Claim 1, wherein the computer-executable instructions, when executed, further cause the processor to:

generate a grid map;

apply the grid map as an input to the deep neural network; and

determine the situation of the vehicle based on an output of the deep neural network.
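
As a hedged illustration of Claim 3, the following Python sketch applies a grid map as input to a small convolutional network and reads the vehicle's situation from its output. The architecture, input size, channel layout, and number of situation classes are assumptions, and PyTorch is used purely for illustration; the claim only requires that a grid map is fed to a deep neural network and that the situation is determined from its output.

```python
import torch
import torch.nn as nn


class SituationNet(nn.Module):
    """Small convolutional classifier mapping a grid map to situation logits (illustrative)."""

    def __init__(self, num_situations: int = 10, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_situations)

    def forward(self, grid_map: torch.Tensor) -> torch.Tensor:
        x = self.features(grid_map)
        return self.classifier(x.flatten(1))


# Usage: classify a single 3-channel, 128x128 grid map (all dimensions are assumptions).
model = SituationNet()
grid_map = torch.zeros(1, 3, 128, 128)
situation_index = model(grid_map).argmax(dim=1).item()
```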

4. The vehicle of Claim 3, wherein the computer-executable instructions, when executed, further cause the processor to:

obtain the map data and at least one of light detection and ranging (LiDAR) data, radar data, or camera data; and

generate an image in which information derived from the map data is laid over information derived from at least one of the LiDAR data, the radar data, or the camera data to form the grid map.
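
Claim 4 describes forming the grid map by laying map-derived information over sensor-derived information. Below is a minimal sketch of that overlay, assuming both sources have already been rasterized into a common bird's-eye-view grid; the channel layout, array shapes, and function name are illustrative assumptions.

```python
import numpy as np


def generate_grid_map(sensor_occupancy: np.ndarray, map_lane_mask: np.ndarray) -> np.ndarray:
    """Form a grid map image by laying map-derived information over sensor-derived information.

    sensor_occupancy : HxW array in [0, 1] derived from LiDAR, radar, or camera data
    map_lane_mask    : HxW array in [0, 1] derived from map data (e.g. lane geometry)
    The three output channels (sensor, map, overlay) are an assumed layout.
    """
    overlay = np.maximum(sensor_occupancy, map_lane_mask)
    return np.stack([sensor_occupancy, map_lane_mask, overlay], axis=0)


# Usage with dummy 128x128 rasters; real rasters would come from the perception and map stacks.
sensor_occupancy = np.random.rand(128, 128)
map_lane_mask = np.zeros((128, 128))
map_lane_mask[:, 60:68] = 1.0  # a straight lane corridor as a stand-in for map geometry
grid_map = generate_grid_map(sensor_occupancy, map_lane_mask)  # shape (3, 128, 128)
```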

5. The vehicle of Claim 1, wherein the computer-executable instructions, when executed, further cause the processor to train the deep neural network using a training set of grid maps.

6. The vehicle of Claim 1, wherein the computer-executable instructions, when executed, further cause the processor to:

detect a second vehicle stop at a second time instant before the first time instant; and

determine that a velocity of the vehicle increased by at least a velocity ripple value between the second time instant and the first time instant.
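
Claim 6 (and the parallel Claim 15) separates two successive stops by requiring the velocity to rise by at least a velocity ripple value in between, which filters out creeping forward within a single queue. A minimal sketch of that check follows; the 1.0 m/s default ripple value and the function name are assumptions.

```python
def is_distinct_stop(velocity_trace: list[float], velocity_ripple_value: float = 1.0) -> bool:
    """Decide whether the latest stop is distinct from the previous one.

    velocity_trace holds velocities (m/s) sampled between the earlier stop (second
    time instant) and the current stop (first time instant). The stop only counts as
    a new event if the velocity increased by at least the ripple value in between.
    The 1.0 m/s default is an assumed value; the claims leave it unspecified.
    """
    if not velocity_trace:
        return False
    return max(velocity_trace) - min(velocity_trace) >= velocity_ripple_value


# The vehicle crept from 0 to 0.3 m/s and stopped again -> not a distinct stop.
print(is_distinct_stop([0.0, 0.2, 0.3, 0.1, 0.0]))  # False
# The vehicle accelerated to 4 m/s before stopping again -> a distinct stop.
print(is_distinct_stop([0.0, 2.5, 4.0, 1.0, 0.0]))  # True
```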

7. The vehicle of Claim 1, wherein the situation of the vehicle is at least one of whether the vehicle is or is not at an intersection, whether another vehicle is or is not directly in front of the vehicle, whether an object other than the another vehicle is or is not directly in front of the vehicle, whether the vehicle is or is not adjacent to a road marking, whether the vehicle is or is not in the process of turning, whether the vehicle is or is not in the process of changing lanes, whether a bus is or is not present in front of the vehicle and at a bus stop, whether a pedestrian is or is not present behind, in front of, or to the side of the vehicle, whether a bicyclist is or is not present behind, in front of, or to the side of the vehicle, or whether a road hazard is or is not present.
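
Claim 7 enumerates the situations the on-board model may report. For illustration only, those situations could be represented as an enumeration such as the following; the identifier names paraphrase the claim language and are not part of the claim.

```python
from enum import Enum, auto


class Situation(Enum):
    """Situation labels paraphrased from Claim 7; the identifiers are illustrative."""
    AT_INTERSECTION = auto()
    VEHICLE_DIRECTLY_AHEAD = auto()
    OBJECT_DIRECTLY_AHEAD = auto()
    ADJACENT_TO_ROAD_MARKING = auto()
    TURNING = auto()
    CHANGING_LANES = auto()
    BUS_AHEAD_AT_BUS_STOP = auto()
    PEDESTRIAN_NEARBY = auto()
    BICYCLIST_NEARBY = auto()
    ROAD_HAZARD_PRESENT = auto()
```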

8. The vehicle of Claim 1, wherein the vehicle is at least one of an autonomous vehicle, a vehicle that provides one or more driver-assist features, or a vehicle used to offer location-based services.

9. A method implemented by a vehicle, the method comprising:

detecting a vehicle stop at a first time instant using velocity data measured by the vehicle;

determining a location of the vehicle at the first time instant;

determining, based in part on execution of an artificial intelligence engine running on the vehicle, that a cause of the detected vehicle stop is the vehicle arriving at an unmarked intersection;

generating virtual stop line data in response to determining that the cause of the detected vehicle stop is the vehicle arriving at the unmarked intersection; and

transmitting the virtual stop line data to a server over a network.

10. The method of Claim 9, wherein the location of the vehicle comprises at least one of geographical coordinates of the vehicle at the first time instant or a lane on a road in which the vehicle was positioned at the first time instant.

11. The method of Claim 9, wherein determining that a cause of the detected vehicle stop is the vehicle arriving at an unmarked intersection further comprises:

generating a grid map;

applying the grid map as an input to the artificial intelligence engine;

determining a situation of the vehicle based on an output of the artificial intelligence engine; and

determining the cause based on at least one of the determined situation or map data.

12. The method of Claim 11, wherein generating a grid map further comprises:

obtaining map data and at least one of light detection and ranging (LiDAR) data, radar data, or camera data; and

generating an image in which information derived from the map data is laid over information derived from at least one of the LiDAR data, the radar data, or the camera data to form the grid map.

13. The method of Claim 9, wherein the artificial intelligence engine is one of a deep neural network or a machine learning model.

14. The method of Claim 9, further comprising training the artificial intelligence engine using a training set of grid maps.

15. The method of Claim 9, wherein detecting a vehicle stop further comprises:

detecting a second vehicle stop at a second time instant before the first time instant; and

determining that a velocity of the vehicle increased by at least a velocity ripple value between the second time instant and the first time instant.

16. The method of Claim 9, wherein the vehicle is at least one of an autonomous vehicle, a vehicle that provides one or more driver-assist features, or a vehicle used to offer location-based services.

17. Non-transitory, computer-readable storage media comprising computer-executable instructions for identifying a virtual stop line, wherein the computer-executable instructions, when executed by a vehicle, cause the vehicle to:

detect a vehicle stop at a first time instant using velocity data measured by the vehicle;

determine a location of the vehicle at the first time instant;

determine, based in part on execution of an artificial intelligence engine running on the vehicle, that a cause of the detected vehicle stop is the vehicle arriving at an unmarked intersection;

generate virtual stop line data in response to determining that the cause of the detected vehicle stop is the vehicle arriving at the unmarked intersection; and

transmit the virtual stop line data external to the vehicle.

18. The non-transitory, computer-readable storage media of Claim 17, wherein the location of the vehicle comprises at least one of geographical coordinates of the vehicle at the first time instant or a lane on a road in which the vehicle was positioned at the first time instant.

19. The non-transitory, computer-readable storage media of Claim 17, wherein the computer-executable instructions, when executed, further cause the vehicle to:

generate a grid map;

apply the grid map as an input to the artificial intelligence engine;

determine a situation of the vehicle based on an output of the artificial intelligence engine; and

determine the cause based on at least one of the determined situation or map data.

20. The non-transitory, computer-readable storage media of Claim 17, wherein the artificial intelligence engine is one of a deep neural network or a machine learning model.