Lidar or pure computer vision for autonomous driving: which one is better?

Lidar and pure machine vision have always been two distinct directions in autonomous driving technology.

In practical applications, both have their own advantages and disadvantages in terms of data format, accuracy, and cost.

Basic principles of autonomous driving technology

An autonomous driving system can be divided into a perception layer, a decision-making layer, and an execution layer.

The perception layer captures vehicle location information and external environment information through various hardware sensors.

The decision-making layer, the “brain” of the system, models the environment based on the information from the perception layer, builds an understanding of the overall situation, makes decisions, and then issues control signals to the vehicle for execution.

Finally, the execution layer converts the decision-making layer's signals into the vehicle's actions.
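
To make the three-layer split concrete, here is a minimal Python sketch of the perception, decision, and execution layers described above. All class names, fields, and the braking rule are illustrative assumptions, not part of any real system.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    # Assumed perception output: distance to the nearest obstacle,
    # e.g. fused from lidar, radar, and camera data.
    obstacle_distance_m: float

@dataclass
class Command:
    brake: bool

def decide(perception: Perception) -> Command:
    """Decision layer: a toy rule -- brake if an obstacle is closer than 10 m."""
    return Command(brake=perception.obstacle_distance_m < 10.0)

def execute(command: Command) -> None:
    """Execution layer: turn the command into an actuator action."""
    print("Applying brakes" if command.brake else "Maintaining speed")

# Perception -> decision -> execution in one pass
execute(decide(Perception(obstacle_distance_m=6.5)))  # Applying brakes
```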

Both the lidar route and the pure machine vision route are ways for an autonomous vehicle to perceive its environment; the difference lies in how they are implemented.

The machine vision route is dominated by cameras, supplemented by millimeter-wave radar, ultrasonic radar, low-cost lidar, and so on. The lidar route is dominated by lidar, supplemented by millimeter-wave radar, ultrasonic sensors, cameras, and so on.

The technical principles of lidar and cameras

Lidar works by using lasers for detection and ranging. It is usually mounted on the roof of a car and can scan 360 degrees. Inside the lidar, each group of components includes a transmitting unit and a receiving unit.

The ranging principle is similar to using a laser to measure the distance between the Earth and the Moon: it is based on the time between laser emission and return. A laser diode emits pulsed light, part of which is reflected back after hitting the target. A photon detector installed near the diode picks up the returned signal, and the distance to the target can be calculated from the time difference.
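
As a minimal sketch of this time-of-flight calculation, the distance is simply the speed of light times the measured round-trip time, divided by two (out and back). The example timing value below is illustrative only.

```python
# Pulsed time-of-flight ranging: distance from the emission-to-return delay.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target; the pulse travels out and back, hence the /2."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a return detected 400 nanoseconds after emission
print(tof_distance(400e-9))  # ~59.96 m
```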

Once the pulsed ranging system is running, a large number of point clouds can be collected. If a target is present, it appears as a silhouette in the point cloud. A three-dimensional model of the surrounding environment can be generated from the point cloud; the higher the point cloud density, the clearer the image.
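
To illustrate how such a point cloud is formed, the sketch below converts raw returns (range plus beam angles) into Cartesian 3D points. The angle values and array layout are assumptions for the example, not the format of any particular lidar.

```python
import numpy as np

def returns_to_points(ranges_m, azimuth_rad, elevation_rad):
    """Convert range/azimuth/elevation arrays into an (N, 3) xyz point cloud."""
    r = np.asarray(ranges_m)
    az = np.asarray(azimuth_rad)
    el = np.asarray(elevation_rad)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# One sweep with three returns at 10 m, 12 m, and 15 m
pts = returns_to_points([10.0, 12.0, 15.0],
                        np.radians([0.0, 1.0, 2.0]),
                        np.radians([-1.0, -1.0, -1.0]))
print(pts.shape)  # (3, 3)
```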

The two most important attributes of lidar can be considered to be range and accuracy. Unlike the camera, lidar is “active vision”: it actively probes the surrounding environment, so the intensity of ambient light does not matter and it can work day and night. At the same time, because its laser beams are more concentrated, it has higher detection accuracy than millimeter-wave radar.

The working principle of a camera is similar to the human eye: light reflected by an object is imaged on the sensor through the lens. Its disadvantages are that it struggles to measure distance and is greatly affected by ambient light. At the same time, one of its great advantages is that people can intuitively understand the content captured by the camera, and it is very easy to classify objects with it, that is, visual recognition.
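
The distance limitation follows from the imaging geometry itself. The pinhole-style sketch below (with an illustrative focal length and example points) shows that projection through a lens discards depth: two points at different distances along the same ray land on the same pixel, which is why a single camera cannot directly measure range.

```python
def project(point_xyz, focal_length_px=800.0):
    """Project a camera-frame 3D point (x, y, z) to pixel offsets (u, v)."""
    x, y, z = point_xyz
    return (focal_length_px * x / z, focal_length_px * y / z)

# Two points on the same ray but at different depths project identically:
print(project((1.0, 0.5, 10.0)))   # (80.0, 40.0)
print(project((2.0, 1.0, 20.0)))   # (80.0, 40.0)
```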

Author: Yoyokuo