With the continuous improvement of sensing, intelligence, and computing technology, intelligent mobile robots are increasingly taking over roles once performed by humans in production. So what positioning technologies do mobile robots rely on? In summary, mobile robots currently use five main positioning technologies.

- 1 -

Mobile robot ultrasonic navigation and positioning technology

The working principle of ultrasonic navigation and positioning is similar to that of laser and infrared ranging. Typically, the transmitting probe of the ultrasonic sensor emits ultrasonic waves, which travel through the medium, strike an obstacle, and return to the receiving device.

By receiving the echo of the ultrasonic wave it emitted itself, and using the time difference between emission and echo reception together with the propagation speed, the sensor can calculate the propagation distance S, which is the distance from the obstacle to the robot:

S = vT/2

where T is the time difference between ultrasonic transmission and echo reception, and v is the speed of the ultrasonic wave in the medium.
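
As a minimal sketch of this calculation (the helper name and the 343 m/s speed of sound in air are illustrative assumptions, not from the text), the formula translates directly into code:

```python
# Hypothetical helper illustrating the time-of-flight formula S = vT/2.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C (assumed medium)

def ultrasonic_distance(echo_time_s: float, wave_speed: float = SPEED_OF_SOUND_M_S) -> float:
    """Distance to the obstacle, given the transmit-to-echo time difference T."""
    return wave_speed * echo_time_s / 2.0

# Example: an echo received 5.8 ms after transmission is roughly 1 m away.
print(ultrasonic_distance(0.0058))  # ~0.99 m
```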

Of course, many mobile robot navigation and positioning systems use separate transmitting and receiving devices: multiple receivers are arranged around the environment map, while the transmitting probe is mounted on the mobile robot.

In mobile robot navigation and positioning, the defects of the ultrasonic sensor itself, such as specular reflection and a limited beam angle, make it difficult to obtain complete information about the surrounding environment. Therefore, an ultrasonic sensing system composed of multiple sensors is usually used to build the corresponding environment model. The information collected by the sensors is transmitted to the mobile robot's control system over a serial link, and the control system then applies appropriate algorithms to the collected signals and the established mathematical model to determine the robot's position and surroundings.
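
A hedged sketch of what such a multi-sensor environment model might look like, assuming a ring of eight sensors at evenly spaced mounting angles (the sensor count and layout are illustrative assumptions):

```python
import math

# Sketch: a ring of ultrasonic sensors feeding a simple environment model.
# Each echo is converted into an obstacle point in the robot's own frame.
NUM_SENSORS = 8  # assumed sensor count
BEARINGS = [2 * math.pi * i / NUM_SENSORS for i in range(NUM_SENSORS)]

def obstacle_points(ranges_m):
    """Convert one range reading per sensor into (x, y) obstacle points.

    ranges_m[i] is the distance S measured by sensor i; None means no echo.
    """
    points = []
    for bearing, r in zip(BEARINGS, ranges_m):
        if r is not None:
            points.append((r * math.cos(bearing), r * math.sin(bearing)))
    return points

# Example: obstacles 1.2 m ahead (sensor 0) and 0.6 m to one side (sensor 2).
print(obstacle_points([1.2, None, 0.6, None, None, None, None, None]))
```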

Because ultrasonic sensors are cheap, collect information quickly, and offer high range resolution, they have long been widely used in mobile robot navigation and positioning. Moreover, they require no complex imaging equipment to gather environmental information, so ranging is fast and real-time performance is good.

At the same time, ultrasonic sensors are not easily affected by external conditions such as weather, ambient light, shadows of obstacles, or surface roughness. Ultrasonic navigation and positioning is therefore widely used in the perception systems of all kinds of mobile robots.

- 2 -

Mobile robot visual navigation and positioning technology

In visual navigation and positioning systems, the most widely used approach at home and abroad is local-vision navigation with cameras mounted on the robot. In this method, the control equipment and sensing devices are carried on the robot body, and high-level decisions such as image recognition and path planning are made by the on-board control computer.

A visual navigation and positioning system mainly consists of cameras (or CCD image sensors), video-signal digitizing equipment, a DSP-based fast signal processor, a computer and its peripherals, and so on. Many robot systems now use CCD image sensors, whose basic element is a line of silicon imaging cells: photosensitive elements and a charge-transfer device are arranged on a substrate, and by transferring charges sequentially, the video signals of many pixels are read out one after another in time-shared fashion. For example, an area CCD sensor can capture images at resolutions from 32×32 up to 1024×1024 pixels.

The working principle of a visual navigation and positioning system is, simply put, to process the environment around the robot optically. A camera first collects image information; the collected information is compressed and then fed back to a learning subsystem composed of neural networks and statistical methods; the learning subsystem then links the collected image information to the robot's actual position, completing the robot's autonomous navigation and positioning.
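
As a toy illustration of such a learning subsystem (not any particular system's implementation), the sketch below links compressed image descriptors to the robot poses at which they were recorded, using a nearest-neighbour lookup; the descriptor function and class names are assumptions:

```python
import numpy as np

def descriptor(image: np.ndarray) -> np.ndarray:
    """Crude image compression: subsample to roughly an 8x8 thumbnail.

    A real system would use a neural network or engineered features;
    this assumes all images share one size so descriptors are comparable.
    """
    h, w = image.shape[:2]
    thumb = image[:: max(1, h // 8), :: max(1, w // 8)]
    return thumb.astype(np.float32).ravel()[:64]

class VisualLocalizer:
    def __init__(self):
        self.descriptors, self.poses = [], []

    def learn(self, image, pose_xy_theta):
        """Associate an image with the robot pose where it was taken."""
        self.descriptors.append(descriptor(image))
        self.poses.append(pose_xy_theta)

    def localize(self, image):
        """Return the stored pose whose image looks most similar."""
        d = descriptor(image)
        dists = [np.linalg.norm(d - ref) for ref in self.descriptors]
        return self.poses[int(np.argmin(dists))]
```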

- 3 -

GPS (Global Positioning System)

Nowadays, intelligent robot navigation and positioning generally uses the pseudo-range differential dynamic positioning method: a reference receiver and a dynamic receiver both observe four GPS satellites, and the robot's three-dimensional position coordinates at a given moment are obtained by a certain algorithm. Differential dynamic positioning eliminates satellite clock error, and for users within 1000 km of the reference station it can also eliminate errors caused by the troposphere, which significantly improves dynamic positioning accuracy.
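
A hedged sketch of the correction step in such a scheme, assuming simplified ECEF-style coordinates in metres and ignoring receiver clock bias (the function names and simplifications are illustrative):

```python
import numpy as np

def pseudorange_corrections(station_pos, sat_positions, station_pseudoranges):
    """Per-satellite correction = true geometric range - measured pseudo-range.

    The reference station knows its own position exactly, so any difference
    between geometry and measurement is attributed to shared error sources.
    """
    corrections = []
    for sat, measured in zip(sat_positions, station_pseudoranges):
        true_range = np.linalg.norm(np.asarray(sat) - np.asarray(station_pos))
        corrections.append(true_range - measured)
    return corrections

def corrected_rover_pseudoranges(rover_pseudoranges, corrections):
    """Apply the station's corrections before the rover solves for position."""
    return [r + c for r, c in zip(rover_pseudoranges, corrections)]
```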

However, in mobile navigation, the positioning accuracy of a mobile GPS receiver is affected by satellite signal conditions and the road environment, as well as by clock error, propagation error, receiver noise, and many other factors. GPS alone therefore suffers from low positioning accuracy and poor reliability, so robot navigation applications usually supplement GPS data with a magnetic compass and an optical encoder. In addition, GPS navigation is unsuitable for indoor or underwater robots and for robot systems that require high positional accuracy.
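
As a minimal sketch of this kind of supplementary fusion, assuming a fixed-gain complementary filter rather than any particular robot's estimator (the gain value is an illustrative assumption):

```python
GPS_GAIN = 0.2  # assumed fixed gain: how strongly a new GPS fix pulls the estimate

def fuse_position(odometry_estimate, gps_fix, gain=GPS_GAIN):
    """One fusion step: trust odometry short-term, GPS long-term."""
    x_odo, y_odo = odometry_estimate
    x_gps, y_gps = gps_fix
    return (x_odo + gain * (x_gps - x_odo),
            y_odo + gain * (y_gps - y_odo))

# Example: dead reckoning from the encoder has drifted 1 m east of the GPS fix.
print(fuse_position((11.0, 5.0), (10.0, 5.0)))  # -> (10.8, 5.0)
```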

- 4 -

Light reflection navigation and positioning technology for mobile robots

Typical light-reflection navigation and positioning methods mainly use laser or infrared sensors to measure distance; both rely on reflected light for navigation and positioning.

The laser global positioning system is generally composed of a laser rotating mechanism, a mirror, a photoelectric receiving device, and a data acquisition and transmission device.

The laser is emitted through the rotating mirror mechanism. When it sweeps across a cooperative landmark made of retroreflectors, the reflected light is treated as a detection signal by the photoelectric receiver, which triggers the data acquisition program to read the encoder of the rotating mechanism (the measured angle to the target) and send it to the host computer for processing. From the known positions of the landmarks and the detected angles, the sensor's current position and heading in the landmark coordinate system can be calculated, achieving further navigation and positioning.
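
A hedged sketch of that final calculation, assuming bearing-only measurements to at least three known landmarks and a Gauss-Newton least-squares solver (the landmark layout, solver choice, and convergence from a crude initial guess are all illustrative assumptions):

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def locate(landmarks, bearings, pose0=(0.0, 0.0, 0.0), iters=20):
    """Solve for the sensor pose (x, y, theta) from bearings to known landmarks.

    landmarks: list of (x, y) map positions; bearings: measured angles (rad).
    """
    x, y, th = pose0
    for _ in range(iters):
        residuals, J = [], []
        for (lx, ly), b in zip(landmarks, bearings):
            dx, dy = lx - x, ly - y
            r2 = dx * dx + dy * dy
            residuals.append(wrap(np.arctan2(dy, dx) - th - b))
            # Partial derivatives of the predicted bearing w.r.t. (x, y, theta).
            J.append([dy / r2, -dx / r2, -1.0])
        step, *_ = np.linalg.lstsq(np.array(J), -np.array(residuals), rcond=None)
        x, y, th = x + step[0], y + step[1], th + step[2]
    return x, y, th

# Toy check: bearings a sensor at (2, 1) with heading 0.3 rad would measure.
lms = [(10.0, 0.0), (0.0, 10.0), (-10.0, -5.0)]
truth = (2.0, 1.0, 0.3)
bearings = [wrap(np.arctan2(ly - truth[1], lx - truth[0]) - truth[2]) for lx, ly in lms]
print(locate(lms, bearings))  # should recover approximately (2.0, 1.0, 0.3)
```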

Laser ranging offers a narrow beam, good collimation, little scattering, and high angular resolution in the ranging direction. At the same time, it is strongly affected by environmental factors, so how to denoise the collected signals is a significant problem when using laser ranging. In addition, laser ranging has blind zones, so it is difficult to achieve navigation and positioning with lasers alone; in industry it is generally used for field inspection within a specific range, such as detecting pipeline cracks.

Infrared sensing technology is often used in obstacle-avoidance systems for multi-joint robots, forming a large-area "sensitive skin" that covers the surface of the robot arm and can detect various objects encountered while the arm is moving.

A typical infrared sensor includes a solid-state light-emitting diode that emits infrared light and a solid-state photodiode that acts as a receiver. The infrared emitter sends out a modulated signal, and the infrared photodiode receives the modulated infrared signal reflected by the target; signal modulation and special infrared filters guarantee the elimination of ambient infrared interference. Assuming the output signal Vo represents the voltage corresponding to the reflected light intensity, Vo is a function of the distance between the probe and the workpiece:

Vo = f(x, p)

where x is the distance between the probe and the workpiece, and p is the reflection coefficient of the workpiece, which depends on the colour and roughness of the target surface.

When the workpieces are similar targets with the same p value, x and Vo correspond one to one, and x can be obtained by interpolating experimental proximity-measurement data for various targets. In this way an infrared sensor can measure the robot's distance from the target object, and combined with other information processing methods the mobile robot can be navigated and positioned.
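
A minimal sketch of this interpolation step, with an invented calibration table for a single workpiece type, i.e. one fixed p (all values are illustrative):

```python
import numpy as np

# Calibration pairs (distance in mm, measured Vo in volts) for a fixed p.
CAL_X_MM = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
CAL_VO_V = np.array([4.5, 2.8, 1.9, 1.3, 0.9])  # Vo falls as x grows

def distance_from_vo(vo: float) -> float:
    """Invert Vo = f(x, p) by linear interpolation over the calibration data.

    np.interp needs ascending sample points, so interpolate over Vo reversed.
    """
    return float(np.interp(vo, CAL_VO_V[::-1], CAL_X_MM[::-1]))

print(distance_from_vo(2.35))  # midway between the 20 mm and 30 mm samples: 25.0
```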

Although infrared sensor positioning also offers high sensitivity, simple structure, and low cost, infrared sensors have high angular resolution but poor distance resolution, so in mobile robots they are mostly used as proximity sensors to detect nearby or suddenly appearing obstacles so that the robot can stop in time.

- 5 -

SLAM technology

Most of the leading service robot companies in the industry have adopted SLAM technology, and SLAMTEC in particular holds a distinctive advantage in it. So what exactly is SLAM? Simply put, SLAM refers to the complete process by which a robot localizes itself, builds a map, and plans paths in an unknown environment.

SLAM (Simultaneous Localization and Mapping) was first proposed in 1988 and has since been used mainly to study how to make robot movement intelligent. In a completely unknown indoor environment, a robot equipped with core sensors such as lidar can use SLAM to build a map of the indoor environment and walk autonomously.

The SLAM problem can be described as follows: a robot starts moving from an unknown position in an unknown environment, localizes itself during the motion from its own position estimate and sensor data, and simultaneously builds an incremental map.
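
A toy, runnable illustration of this loop on a one-dimensional corridor with a single wall landmark; the noise levels and the averaging-based "estimator" are deliberately simplistic assumptions, not a real SLAM algorithm:

```python
import random

random.seed(0)
true_wall, true_pos = 10.0, 0.0          # ground truth, unknown to the robot
est_pos, est_wall, wall_sightings = 0.0, None, 0

for step in range(8):
    # Motion: command 1 m forward; the wheels slip a little.
    true_pos += 1.0 + random.gauss(0, 0.05)
    est_pos += 1.0                       # predict (dead reckoning)

    # Observation: noisy range to the wall.
    measured_range = true_wall - true_pos + random.gauss(0, 0.02)

    if est_wall is None:
        est_wall = est_pos + measured_range          # mapping: first sighting places the wall
    else:
        # Localize against the map so far, then refine the map (running average).
        est_pos = 0.5 * est_pos + 0.5 * (est_wall - measured_range)
        wall_sightings += 1
        est_wall += (est_pos + measured_range - est_wall) / (wall_sightings + 1)

    print(f"step {step}: est_pos={est_pos:.2f} (true {true_pos:.2f}), wall={est_wall:.2f}")
```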

The main implementations of SLAM include VSLAM, Wifi-SLAM, and Lidar SLAM.

1. VSLAM (Visual SLAM)

Refers to the use of cameras and depth cameras such as Kinect for navigation and exploration in indoor environments. Its working principle is the same as the visual navigation and positioning described above: images of the robot's surroundings are collected and compressed, then fed back to a learning subsystem composed of neural networks and statistical methods, which links the image information to the robot's actual position to achieve autonomous navigation and positioning.

However, indoor VSLAM is still at the research stage and far from practical application: on the one hand, the computational load is heavy and places high demands on the robot's hardware; on the other, the maps VSLAM produces (mostly point clouds) cannot be used directly for robot path planning, so further exploration and research are needed.

2. Wifi-SLAM

Refers to the use of the multiple sensors in a smartphone, including Wifi, GPS, gyroscope, accelerometer, and magnetometer, and drawing accurate indoor maps from their data with algorithms such as machine learning and pattern recognition. The provider of this technology was acquired by Apple in 2013. Whether Apple has applied Wifi-SLAM to the iPhone, effectively turning every iPhone user into a small mapping robot, is not yet known. Undoubtedly, more precise positioning benefits more than maps: it would make every application that relies on geographic location (LBS) more accurate.

3. Lidar SLAM

Refers to the use of lidar as the sensor that supplies map data, allowing the robot to achieve simultaneous localization and map construction. As far as the technology itself is concerned, it has been verified over many years and is quite mature, but the bottleneck of lidar's high cost urgently needs to be solved.

Google's driverless cars use this technology. The lidar mounted on the roof comes from Velodyne in the United States and costs as much as $70,000. Spinning at high speed, it emits 64 laser beams; from the time each laser pulse takes to hit a surrounding object and return, the distance between the car body and that object is calculated. The computer system then draws a fine 3D terrain map from these data and combines it with high-resolution maps to generate the data models used by the on-board computer. The lidar accounts for half the cost of the whole vehicle, which may be one reason Google's driverless cars have not reached mass production.
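
A small sketch of that distance-to-geometry step, converting one laser return (range plus the beam's angles at the moment of firing) into a 3D point in the sensor frame; the angle and range values are illustrative, and a real 64-beam unit has fixed per-beam elevations while spinning in azimuth:

```python
import math

def beam_to_point(range_m, azimuth_rad, elevation_rad):
    """Convert one lidar return from spherical to Cartesian coordinates."""
    xy = range_m * math.cos(elevation_rad)
    return (xy * math.cos(azimuth_rad),         # x: forward
            xy * math.sin(azimuth_rad),         # y: left
            range_m * math.sin(elevation_rad))  # z: up

# One return: 15 m away, 30 degrees to the left, beam tilted 2 degrees down.
print(beam_to_point(15.0, math.radians(30), math.radians(-2)))
```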

Lidar is highly directional, which effectively guarantees navigation accuracy and suits indoor environments well. However, Lidar SLAM has not performed well in the field of indoor robot navigation, chiefly because lidar is still too expensive.
