Journal of Theoretical & Computational Science
Open Access

ISSN: 2376-130X

Commentary - (2023) Volume 9, Issue 1

Real-Time Navigation using Computing and Terrain Identification

Gaoxiang Wang*
 
*Correspondence: Gaoxiang Wang, Department of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China


Description

Terrain identification is an essential capability for autonomous mobile robots that must navigate and carry out operations such as autonomous patrol, automated driving, and backcountry rescue in challenging, unstructured environments. By detecting diverse terrain surfaces in real time and perceiving terrain texture information, a mobile robot can dynamically modify its initially planned trajectory for safer and more efficient navigation. To support real-time applications, the robot must sense the terrain accurately, quickly, and at low computational cost.
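
As a concrete illustration of such dynamic trajectory adjustment, the minimal Python sketch below clamps the planned speed and scales a replanning cost weight according to the detected terrain class. The terrain classes, speed limits, and cost weights are hypothetical values chosen for illustration, not figures from the literature discussed here.

```python
# Hypothetical terrain profiles: speed limits (m/s) and planner cost weights.
# Values are illustrative assumptions, not published parameters.
TERRAIN_PROFILES = {
    "asphalt": {"max_speed": 1.5, "cost_weight": 1.0},
    "grass":   {"max_speed": 1.0, "cost_weight": 1.5},
    "gravel":  {"max_speed": 0.6, "cost_weight": 2.0},
    "sand":    {"max_speed": 0.3, "cost_weight": 4.0},  # soft, risk of sinking
}

def adjust_plan(planned_speed: float, terrain_class: str) -> tuple[float, float]:
    """Clamp the planned speed and return the cost weight for replanning."""
    # Unknown terrain: fall back to a conservative, high-cost profile.
    profile = TERRAIN_PROFILES.get(terrain_class,
                                   {"max_speed": 0.2, "cost_weight": 8.0})
    return min(planned_speed, profile["max_speed"]), profile["cost_weight"]

speed, weight = adjust_plan(1.2, "sand")  # -> (0.3, 4.0)
```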

Convolutional neural network-based techniques have been widely proposed and studied for accurate terrain detection using data from a single sensor type or from sensor fusion. Depending on the sensing modality, these techniques fall into two primary categories: exteroceptive-based and proprioceptive-based. The most common exteroceptive approach is vision-based, which lets robots distinguish terrains in advance and at greater distances, avoiding undesirable outcomes such as sinking into soft sand. Researchers have put forward a stereo-based approach to detect obstacles of varied sizes accurately and efficiently in all-terrain scenarios. Other efforts have used LIDAR data for terrain classification to support the safe operation of mobile robots in difficult off-road areas, and a self-supervised method combining LIDAR and visual sensing has been proposed for identifying terrain surfaces during autonomous navigation in a forest. Although these methods have produced good results, the visual appearance of terrain can be altered by changes in light intensity at different times, by visibility conditions such as snow, smoke, or fog, and by terrain coverings such as water, dirt, fallen leaves, or snow.
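
For readers unfamiliar with the vision-based pipeline, the following is a minimal PyTorch sketch of a CNN terrain-image classifier of the kind these exteroceptive methods build on. The architecture, input size, and class count are illustrative assumptions, not a reconstruction of any specific published model.

```python
# Minimal CNN terrain-image classifier sketch (illustrative only).
import torch
import torch.nn as nn

class TerrainCNN(nn.Module):
    def __init__(self, num_classes: int = 5):  # e.g. asphalt/grass/gravel/sand/snow
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature vector
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = TerrainCNN()(torch.randn(1, 3, 64, 64))  # one 64x64 RGB terrain patch
```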

To compensate for the limitations of vision-based terrain classification, other approaches achieve high-accuracy classification independent of the terrain's visual appearance by exploiting proprioceptive information produced by the robot's dynamic contact with the terrain, such as vibration, wheel-terrain audio, and tactile signals. These techniques train neural networks for terrain classification on vibration features extracted directly from the time domain, from the frequency domain, or from the power spectrum obtained via the Fourier transform; other statistical techniques have also shown promising results in a variety of settings. However, the robot body's self-vibration frequently degrades the proprioceptive signals gathered for terrain classification, and with them the robot's perceptual acuity.
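
The power-spectrum feature extraction mentioned above can be sketched in a few lines: a window of vertical-acceleration samples is transformed with the Fourier transform and summarized into band-averaged log-power features suitable as classifier input. The window length, band count, and windowing choice here are assumptions for illustration.

```python
# Frequency-domain vibration features for terrain classification (sketch).
import numpy as np

def vibration_features(accel_z: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Band-averaged log power spectrum of one vibration window."""
    window = accel_z - accel_z.mean()                 # remove gravity/DC offset
    tapered = window * np.hanning(len(window))        # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(tapered)) ** 2      # power spectrum
    bands = np.array_split(spectrum, n_bands)         # coarse frequency bands
    return np.log1p(np.array([b.mean() for b in bands]))

features = vibration_features(np.random.randn(256))   # e.g. 2.56 s at 100 Hz
```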

Although these active research fields have shown that each modality can distinguish various terrain types, the classification and ambiguity problems of the approaches outlined above remain. Multi-sensor information fusion-based terrain classification has therefore been investigated; these techniques fall into two basic categories, exteroceptive-exteroceptive sensor fusion and exteroceptive-proprioceptive sensor fusion. A self-learning framework developed by researchers uses a radar classifier to assign training labels to a vision-based classification module, so its performance may still be affected by light levels. Terrain classification generally gains environmental robustness and performance by combining exteroceptive modalities (such as images, LIDAR, and radar) with proprioceptive modalities (such as acceleration and audio); for instance, one self-supervised terrain classification framework combined the visual and audio modalities, using an audio feature to self-label terrain images for semantic segmentation.
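
A simple form of exteroceptive-proprioceptive fusion is decision-level (late) fusion, sketched below: the per-class probabilities of a visual classifier and an audio classifier are combined with a weighted average. The two-classifier setup and the weight value are illustrative assumptions, not the designs of the cited frameworks.

```python
# Late (decision-level) fusion of visual and audio terrain classifiers (sketch).
import numpy as np

def fuse_predictions(p_visual: np.ndarray, p_audio: np.ndarray,
                     w_visual: float = 0.6) -> int:
    """Weighted average of two per-class probability vectors; returns class id."""
    fused = w_visual * p_visual + (1.0 - w_visual) * p_audio
    return int(np.argmax(fused))

# e.g. vision is unsure (fog) but audio is confident: fusion recovers class 1.
cls = fuse_predictions(np.array([0.4, 0.35, 0.25]), np.array([0.1, 0.8, 0.1]))
```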

An intriguing application of proprioceptive sensing is the use of artificial whiskers for mobile robot navigation, object detection, obstacle shape recognition, and terrain surface sensing, motivated by the way animals use their whiskers to perceive environmental information and navigate in the dark. Researchers have demonstrated that a mobile robot can use static antennae as local detectors, overcoming the limitations of optical approaches and estimating the direction and location of sudden obstacles during rapid motion. They have also proposed a method for recognizing obstacle shapes that employs a tactile whisker to gather contact torque data and reconstruct an object's 3D shape.
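
To make the whisker idea concrete, the sketch below localizes a contact point from base torque measurements under a deliberately simplified rigid-cantilever model: a point contact at distance d from the base produces a base moment M = F * d, so d = M / F. This model and the function names are simplifying assumptions for illustration, not the cited researchers' method.

```python
# Whisker contact localization under a rigid-cantilever assumption (sketch).
import numpy as np

def contact_point(base_pos: np.ndarray, whisker_dir: np.ndarray,
                  moment: float, normal_force: float) -> np.ndarray:
    """3D contact point from whisker base pose and measured moment/force."""
    d = moment / max(normal_force, 1e-9)  # contact distance along whisker (m)
    return base_pos + d * whisker_dir / np.linalg.norm(whisker_dir)

# Sweeping the whisker and accumulating such points yields a sparse point
# cloud of the obstacle surface.
p = contact_point(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                  moment=0.02, normal_force=0.5)  # -> 4 cm along the whisker
```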

However, most of the state-of-the-art algorithms discussed above require a substantial amount of training data, which is often difficult to collect, and building a training dataset requires hand labeling, which is impractical in complex, uncharted environments. These techniques also demand considerable computational power, for example for Fourier transformation and online neural network training, and their classification accuracy tends to suffer when the training dataset lacks diversity. As a result, it remains challenging for practical robotic applications to identify and predict the surface characteristics of unknown terrain quickly and cheaply in complex, extreme environments such as Mars, especially with IMU- and vision-based methods that demand laborious raw-data processing.

Author Info

Gaoxiang Wang*
 
Department of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
 

Citation: Wang G (2023) Real-Time Navigation using Computing and Terrain Identification. J Theor Comput Sci. 9:176.

Received: 01-Mar-2023, Manuscript No. JTCO-23-23293; Editor assigned: 03-Mar-2023, Pre QC No. JTCO-23-23293 (PQ); Reviewed: 17-Mar-2023, QC No. JTCO-23-23293; Revised: 24-Mar-2023, Manuscript No. JTCO-23-23293 (R); Published: 31-Mar-2023, DOI: 10.35248/2376-130X.23.09.176

Copyright: © 2023 Wang G. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
