2022-05-11 Update
Improved-Flow Warp Module for Remote Sensing Semantic Segmentation
Authors:Yinjie Zhang, Yi Liu, Wei Guo
Remote sensing semantic segmentation aims to automatically assign each pixel of an aerial image a specific label. In this letter, we propose a new module, the improved-flow warp module (IFWM), which adjusts semantic feature maps across different scales for remote sensing semantic segmentation. The improved-flow warp module is applied along with the feature extraction process in the convolutional neural network. First, IFWM computes pixel offsets in a learnable way, which alleviates the misalignment of multi-scale features. Second, the offsets guide the up-sampling of low-resolution deep features to improve feature alignment, which boosts the accuracy of semantic segmentation. We validate our method on several remote sensing datasets, and the results prove its effectiveness.
PDF
Paper screenshots
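The abstract suggests a flow-based alignment step in the spirit of semantic-flow upsampling: a small convolution head predicts per-pixel offsets from the concatenated shallow and deep features, and the up-sampled deep features are warped with those offsets before fusion. Below is a minimal PyTorch sketch of that idea; the layer widths, kernel sizes, and the module name FlowWarpModule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of flow-based feature alignment (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowWarpModule(nn.Module):
    def __init__(self, high_ch, low_ch, mid_ch=64):
        super().__init__()
        # Project both feature maps to a common width before predicting flow.
        self.down_h = nn.Conv2d(high_ch, mid_ch, 1, bias=False)
        self.down_l = nn.Conv2d(low_ch, mid_ch, 1, bias=False)
        # 2-channel output = per-pixel (dx, dy) offsets.
        self.flow = nn.Conv2d(mid_ch * 2, 2, kernel_size=3, padding=1, bias=False)

    def forward(self, feat_high, feat_low):
        # feat_high: shallow, high-resolution features (B, Ch, H, W)
        # feat_low:  deep, low-resolution features   (B, Cl, h, w)
        h, w = feat_high.shape[-2:]
        low_up = F.interpolate(self.down_l(feat_low), size=(h, w),
                               mode='bilinear', align_corners=False)
        flow = self.flow(torch.cat([self.down_h(feat_high), low_up], dim=1))

        # Build a sampling grid shifted by the learned offsets, then warp the
        # full-channel up-sampled deep features so they align with feat_high.
        norm = torch.tensor([w, h], dtype=feat_high.dtype,
                            device=feat_high.device).view(1, 2, 1, 1)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat_high.device),
            torch.linspace(-1, 1, w, device=feat_high.device), indexing='ij')
        base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)      # (1, H, W, 2)
        grid = base_grid + (flow / norm).permute(0, 2, 3, 1) * 2    # normalized offsets
        feat_low_full = F.interpolate(feat_low, size=(h, w),
                                      mode='bilinear', align_corners=False)
        return F.grid_sample(feat_low_full, grid, align_corners=False)
```

The warped deep features can then be fused (e.g., summed or concatenated) with the shallow features in the decoder; how the paper fuses them is not specified in the abstract.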
Automatic Velocity Picking Using a Multi-Information Fusion Deep Semantic Segmentation Network
Authors:H. T. Wang, J. S. Zhang, Z. X. Zhao, C. X. Zhang, L. Li, Z. Y. Yang, W. F. Geng
Velocity picking, a critical step in seismic data processing, has been studied for decades. Although manual picking can produce accurate normal moveout (NMO) velocities from the velocity spectra of prestack gathers, it is time-consuming and becomes infeasible with the emergence of large amounts of seismic data. Numerous automatic velocity picking methods have thus been developed. In recent years, deep learning (DL) methods have produced good results on seismic data with medium and high signal-to-noise ratios (SNR). Unfortunately, there is still no picking method that automatically generates accurate velocities in low-SNR situations. In this paper, we propose a multi-information fusion network (MIFN) to estimate stacking velocity from the fused information of velocity spectra and stack gather segments (SGS). In particular, we transform the velocity picking problem into a semantic segmentation problem on velocity spectrum images. Meanwhile, the information provided by SGS is used as a prior in the network to assist segmentation. The experimental results on two field datasets show that the picking results of MIFN are stable and accurate for scenarios with medium and high SNR, and that it also performs well in low-SNR scenarios.
PDF
Paper screenshots
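A rough reading of the abstract is a two-branch segmentation network: one encoder for the velocity spectrum image, one for the SGS prior, fused before a segmentation head, followed by a post-processing step that turns the per-pixel mask into a velocity-time curve. The sketch below illustrates that pipeline under those assumptions; the channel counts and the pick_velocity post-processing are illustrative, not taken from the paper.

```python
# Hypothetical two-branch fusion segmentation sketch (assumed architecture).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class FusionSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.spec_enc = conv_block(1, 16)   # velocity spectrum branch
        self.sgs_enc = conv_block(1, 16)    # SGS prior branch
        self.fuse = conv_block(32, 32)
        self.head = nn.Conv2d(32, 1, 1)     # per-pixel "pick" probability

    def forward(self, spectrum, sgs):
        # Both inputs are (B, 1, T, V): time samples x trial velocities.
        x = torch.cat([self.spec_enc(spectrum), self.sgs_enc(sgs)], dim=1)
        return torch.sigmoid(self.head(self.fuse(x)))

def pick_velocity(prob_map, velocities):
    # Convert the segmentation map into a velocity-vs-time curve by taking,
    # for each time row, the velocity column with the highest probability.
    idx = prob_map.squeeze(1).argmax(dim=-1)   # (B, T)
    return velocities[idx]                     # (B, T) picked velocities
```

Here velocities would be a 1-D tensor of the trial velocities along the spectrum's horizontal axis; the paper's actual decoder depth and fusion strategy are not described in the abstract.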
Point Cloud Semantic Segmentation using Multi Scale Sparse Convolution Neural Network
Authors:Yunzheng Su
Point clouds are unordered, unstructured, and sparse. To address their non-structural nature, and given the excellent performance of convolutional neural networks in image processing, one solution is to extract point cloud features with two-dimensional convolutional neural networks: the three-dimensional information carried by the point cloud is projected to two dimensions, processed by a 2D convolutional neural network, and finally back-projected to three dimensions. However, projecting 3D information to 2D and back inevitably causes information loss and introduces category inconsistency in the back-projection stage. Another solution is voxel-based point cloud segmentation, which divides the point cloud into small grids; but because the point cloud is sparse, directly applying a 3D convolutional neural network inevitably wastes computing resources. In this paper, we propose a feature extraction module based on multi-scale sparse convolution and a feature selection module based on channel attention, and build a point cloud segmentation network framework on top of them. By introducing multi-scale sparse convolution, the network can capture richer feature information with convolution kernels of different sizes, improving point cloud segmentation results.
PDF
Paper screenshots
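The described module combines convolution branches with different kernel sizes and a channel-attention re-weighting of the fused features. The toy block below shows that combination with dense 3D convolutions for readability; a faithful version would use a sparse convolution library (e.g., MinkowskiEngine or spconv) on voxelized points, and all branch and attention sizes here are assumptions.

```python
# Toy dense stand-in for multi-scale convolution + channel attention
# (real point cloud code would use sparse convolutions on voxels).
import torch
import torch.nn as nn

class MultiScaleChannelAttention(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        # Parallel branches with different receptive fields (kernel sizes).
        self.branches = nn.ModuleList([
            nn.Conv3d(cin, cout, k, padding=k // 2) for k in (1, 3, 5)
        ])
        # SE-style channel attention used to re-weight the fused features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(3 * cout, 3 * cout // 4, 1), nn.ReLU(inplace=True),
            nn.Conv3d(3 * cout // 4, 3 * cout, 1), nn.Sigmoid(),
        )
        self.proj = nn.Conv3d(3 * cout, cout, 1)

    def forward(self, voxels):            # voxels: (B, Cin, D, H, W)
        multi = torch.cat([b(voxels) for b in self.branches], dim=1)
        return self.proj(multi * self.attn(multi))
```

The channel attention acts as the "feature selection" step: it scores each branch's channels and suppresses the less informative ones before the final projection.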
Semantics-Guided Moving Object Segmentation with 3D LiDAR
Authors:Shuo Gu, Suling Yao, Jian Yang, Hui Kong
Moving object segmentation (MOS) is the task of distinguishing moving objects, e.g., moving vehicles and pedestrians, from the surrounding static environment. The segmentation accuracy of MOS can influence odometry, map construction, and planning tasks. In this paper, we propose a semantics-guided convolutional neural network for moving object segmentation. The network takes sequential LiDAR range images as inputs. Instead of segmenting the moving objects directly, the network conducts single-scan-based semantic segmentation and multiple-scan-based moving object segmentation in turn. The semantic segmentation module provides semantic priors for the MOS module, where we propose an adjacent scan association (ASA) module that converts the semantic features of adjacent scans into the same coordinate system to fully exploit cross-scan semantic features. Finally, by analyzing the difference between the transformed features, reliable MOS results can be obtained quickly. Experimental results on the SemanticKITTI MOS dataset prove the effectiveness of our work.
PDF This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
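As far as the abstract describes it, the adjacent scan association (ASA) step amounts to transforming a neighboring scan into the current scan's coordinate frame with the known relative pose and re-projecting it into a range image, so that the semantic features of the two scans line up pixel by pixel. The NumPy sketch below illustrates that association for per-point labels; the field-of-view and image-size parameters follow common spinning-LiDAR settings and are assumptions, not the paper's configuration.

```python
# Simplified sketch of adjacent-scan association on range images (assumed).
import numpy as np

def transform_points(points, pose):
    # points: (N, 3); pose: (4, 4) relative transform, adjacent -> current frame.
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pose @ homo.T).T[:, :3]

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    # Standard range-image projection for a spinning LiDAR (assumed FOV/size).
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    depth = np.linalg.norm(points, axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))
    u = ((yaw / np.pi + 1.0) * 0.5 * W).astype(np.int32) % W
    v = ((1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H).astype(np.int32)
    v = np.clip(v, 0, H - 1)
    return u, v, depth

def associate_semantics(prev_points, prev_sem, rel_pose, H=64, W=2048):
    # Scatter the adjacent scan's per-point semantic labels into the current
    # scan's range-image grid after aligning the coordinate frames.
    aligned = transform_points(prev_points, rel_pose)
    u, v, _ = spherical_projection(aligned, H, W)
    grid = np.full((H, W), -1, dtype=prev_sem.dtype)
    grid[v, u] = prev_sem
    return grid
```

With both scans expressed in the same range-image grid, a per-pixel difference (or a learned comparison) between the associated semantic features can indicate which pixels belong to moving objects.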