Detection / Segmentation / Tracking


Updated 2022-08-26

Comparison of Object Detection Algorithms for Street-level Objects

Authors:Martinus Grady Naftali, Jason Sebastian Sulistyawan, Kelvin Julian

Object detection for street-level objects can be applied to various use cases, from car and traffic detection to self-driving car systems. Therefore, finding the best object detection algorithm is essential for applying it effectively. Many object detection algorithms have been released and many comparisons have been made, but few have compared the latest algorithms, such as YOLOv5, with a focus on street-level objects. This paper compares several one-stage detector algorithms: SSD MobileNetv2 FPN-lite 320x320, YOLOv3, YOLOv4, YOLOv5l, and YOLOv5s, for street-level object detection in real-time images. The experiment uses a modified Udacity Self Driving Car Dataset with 3,169 images. The dataset is split into train, validation, and test sets, then preprocessed and augmented using rescaling, hue shifting, and noise. Each algorithm is then trained and evaluated. Based on the experiments, the algorithms produce decent results in terms of inference time and their precision, recall, F1-score, and mean average precision (mAP). The results also show that YOLOv5l outperforms the other algorithms in accuracy with a mAP@.5 of 0.593, while MobileNetv2 FPN-lite has the fastest inference time at only 3.20 ms. YOLOv5s is found to be the most efficient, with accuracy close to that of YOLOv5l and speed almost as fast as MobileNetv2 FPN-lite. This shows that several of these algorithms are suitable for street-level object detection and viable enough to be used in self-driving cars.
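
Below is a minimal, self-contained Python sketch of the evaluation quantities the abstract mentions (precision, recall, F1-score, and an all-point-interpolated average precision); the numbers in the example are made up and this is not the authors' evaluation code.

```python
# Illustrative sketch (not the paper's code): precision, recall, F1, and a
# simple all-point-interpolated AP computed from detection counts.
from typing import List, Tuple

def precision_recall_f1(tp: int, fp: int, fn: int) -> Tuple[float, float, float]:
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

def average_precision(recalls: List[float], precisions: List[float]) -> float:
    # Area under the precision envelope (all-point interpolation).
    points = sorted(zip(recalls, precisions))
    ap, prev_r = 0.0, 0.0
    for r, _ in points:
        # Precision envelope: best precision achievable at recall >= r.
        p_interp = max(pp for rr, pp in points if rr >= r)
        ap += (r - prev_r) * p_interp
        prev_r = r
    return ap

if __name__ == "__main__":
    p, r, f1 = precision_recall_f1(tp=85, fp=20, fn=15)
    print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
    print("AP:", average_precision([0.2, 0.5, 0.8], [0.9, 0.8, 0.6]))
```
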
PDF 11 pages, 9 figures, 5 tables


Depth-Assisted ResiDualGAN for Cross-Domain Aerial Images Semantic Segmentation

Authors:Yang Zhao, Peng Guo, Han Gao, Xiuwan Chen

Unsupervised domain adaptation (UDA) is an approach to minimizing the domain gap. Generative methods are common approaches to minimizing the domain gap of aerial images, which improves the performance of downstream tasks, e.g., cross-domain semantic segmentation. For aerial images, the digital surface model (DSM) is usually available in both the source domain and the target domain, and the depth information in the DSM brings external information to generative models; however, little research utilizes it. In this paper, depth-assisted ResiDualGAN (DRDG) is proposed, where a depth-supervised loss (DSL) and a depth cycle-consistency loss (DCCL) are used to bring depth information into the generative model. Experimental results show that DRDG reaches state-of-the-art accuracy among generative methods on cross-domain semantic segmentation tasks.
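
As a rough illustration of how depth terms can be attached to a generative model, here is a hedged PyTorch sketch of a depth-supervised loss and a depth cycle-consistency loss written as plain L1 penalties; the paper's exact DSL and DCCL definitions may differ.

```python
# Illustrative sketch only: one plausible form of depth-supervised and
# depth cycle-consistency terms (L1 penalties).
import torch
import torch.nn.functional as F

def depth_supervised_loss(pred_depth: torch.Tensor, dsm_depth: torch.Tensor) -> torch.Tensor:
    # Penalize deviation of the generator's depth output from the DSM.
    return F.l1_loss(pred_depth, dsm_depth)

def depth_cycle_consistency_loss(depth_src: torch.Tensor,
                                 depth_cycled: torch.Tensor) -> torch.Tensor:
    # Depth recovered after source -> target -> source translation should
    # match the original source-domain depth.
    return F.l1_loss(depth_cycled, depth_src)

if __name__ == "__main__":
    d_pred, d_dsm = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    d_src, d_cyc = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    total = depth_supervised_loss(d_pred, d_dsm) + 0.5 * depth_cycle_consistency_loss(d_src, d_cyc)
    print(total.item())
```
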
PDF


Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation

Authors:Jianlong Yuan, Jinchao Ge, Qi Qian, Zhibin Wang, Fan Wang, Yifan Liu

Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods, and remarkable performance has been achieved by benefiting from image, feature, and network perturbations. To make full use of these perturbations, in this work we propose a new consistency regularization framework called mutual knowledge distillation (MKD). We introduce two auxiliary mean-teacher models based on the consistency regularization method. More specifically, we use the pseudo labels generated by one mean teacher to supervise the other student network, achieving mutual knowledge distillation between the two branches. In addition to image-level strong and weak augmentation, we also employ feature augmentation that considers implicit semantic distributions to add further perturbations to the students. The proposed framework significantly increases the diversity of the training samples. Extensive experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings.
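
A minimal PyTorch sketch of the cross-supervision idea described above, assuming two student networks each with an EMA mean teacher; the function names and the hard pseudo-label choice are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: each teacher's pseudo label supervises the other
# branch's student; teachers are EMA copies and receive no gradients.
import torch
import torch.nn.functional as F

def mutual_distillation_loss(student_a_logits, student_b_logits,
                             teacher_a_logits, teacher_b_logits):
    # All logits: (N, C, H, W).
    with torch.no_grad():
        pseudo_a = teacher_a_logits.argmax(dim=1)  # supervises student B
        pseudo_b = teacher_b_logits.argmax(dim=1)  # supervises student A
    loss_a = F.cross_entropy(student_a_logits, pseudo_b)
    loss_b = F.cross_entropy(student_b_logits, pseudo_a)
    return loss_a + loss_b

@torch.no_grad()
def ema_update(teacher, student, momentum: float = 0.99):
    # Standard mean-teacher update of the teacher's parameters.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)
```
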
PDF


KD-SCFNet: Towards More Accurate and Efficient Salient Object Detection via Knowledge Distillation

Authors:Jin Zhang, Qiuwei Liang, Yanjiao Shi

Most existing salient object detection (SOD) models are difficult to apply due to their complex and huge model structures. Although some lightweight models have been proposed, their accuracy is barely satisfactory. In this paper, we design a novel semantics-guided contextual fusion network (SCFNet) that focuses on the interactive fusion of multi-level features for accurate and efficient salient object detection. Furthermore, we apply knowledge distillation to the SOD task and provide a sizeable dataset, KD-SOD80K. In detail, we transfer the rich knowledge from a seasoned teacher to the untrained SCFNet through unlabeled images, enabling SCFNet to learn a strong generalization ability to detect salient objects more accurately. The knowledge-distillation-based SCFNet (KD-SCFNet) achieves accuracy comparable to state-of-the-art heavyweight methods with less than 1M parameters and a real-time detection speed of 174 FPS. Extensive experiments demonstrate the robustness and effectiveness of the proposed distillation method and SOD framework. Code and data: https://github.com/zhangjinCV/KD-SCFNet.
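
As a hedged illustration of distilling a frozen teacher into a lightweight student on unlabeled images, here is a short PyTorch sketch; the BCE-on-soft-maps objective is an assumption, not necessarily the paper's distillation loss.

```python
# Illustrative sketch only: one training step that distills a frozen teacher's
# saliency maps into a lightweight student using unlabeled images.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, unlabeled_images, optimizer):
    teacher.eval()
    with torch.no_grad():
        soft_targets = torch.sigmoid(teacher(unlabeled_images))  # (N, 1, H, W)
    pred = student(unlabeled_images)                              # student logits
    loss = F.binary_cross_entropy_with_logits(pred, soft_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
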
PDF The picture is wrong and needs to be withdrawn, not revised


Threshold-adaptive Unsupervised Focal Loss for Domain Adaptation of Semantic Segmentation

Authors:Weihao Yan, Yeqiang Qian, Chunxiang Wang, Ming Yang

Semantic segmentation is an important task for intelligent vehicles to understand the environment. Current deep learning methods require large amounts of labeled data for training. Manual annotation is expensive, while simulators can provide accurate annotations. However, the performance of a semantic segmentation model trained with simulator data decreases significantly when applied to actual scenes. Unsupervised domain adaptation (UDA) for semantic segmentation has recently gained increasing research attention, aiming to reduce the domain gap and improve performance on the target domain. In this paper, we propose a novel two-stage entropy-based UDA method for semantic segmentation. In stage one, we design a threshold-adaptive unsupervised focal loss to regularize predictions in the target domain; it has a mild gradient neutralization mechanism and mitigates the problem that hard samples are barely optimized in entropy-based methods. In stage two, we introduce a data augmentation method named cross-domain image mixing (CIM) to bridge the semantic knowledge from the two domains. Our method achieves state-of-the-art mIoUs of 58.4% and 59.6% on SYNTHIA-to-Cityscapes and GTA5-to-Cityscapes using DeepLabV2, and competitive performance using the lightweight BiSeNet.
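
The sketch below shows one plausible form of a focal-weighted, threshold-adaptive loss on unlabeled target predictions; both the percentile-based threshold rule and the focal weighting here are assumptions made for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only: focal-weighted self-training term on unlabeled
# target-domain predictions with an adaptive confidence threshold.
import torch
import torch.nn.functional as F

def threshold_focal_unsup_loss(logits: torch.Tensor,
                               gamma: float = 2.0,
                               percentile: float = 0.5) -> torch.Tensor:
    # logits: (N, C, H, W) for unlabeled target images.
    prob = F.softmax(logits, dim=1)
    p_max, _ = prob.max(dim=1)                      # per-pixel confidence
    # Adaptive threshold: a percentile of the confidence distribution
    # (a guess at "threshold-adaptive"; the paper's rule may differ).
    thresh = torch.quantile(p_max.detach().flatten(), percentile)
    mask = (p_max > thresh).float()
    # Focal-style weighting keeps a mild gradient on hard (low-confidence) pixels.
    loss = -((1.0 - p_max) ** gamma) * torch.log(p_max.clamp_min(1e-8))
    return (mask * loss).sum() / mask.sum().clamp_min(1.0)
```
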
PDF 10 pages, 8 figures, 7 tables, submitted to T-ITS on April 2, 2022


WEDGE: Web-Image Assisted Domain Generalization for Semantic Segmentation

Authors:Namyup Kim, Taeyoung Son, Cuiling Lan, Wenjun Zeng, Suha Kwak

Domain generalization for semantic segmentation is highly demanded in real applications, where a trained model is expected to work well in previously unseen domains. One challenge lies in the lack of data which could cover the diverse distributions of the possible unseen domains for training. In this paper, we propose a WEb-image assisted Domain GEneralization (WEDGE) scheme, which is the first to exploit the diversity of web-crawled images for generalizable semantic segmentation. To explore and exploit the real-world data distributions, we collect a web-crawled dataset which presents large diversity in terms of weather conditions, sites, lighting, camera styles, etc. We also present a method which injects the style representation of the web-crawled data into the source domain on-the-fly during training, which enables the network to experience images of diverse styles with reliable labels for effective training. Moreover, we use the web-crawled dataset with predicted pseudo labels for training to further enhance the capability of the network. Extensive experiments demonstrate that our method clearly outperforms existing domain generalization techniques.
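
A minimal sketch of one way such style injection could look, using AdaIN-style channel statistics from a web-crawled image to restyle source-domain features; this is an assumed reading of "injecting the style representation ... on-the-fly", not the WEDGE implementation.

```python
# Illustrative sketch only: swap channel-wise feature statistics of a source
# image with those of a web-crawled image (AdaIN-style).
import torch

def inject_style(source_feat: torch.Tensor, web_feat: torch.Tensor,
                 eps: float = 1e-5) -> torch.Tensor:
    # source_feat, web_feat: (N, C, H, W) feature maps.
    mu_s = source_feat.mean(dim=(2, 3), keepdim=True)
    std_s = source_feat.std(dim=(2, 3), keepdim=True) + eps
    mu_w = web_feat.mean(dim=(2, 3), keepdim=True)
    std_w = web_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize source content, then re-scale with the web image's statistics.
    return (source_feat - mu_s) / std_s * std_w + mu_w
```
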
PDF


Object Detection in Aerial Images with Uncertainty-Aware Graph Network

Authors:Jongha Kim, Jinheon Baek, Sung Ju Hwang

In this work, we propose a novel uncertainty-aware object detection framework with a structured graph, where nodes and edges denote objects and their spatial-semantic similarities, respectively. Specifically, we aim to consider relationships among objects to contextualize them effectively. To achieve this, we first detect objects and then measure their semantic and spatial distances to construct an object graph, which is then processed by a graph neural network (GNN) to refine the visual CNN features of the objects. However, refining the CNN features and detection results of every object is inefficient and may not be necessary, as they include correct predictions with low uncertainties. Therefore, we propose to handle uncertain objects by not only transferring representations from certain objects (sources) to uncertain objects (targets) over the directed graph, but also improving CNN features only for objects regarded as uncertain, using their representational outputs from the GNN. Furthermore, we compute the training loss with larger weights on uncertain objects, to concentrate on improving uncertain object predictions while maintaining high performance on certain objects. We refer to our model as Uncertainty-Aware Graph network for object DETection (UAGDet). We experimentally validate it on the challenging large-scale aerial image dataset DOTA, whose images contain many objects of small to large sizes, and show that it improves the performance of an existing object detection network.
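
As a rough illustration of uncertainty-weighted training, the sketch below weights per-object losses by predictive entropy; the entropy-based score and the weighting rule are assumptions made for illustration, not the UAGDet loss.

```python
# Illustrative sketch only: weight per-object detection losses by a simple
# uncertainty score (predictive entropy of the class distribution), so that
# uncertain objects contribute more to the total loss.
import torch

def uncertainty_weighted_loss(per_object_losses: torch.Tensor,
                              class_probs: torch.Tensor,
                              alpha: float = 1.0) -> torch.Tensor:
    # per_object_losses: (M,) loss of each detected object.
    # class_probs:       (M, C) softmax class distribution per object.
    entropy = -(class_probs * class_probs.clamp_min(1e-8).log()).sum(dim=1)
    weights = 1.0 + alpha * entropy / entropy.max().clamp_min(1e-8)
    return (weights * per_object_losses).mean()
```
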
PDF ECCV Workshop 2022


MonoPCNS: Monocular 3D Object Detection via Point Cloud Network Simulation

Authors:Han Sun, Zhaoxin Fan, Zhenbo Song, Zhicheng Wang, Kejian Wu, Jianfeng Lu

Monocular 3D object detection is a fundamental and very important task for many applications, including autonomous driving, robotic grasping, and augmented reality. Existing leading methods tend to estimate the depth of the input image first and then detect 3D objects based on the resulting point cloud. This routine suffers from the inherent gap between depth estimation and object detection, and the accumulation of prediction errors also affects performance. In this paper, a novel method named MonoPCNS is proposed. The insight behind MonoPCNS is to simulate the feature learning behavior of a point-cloud-based detector for the monocular detector during training, so that at inference time the learned features and predictions are as similar as possible to those of the point-cloud-based detector. To achieve this, we propose a scene-level simulation module, an RoI-level simulation module, and a response-level simulation module, which are progressively applied to the detector's full feature learning and prediction pipeline. We apply our method to the well-known M3D-RPN and CaDDN detectors, conducting extensive experiments on the KITTI and Waymo Open datasets. Results show that our method consistently improves the performance of different monocular detectors by a large margin without changing their network architectures. Our method finally achieves state-of-the-art performance.
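
A hedged sketch of the generic feature-simulation idea: matching the monocular detector's intermediate features to a frozen point-cloud detector's features with an L2 penalty. The paper's scene-, RoI-, and response-level modules are more elaborate than this.

```python
# Illustrative sketch only: multi-level feature matching against a frozen
# point-cloud-based teacher detector.
import torch
import torch.nn.functional as F

def feature_simulation_loss(mono_feats, lidar_feats, weights=None):
    # mono_feats / lidar_feats: lists of (N, C, H, W) tensors with matching shapes.
    weights = weights or [1.0] * len(mono_feats)
    loss = torch.zeros(())
    for w, f_m, f_l in zip(weights, mono_feats, lidar_feats):
        loss = loss + w * F.mse_loss(f_m, f_l.detach())
    return loss
```
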
PDF


Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation

Authors:Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, Yinghuan Shi

In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch for semi-supervised classification, where the prediction of a weakly perturbed image serves as supervision for its strongly perturbed version. Intriguingly, we observe that such a simple pipeline already achieves competitive results against recent advanced works when transferred to our segmentation scenario. However, its success heavily relies on the manual design of strong data augmentations, which may be limited and inadequate for exploring a broader perturbation space. Motivated by this, we propose an auxiliary feature perturbation stream as a supplement, leading to an expanded perturbation space. On the other hand, to sufficiently probe original image-level augmentations, we present a dual-stream perturbation technique, enabling two strong views to be simultaneously guided by a common weak view. Consequently, our overall Unified Dual-Stream Perturbations approach (UniMatch) surpasses all existing methods significantly across all evaluation protocols on the Pascal, Cityscapes, and COCO benchmarks. We also demonstrate the superiority of our method in remote sensing interpretation and medical image analysis. Code is available at https://github.com/LiheYoung/UniMatch.
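
Since the weak-to-strong recipe is well known from FixMatch, here is a minimal PyTorch sketch of a dual strong-view consistency loss guided by a common weak view, with a second dropout-active forward pass used as a crude stand-in for the feature-perturbation stream; this is illustrative, not the official UniMatch code.

```python
# Illustrative sketch only: weak-to-strong consistency with two strong views
# and a dropout-based stand-in for feature perturbation.
import torch
import torch.nn.functional as F

def unsup_consistency_loss(model, weak_img, strong_img_1, strong_img_2,
                           conf_thresh: float = 0.95):
    # Pseudo label from the weakly perturbed view (no gradient).
    with torch.no_grad():
        prob = model(weak_img).softmax(dim=1)           # (N, C, H, W)
        conf, pseudo = prob.max(dim=1)                   # (N, H, W)
        mask = (conf >= conf_thresh).float()

    def masked_ce(logits):
        ce = F.cross_entropy(logits, pseudo, reduction="none")
        return (ce * mask).sum() / mask.sum().clamp_min(1.0)

    # Two image-level strong streams guided by the same weak view.
    loss = masked_ce(model(strong_img_1)) + masked_ce(model(strong_img_2))
    # Stand-in for the feature-perturbation stream: another forward pass of the
    # weak view with dropout active (assumes the model contains dropout layers).
    loss = loss + masked_ce(model(weak_img))
    return loss / 3.0
```
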
PDF 18 pages, 18 tables


A Simple Baseline for Multi-Camera 3D Object Detection

Authors:Yunpeng Zhang, Wenzhao Zheng, Zheng Zhu, Guan Huang, Jie Zhou, Jiwen Lu

3D object detection with surrounding cameras has been a promising direction for autonomous driving. In this paper, we present SimMOD, a Simple baseline for Multi-camera Object Detection. To incorporate multi-view information and build upon previous efforts on monocular 3D object detection, the framework is built on sample-wise object proposals and works in a two-stage manner. First, we extract multi-scale features and generate perspective object proposals on each monocular image. Second, the multi-view proposals are aggregated and then iteratively refined with multi-view and multi-scale visual features in the DETR3D style. The refined proposals are decoded end-to-end into the detection results. To further boost performance, we incorporate auxiliary branches alongside proposal generation to enhance feature learning, and design target filtering and teacher forcing to promote the consistency of two-stage training. We conduct extensive experiments on the nuScenes 3D object detection benchmark to demonstrate the effectiveness of SimMOD and achieve new state-of-the-art performance. Code will be available at https://github.com/zhangyp15/SimMOD.
PDF


Bridging the View Disparity of Radar and Camera Features for Multi-modal Fusion 3D Object Detection

Authors:Taohua Zhou, Yining Shi, Junjie Chen, Kun Jiang, Mengmeng Yang, Diange Yang

Environmental perception with multi-modal fusion of radar and camera is crucial in autonomous driving for increasing accuracy, completeness, and robustness. This paper focuses on how to utilize millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection. A novel method that realizes feature-level fusion under bird's-eye view (BEV) for a better feature representation is proposed. First, radar features are augmented with temporal accumulation and sent to a temporal-spatial encoder for radar feature extraction. Meanwhile, multi-scale image 2D features that adapt to various spatial scales are obtained by the image backbone and neck. Then, image features are transformed to BEV with the designed view transformer. In addition, this work fuses the multi-modal features with a two-stage fusion model consisting of point fusion and ROI fusion. Finally, a detection head regresses object categories and 3D locations. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on the most important detection metrics, mean average precision (mAP) and nuScenes detection score (NDS), on the challenging nuScenes dataset.
PDF 11 pages, 6 figures


Nuclei instance segmentation and classification in histopathology images with StarDist

Authors:Martin Weigert, Uwe Schmidt

Instance segmentation and classification of nuclei is an important task in computational pathology. We show that StarDist, a deep learning nuclei segmentation method originally developed for fluorescence microscopy, can be extended and successfully applied to histopathology images. This is substantiated by conducting experiments on the Lizard dataset, and through entering the Colon Nuclei Identification and Counting (CoNIC) challenge 2022, where our approach achieved the first spot on the leaderboard for the segmentation and classification task for both the preliminary and final test phase.
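
For readers who want to try the underlying library, a minimal usage sketch with the publicly released StarDist Python API is shown below; the pretrained model name and the placeholder input image are assumptions, and this is not the authors' CoNIC challenge pipeline.

```python
# Minimal StarDist usage sketch (not the authors' CoNIC submission).
# Assumes `pip install stardist csbdeep` and an H&E image as a numpy array.
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

# Pretrained 2D model for H&E histopathology (name assumed from the public
# model zoo; swap in your own trained model if needed).
model = StarDist2D.from_pretrained("2D_versatile_he")

img = np.random.randint(0, 255, (256, 256, 3)).astype(np.uint8)  # placeholder image
labels, details = model.predict_instances(normalize(img))
print(labels.shape, labels.max(), "instances")
```
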
PDF


YOLOV: Making Still Image Object Detectors Great at Video Object Detection

Authors:Yuheng Shi, Naiyan Wang, Xiaojie Guo

Video object detection (VID) is challenging because of the high variation of object appearance and the diverse deterioration in some frames. On the positive side, detection in a given frame of a video, compared with a still image, can draw support from other frames. Hence, how to aggregate features across different frames is pivotal to the VID problem. Most existing aggregation algorithms are customized for two-stage detectors, which are usually computationally expensive due to their two-stage nature. This work proposes a simple yet effective strategy to address these concerns, which incurs only marginal overhead while yielding significant gains in accuracy. Concretely, different from the traditional two-stage pipeline, we advocate placing the region-level selection after the one-stage detection to avoid processing massive low-quality candidates. Besides, a novel module is constructed to evaluate the relationship between a target frame and its reference frames and to guide the aggregation. Extensive experiments and ablation studies verify the efficacy of our design and reveal its superiority over other state-of-the-art VID approaches in both effectiveness and efficiency. Our YOLOX-based model achieves promising performance (e.g., 87.5% AP50 at over 30 FPS on the ImageNet VID dataset on a single 2080Ti GPU), making it attractive for large-scale or real-time applications. The implementation is simple; the demo code and models are available at https://github.com/YuHengsss/YOLOV.
PDF


A Survey of Self-Supervised and Few-Shot Object Detection

Authors:Gabriel Huang, Issam Laradji, David Vazquez, Simon Lacoste-Julien, Pau Rodriguez

Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim at learning representations from unlabeled data which transfer well to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches on few-shot and self-supervised object detection. Then, we give our main takeaways and discuss future research directions. Project page at https://gabrielhuang.github.io/fsod-survey/
PDF To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence. Awesome Few-Shot Object Detection (Leaderboard) at https://github.com/gabrielhuang/awesome-few-shot-object-detection


Single-Stage Open-world Instance Segmentation with Cross-task Consistency Regularization

Authors:Xizhe Xue, Dongdong Yu, Lingqiao Liu, Yu Liu, Ying Li, Zehuan Yuan, Ping Song, Mike Zheng Shou

Open-world instance segmentation (OWIS) aims to segment class-agnostic instances from images, which has a wide range of real-world applications such as autonomous driving. Most existing approaches follow a two-stage pipeline: performing class-agnostic detection first and then class-specific mask segmentation. In contrast, this paper proposes a single-stage framework to produce a mask for each instance directly. Also, instance mask annotations could be noisy in the existing datasets; to overcome this issue, we introduce a new regularization loss. Specifically, we first train an extra branch to perform an auxiliary task of predicting foreground regions (i.e. regions belonging to any object instance), and then encourage the prediction from the auxiliary branch to be consistent with the predictions of the instance masks. The key insight is that such a cross-task consistency loss could act as an error-correcting mechanism to combat the errors in annotations. Further, we discover that the proposed cross-task consistency loss can be applied to images without any annotation, lending itself to a semi-supervised learning method. Through extensive experiments, we demonstrate that the proposed method can achieve impressive results in both fully-supervised and semi-supervised settings. Compared to SOTA methods, the proposed method significantly improves the AP100 score by 4.75% in the UVO→UVO setting and 4.05% in the COCO→UVO setting. In the case of semi-supervised learning, our model learned with only 30% labeled data even outperforms its fully-supervised counterpart with 50% labeled data. The code will be released soon.
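
A small sketch of one way to phrase a cross-task consistency term, penalizing disagreement between the auxiliary foreground map and the pixel-wise union of predicted instance masks; the exact loss form and which branch is treated as the target are assumptions, not the authors' code.

```python
# Illustrative sketch only: cross-task consistency between an auxiliary
# foreground branch and the union of predicted instance masks.
import torch
import torch.nn.functional as F

def cross_task_consistency_loss(instance_mask_probs: torch.Tensor,
                                foreground_probs: torch.Tensor) -> torch.Tensor:
    # instance_mask_probs: (N, K, H, W) per-instance mask probabilities.
    # foreground_probs:    (N, 1, H, W) auxiliary foreground prediction.
    union = instance_mask_probs.max(dim=1, keepdim=True).values   # (N, 1, H, W)
    # Encourage the auxiliary foreground prediction to agree with the union of
    # instance masks; no labels are needed, so this also applies to unlabeled images.
    return F.binary_cross_entropy(foreground_probs, union.detach())
```
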
PDF


StructToken: Rethinking Semantic Segmentation with Structural Prior

Authors:Fangjian Lin, Zhanhao Liang, Sitong Wu, Junjun He, Kai Chen, Shengwei Tian

In previous deep-learning-based methods, semantic segmentation has been regarded as a static or dynamic per-pixel classification task, i.e., classifying each pixel representation into a specific category. However, these methods only focus on learning better pixel representations or classification kernels while ignoring the structural information of objects, which is critical to the human decision-making mechanism. In this paper, we present a new paradigm for semantic segmentation, named structure-aware extraction. Specifically, it generates the segmentation results via interactions between a set of learnable structure tokens and the image feature, which aims to progressively extract the structural information of each category from the feature. Extensive experiments show that our StructToken outperforms the state-of-the-art on three widely-used benchmarks, including ADE20K, Cityscapes, and COCO-Stuff-10K.
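
To make the token-feature interaction concrete, here is a hedged PyTorch sketch of learnable per-class structure tokens attending to image features, with the attention map read out as coarse per-class logits; this is an illustrative stand-in, not the StructToken architecture.

```python
# Illustrative sketch only: per-class learnable tokens attend to flattened
# image features; the attention map doubles as coarse segmentation logits.
import torch
import torch.nn as nn

class StructureTokenHead(nn.Module):
    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_classes, dim))
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, dim, H, W) backbone features.
        n, c, h, w = feats.shape
        x = feats.flatten(2).transpose(1, 2)                     # (N, H*W, dim)
        q = self.q(self.tokens).unsqueeze(0).expand(n, -1, -1)   # (N, K, dim)
        k = self.k(x)                                            # (N, H*W, dim)
        attn = q @ k.transpose(1, 2) / (c ** 0.5)                # (N, K, H*W)
        return attn.view(n, -1, h, w)                            # per-class logits

if __name__ == "__main__":
    head = StructureTokenHead(num_classes=19, dim=256)
    logits = head(torch.randn(2, 256, 64, 64))
    print(logits.shape)  # torch.Size([2, 19, 64, 64])
```
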
PDF


GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data

Authors:Hongjae Lee, Changwoo Han, Seung-Won Jung

Semantic segmentation for autonomous driving should be robust against various in-the-wild environments. Nighttime semantic segmentation is especially challenging due to a lack of annotated nighttime images and a large domain gap from daytime images with sufficient annotation. In this paper, we propose a novel GPS-based training framework for nighttime semantic segmentation. Given GPS-aligned pairs of daytime and nighttime images, we perform cross-domain correspondence matching to obtain pixel-level pseudo supervision. Moreover, we conduct flow estimation between daytime video frames and apply GPS-based scaling to acquire another pixel-level pseudo supervision. Using these pseudo supervisions with a confidence map, we train a nighttime semantic segmentation network without any annotation from nighttime images. Experimental results demonstrate the effectiveness of the proposed method on several nighttime semantic segmentation datasets. Our source code is available at https://github.com/jimmy9704/GPS-GLASS.
PDF


Generalised Co-Salient Object Detection

Authors:Jiawei Liu, Jing Zhang, Kaihao Zhang, Nick Barnes

Conventional co-salient object detection (CoSOD) makes the strong assumption that "a common salient object exists in every image of the same group". However, this biased assumption contradicts real scenarios where co-salient objects could be partially or completely absent in a group of images. We propose a random-sampling-based Generalised CoSOD Training (GCT) strategy to distill awareness of the inter-image absence of co-salient object(s) into CoSOD models. In addition, the random sampling process inherent in GCT enables the generation of a high-quality uncertainty map, with which we can further remediate less confident model predictions that are prone to localising non-common salient objects. To evaluate the generalisation ability of CoSOD models, we propose two new testing datasets, namely CoCA-Common and CoCA-Zero, where a common salient object is partially present in the former and completely absent in the latter. Extensive experiments demonstrate that our proposed method significantly improves the generalisation ability of CoSOD models on the two new datasets, while not negatively impacting performance under the conventional CoSOD setting. Codes are available at https://github.com/Carlisle-Liu/GCoSOD.
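
A minimal sketch of a GCT-style group sampler that randomly mixes in images from other groups, so the "common" object may be partially or completely absent; the probability, group size, and data layout are illustrative assumptions, not the authors' exact sampling rule.

```python
# Illustrative sketch only: build a training group in which some images are
# drawn from other groups, so the co-salient object may be absent.
import random
from typing import Dict, List

def sample_generalised_group(groups: Dict[str, List[str]], group_id: str,
                             group_size: int = 5, absent_prob: float = 0.3) -> List[str]:
    in_group = groups[group_id]
    other_imgs = [p for g, imgs in groups.items() if g != group_id for p in imgs]
    batch = []
    for _ in range(group_size):
        if other_imgs and random.random() < absent_prob:
            batch.append(random.choice(other_imgs))   # co-salient object absent
        else:
            batch.append(random.choice(in_group))     # co-salient object present
    return batch

if __name__ == "__main__":
    toy = {"dog": ["d1.jpg", "d2.jpg", "d3.jpg"], "cat": ["c1.jpg", "c2.jpg"]}
    print(sample_generalised_group(toy, "dog"))
```
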
PDF


Author: 木子已
Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 木子已 as the source when reposting!