PDS-Net: A novel point and depth-wise separable convolution for real-time object detection
Journal article   Peer reviewed


Masum Shah Junayed, Md Baharul Islam, Hassan Imani and Tarkan Aydin
International Journal of Multimedia Information Retrieval, Vol. 11(2), pp. 171-188
06-01-2022

Abstract

Subjects: Computer Science, Artificial Intelligence; Computer Science, Software Engineering; Science & Technology
Numerous object detectors and classifiers have achieved acceptable performance in recent years using convolutional neural networks and other efficient architectures. However, most of them still suffer from overfitting, high computational cost, and low efficiency and accuracy in real-time scenarios. This paper proposes a new lightweight model for detecting and classifying objects in images. The model comprises a backbone for extracting in-depth features and a spatial feature pyramid network (SFPN) for accurately detecting and categorizing objects. The proposed backbone uses point-wise separable (PWS) and depth-wise separable convolutions, which are more efficient than standard convolution. The PWS convolution employs a residual shortcut link to reduce computation time. We also propose an SFPN comprising concatenation, transformer encoder-decoder, and feature fusion modules, which enables the simultaneous processing of multi-scale features, the extraction of low-level characteristics, and the creation of a feature pyramid, increasing the effectiveness of the proposed model. The proposed model outperforms all existing backbones for object detection and classification on three publicly accessible datasets: PASCAL VOC 2007, PASCAL VOC 2012, and MS-COCO. Our extensive experiments show that the proposed model outperforms state-of-the-art detectors, with mAP improvements of 2.4% and 2.5% on VOC 2007, 3.0% and 2.6% on VOC 2012, and 2.5% and 3.6% on MS-COCO for the small and large image sizes, respectively. On the MS-COCO dataset, our model achieves 39.4 and 33.1 FPS on a single GPU for the small (320 x 320) and large (512 x 512) image sizes, respectively, which shows that our method can run in real-time.
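The abstract's efficiency claim rests on factoring a standard convolution into a per-channel depth-wise step followed by a 1 x 1 point-wise step. A minimal NumPy sketch of this factorization, and of the resulting parameter savings, is shown below; the function names and shapes are illustrative assumptions, not the paper's actual implementation (which also adds a residual shortcut around the PWS convolution):

```python
import numpy as np

def conv_params(k, c_in, c_out):
    # Standard k x k convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def dws_conv_params(k, c_in, c_out):
    # Depth-wise separable: one k x k filter per input channel,
    # then a 1 x 1 point-wise convolution to mix channels.
    return k * k * c_in + c_in * c_out

def depthwise_separable_conv(x, dw_filters, pw_weights):
    # Illustrative forward pass (no padding, stride 1).
    # x: (H, W, C_in); dw_filters: (k, k, C_in); pw_weights: (C_in, C_out)
    k = dw_filters.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, x.shape[2]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]
            # Depth-wise: each channel is filtered independently.
            out[i, j] = np.sum(patch * dw_filters, axis=(0, 1))
    # Point-wise: 1 x 1 convolution mixes channels.
    return out @ pw_weights

# A 3 x 3, 256 -> 256 layer: ~8.7x fewer weights with the separable form.
print(conv_params(3, 256, 256), dws_conv_params(3, 256, 256))
```

This roughly k*k-fold reduction in weights (and multiply-accumulates) is the standard motivation for separable convolutions in real-time detectors.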

