      Advancing semantic segmentation: Enhanced UNet algorithm with attention mechanism and deformable convolution

      research-article
      PLOS ONE
      Public Library of Science


          Abstract

          This paper presents a novel method for improving semantic segmentation performance in computer vision tasks. Our approach uses an enhanced UNet architecture that leverages an improved ResNet50 backbone: we replace the last layer of ResNet50 with deformable convolution to enhance feature representation. Additionally, we incorporate an attention mechanism, ECA-ASPP (Attention Spatial Pyramid Pooling), in the encoding path of UNet to capture multi-scale contextual information effectively. In the decoding path of UNet, we explore the use of attention mechanisms after concatenating low-level features with high-level features. Specifically, we investigate two types of attention mechanism: ECA (Efficient Channel Attention) and LKA (Large Kernel Attention). Our experiments demonstrate that incorporating attention after concatenation improves segmentation accuracy, and a comparison of the two modules in the decoder path shows that LKA outperforms ECA. This finding highlights the importance of exploring different attention mechanisms and their impact on segmentation performance. To evaluate the effectiveness of the proposed method, we conduct experiments on benchmark datasets, including Stanford and Cityscapes, as well as the newly introduced WildPASS and DensePASS datasets. The proposed method performs well on these datasets, achieving state-of-the-art results with high segmentation accuracy, including mIoU scores of 85.79 on the Stanford dataset and 82.25 on the Cityscapes dataset.
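As a rough illustration of the ECA module named in the abstract, here is a minimal NumPy sketch: each channel of a (C, H, W) feature map is squeezed to a scalar descriptor by global average pooling, a 1-D convolution over neighbouring channels produces per-channel attention weights, and the map is rescaled. The averaging kernel below is a placeholder for the learned weights, and the kernel size k=3 is an assumption, not the paper's configuration.

```python
import numpy as np

def eca(feat, k=3):
    """Efficient Channel Attention sketch: reweight the channels of a
    (C, H, W) feature map via a 1-D convolution over channel descriptors."""
    c = feat.shape[0]
    # Squeeze: global average pool each channel to one scalar.
    desc = feat.mean(axis=(1, 2))                      # shape (C,)
    # Excite: 1-D conv across neighbouring channels.
    # The box filter here stands in for weights learned during training.
    w = np.ones(k) / k
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    conv = np.array([padded[i:i + k] @ w for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))                 # sigmoid gate in (0, 1)
    # Scale: multiply every channel map by its attention weight.
    return feat * gate[:, None, None]
```

The appeal of ECA is that the 1-D convolution adds only k parameters per module, versus the fully connected bottleneck of squeeze-and-excitation blocks.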

          Most cited references (46)


          DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

          In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIoU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
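The atrous-convolution idea in the abstract above (enlarging the field of view without adding parameters) can be sketched in one dimension; this NumPy function and its box-filter weights are illustrative only, not the DeepLab implementation. A 3-tap kernel with dilation rate r covers a span of 2r + 1 input samples:

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution: sample the input with gaps of
    `rate` between kernel taps, so a kernel of length k covers a span of
    (k - 1) * rate + 1 samples with no extra parameters."""
    k = len(w)
    span = (k - 1) * rate + 1
    return np.array([x[i:i + span:rate] @ w
                     for i in range(len(x) - span + 1)])
```

ASPP, as the abstract describes, simply applies this operation in parallel at several rates and fuses the outputs, which is what captures objects at multiple scales.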

            SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

            We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full-input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and is the most memory-efficient at inference compared with other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet.
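The pooling-indices trick described in the SegNet abstract can be sketched in NumPy (the function names and the fixed 2x2 window are assumptions for illustration): the encoder records the position of each maximum, and the decoder scatters the pooled values back to those positions, yielding the sparse upsampled maps the abstract mentions.

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max pooling on an (H, W) map; also return the flat index of
    each maximum so a decoder can reverse the operation."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    idx = np.zeros((h // 2, w // 2), dtype=int)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            win = x[i:i + 2, j:j + 2]
            r, c = np.unravel_index(win.argmax(), win.shape)
            pooled[i // 2, j // 2] = win[r, c]
            idx[i // 2, j // 2] = (i + r) * w + (j + c)
    return pooled, idx

def max_unpool(pooled, idx, shape):
    """SegNet-style non-linear upsampling: place each pooled value back
    at its recorded position; every other location stays zero (sparse)."""
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out
```

Because only the indices are stored (not full feature maps, as in skip-connection decoders), this design trades a small accuracy cost for the inference-memory efficiency the benchmark highlights.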

              A survey on Image Data Augmentation for Deep Learning


                Author and article information

                Contributors
                Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Validation, Writing – original draft, Writing – review & editing
                Roles: Supervision, Writing – review & editing
                Roles: Methodology, Supervision, Validation, Writing – review & editing
                Role: Editor
                Journal
                PLoS One
                Public Library of Science (San Francisco, CA, USA)
                ISSN: 1932-6203
                16 January 2025
                Volume 20, Issue 1: e0305561
                Affiliations
                [001] Department of Electrical and Computer Engineering, University of Birjand, Birjand, Iran
                Institut de Robotica i Informatica Industrial, Spain
                Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                Author information
                https://orcid.org/0009-0005-3168-4155
                https://orcid.org/0000-0002-9096-8626
                Article
                Manuscript: PONE-D-24-04945
                DOI: 10.1371/journal.pone.0305561
                PMCID: 11737789
                PMID: 39820812
                892c2727-9cec-448c-949e-33e78e91ce2b
                © 2025 Sahragard et al

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Received: 5 February 2024
                Accepted: 31 May 2024
                Page count
                Figures: 8, Tables: 8, Pages: 25
                Funding
                The author(s) received no specific funding for this work.
                Categories
                Research Article
                Subjects: Convolution; Attention; Imaging Techniques; Semantics; Computer Vision; Urban Environments; Neural Networks; Deformation
                Custom metadata
                All relevant data are within the paper and its Supporting Information files. The authors utilized the following datasets: Cityscapes Dataset - Semantic Understanding of Urban Street Scenes ( https://www.cityscapes-dataset.com/); Stanford 2D-3D Dataset ( http://dags.stanford.edu/data/iccv09Data.tar.gz); Panoramic Dataset (DensePASS and WildPASS) ( https://github.com/elnino9ykl/WildPASS?tab=readme-ov-file).
