ConvFishNet: An efficient backbone for fish classification from composited underwater images.
- Author(s): Qu, Huishan; Wang, Gai-Ge; Li, Yun; Qi, Xin; Zhang, Mengjie
- Source:
Information Sciences, Sep 2024, Vol. 679.
- Abstract:
For the purpose of monitoring fish health, managing aquaculture, and comprehending marine ecology, there is growing interest in the automatic classification of fish species. Recent developments in machine vision-based classification methods, known for their speed and non-destructive nature, have led to the introduction of various automatic categorization approaches. Drawing inspiration from FishNet, a new architecture named ConvFishNet has been proposed. This model incorporates large convolutional kernels and depth-wise separable convolutions to reduce the number of parameters. Additionally, PixelShuffle is incorporated to enrich the upsampled feature information and improve fish classification performance. While maintaining precision in fish classification, the model achieves a lightweight design of only 0.83G. Compared with the FishNet model, the new model reduces parameters by 80%. The model demonstrates a precision of 88.44% on the WildFish dataset subset and 99.8% on the Fish4knowledge dataset, surpassing existing methods including FishNet. This method shows promise for fish classification in challenging underwater environments, such as marine and aquaculture settings, and further investigation is planned for the future. [ABSTRACT FROM AUTHOR]
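As a rough illustration of the two building blocks the abstract names, the sketch below shows a large-kernel depth-wise separable convolution (which cuts parameters relative to a standard convolution of the same kernel size) and a PixelShuffle-based upsampling step. This is not the authors' ConvFishNet code; the module names, kernel size, and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Large-kernel depthwise conv followed by a 1x1 pointwise conv (hypothetical block)."""
    def __init__(self, in_ch, out_ch, kernel_size=7):
        super().__init__()
        # groups=in_ch makes the convolution depthwise: one spatial filter per channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # 1x1 pointwise conv mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class PixelShuffleUpsample(nn.Module):
    """Upsampling via PixelShuffle: expand channels, then rearrange them into space."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.expand = nn.Conv2d(in_ch, out_ch * scale ** 2, kernel_size=1)
        self.shuffle = nn.PixelShuffle(scale)  # (C*r^2, H, W) -> (C, H*r, W*r)

    def forward(self, x):
        return self.shuffle(self.expand(x))

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    feat = DepthwiseSeparableConv(64, 128, kernel_size=7)(x)  # -> (1, 128, 32, 32)
    up = PixelShuffleUpsample(128, 64, scale=2)(feat)         # -> (1, 64, 64, 64)
    print(feat.shape, up.shape)
```

The parameter saving comes from splitting one dense k×k×C_in×C_out convolution into a k×k depthwise pass plus a 1×1 channel-mixing pass, which is why large kernels stay affordable in such designs.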