Industrial Fish Classifier with Deep Artificial Neural Network
Mahamat Ahmat Issamadine
Bartin University
https://orcid.org/0009-0006-3534-3136
Yasemin Erkan
Bartin University
https://orcid.org/0000-0002-5825-2177
Ersin Alaybeyoğlu
Bartin University
https://orcid.org/0000-0002-8318-4081
DOI: https://doi.org/10.56038/oprd.v5i1.504
Keywords: Image Classification, Intelligent Technology, YOLO, Industrial Applications
Abstract
Today, machine learning-based decision support systems play a facilitating role in almost every aspect of life, and their integration into industrial production systems yields fast and effective production solutions. Most current machine learning-based artificial intelligence technologies offer solutions built on image processing. In this study, a new artificial intelligence model that can effectively classify different fish species is proposed using YOLO, a deep artificial neural network algorithm based on the image processing approach. The model, trained on an original fish data set, provides a real-time land-based support solution that can be easily integrated into industrial applications.
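The abstract does not detail the training pipeline, so the following is only a minimal sketch of how such a YOLO-based fish classifier could be trained and queried in real time. It assumes the Ultralytics YOLO Python package; the checkpoint name, the fish.yaml dataset configuration, the sample image, and the hyperparameters are illustrative placeholders, not values from the study.

# Minimal sketch, assuming the Ultralytics YOLO package; "fish.yaml"
# (dataset paths and fish class names), the checkpoint, and the
# hyperparameters are hypothetical placeholders, not taken from the paper.
from ultralytics import YOLO

# Fine-tune a small pretrained detection model on the fish data set.
model = YOLO("yolov8n.pt")
model.train(data="fish.yaml", epochs=100, imgsz=640)

# Real-time-style inference on a single frame or image file:
# each result holds the detected boxes with class IDs and confidences.
results = model.predict(source="sample_fish.jpg", conf=0.5)
for r in results:
    for box in r.boxes:
        cls_id = int(box.cls[0])    # predicted species index
        score = float(box.conf[0])  # detection confidence
        print(r.names[cls_id], f"{score:.2f}")

In such a setup, treating species classification as object detection also localizes each fish in the frame, which is what makes the approach usable on a live camera feed in an industrial line.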
References
F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychological Review, vol. 65, no. 6, p. 386, 1958. DOI: https://doi.org/10.1037/h0042519
A.-R. Mohamed, G. Dahl, G. Hinton et al., “Deep belief networks for phone recognition,” in NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, vol. 1, no. 9, 2009, p. 39.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, vol. 25, 2012.
B. Hou, C. Yang, B. Ren, and L. Jiao, “Decomposition-feature-iterative clustering-based superpixel segmentation for polsar image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 8, pp. 1239–1243, 2018. DOI: https://doi.org/10.1109/LGRS.2018.2833492
K. Ü. Akdemir and E. Alaybeyoğlu, “Classification of red mullet, bluefish and haddock caught in the Black Sea by ‘single shot multibox detection’,” in 2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2021, pp. 1–4. DOI: https://doi.org/10.1109/INISTA52262.2021.9548488
M. J. Shafiee, S. A. Haider, A. Wong, D. Lui, A. Cameron, A. Modhafar, P. Fieguth, and M. A. Haider, “Apparent ultra-high b-value diffusion weighted image reconstruction via hidden conditional random fields,” IEEE Transactions on Medical Imaging, vol. 34, no. 5, pp. 1111–1124, 2015. DOI: https://doi.org/10.1109/TMI.2014.2376781
M. Luengo-Oroz, E. Faure, B. Lombardot, R. Sance, P. Bourgine, N. Peyrieras, and A. Santos, “Twister segment morphological filtering. A new method for live zebrafish embryos confocal images processing,” in 2007 IEEE International Conference on Image Processing, vol. 5, 2007, pp. V-253–V-256. DOI: https://doi.org/10.1109/ICIP.2007.4379813
W. Yang, L. Zhong, Y. Chen, L. Lin, Z. Lu, S. Liu, Y. Wu, Q. Feng, and W. Chen, “Predicting ct image from mri data through feature matching with learned nonlinear local descriptors,” IEEE Transactions on Medical Imaging, vol. 37, no. 4, pp. 977–987, 2018. DOI: https://doi.org/10.1109/TMI.2018.2790962
G. Bannerjee, U. Sarkar, S. Das, and I. Ghosh, “Artificial intelligence in agriculture: A literature survey,” International Journal of Scientific Research in Computer Science Applications and Management Studies, vol. 7, no. 3, pp. 1–6, 2018.
J. M. Antelis, L. E. Falcón et al., “Spiking neural networks applied to the classification of motor tasks in eeg signals,” Neural Networks, vol. 122, pp. 130–143, 2020. DOI: https://doi.org/10.1016/j.neunet.2019.09.037
Y. Luo, Q. Fu, J. Xie, Y. Qin, G. Wu, J. Liu, F. Jiang, Y. Cao, and X. Ding, “Eeg-based emotion classification using spiking neural networks,” IEEE Access, vol. 8, pp. 46007–46016, 2020. DOI: https://doi.org/10.1109/ACCESS.2020.2978163
J. Wu, Y. Chua, and H. Li, “A biologically plausible speech recognition framework based on spiking neural networks,” in 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018, pp. 1–8. DOI: https://doi.org/10.1109/IJCNN.2018.8489535
J. Wu, E. Yılmaz, M. Zhang, H. Li, and K. C. Tan, “Deep spiking neural networks for large vocabulary automatic speech recognition,” Frontiers in Neuroscience, vol. 14, p. 199, 2020. DOI: https://doi.org/10.3389/fnins.2020.00199
M. J. Shafiee, B. Chywl, F. Li, and A. Wong, “Fast yolo: A fast you only look once system for real-time embedded object detection in video,” arXiv preprint arXiv:1709.05943, 2017. DOI: https://doi.org/10.15353/vsnl.v3i1.171
R. Huang, J. Pedoeem, and C. Chen, “Yolo-lite: a real-time object detection algorithm optimized for non-gpu computers,” in 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018, pp. 2503–2510. DOI: https://doi.org/10.1109/BigData.2018.8621865
W. Lan, J. Dang, Y. Wang, and S. Wang, “Pedestrian detection based on yolo network model,” in 2018 IEEE International Conference on Mechatronics and Automation (ICMA). IEEE, 2018, pp. 1547–1551. DOI: https://doi.org/10.1109/ICMA.2018.8484698
A. Wong, M. Famuori, M. J. Shafiee, F. Li, B. Chwyl, and J. Chung, “Yolo nano: A highly compact you only look once convolutional neural network for object detection,” in 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS). IEEE, 2019, pp. 22–25. DOI: https://doi.org/10.1109/EMC2-NIPS53020.2019.00013
A. T. Azar and S. A. El-Said, “Performance analysis of support vector machines classifiers in breast cancer mammography recognition,” Neural Computing and Applications, vol. 24, pp. 1163–1177, 2014. DOI: https://doi.org/10.1007/s00521-012-1324-4