Research Article

Classification of Environmental Sounds with Convolutional Neural Networks

Year 2023, Volume: 11 Issue: 2, 468 - 490, 01.06.2023
https://doi.org/10.36306/konjes.1201558

Abstract

The use of sound data is critical for predicting the outcomes of environmental activities and for gathering information about the settings in which those activities take place. Sound data is used to obtain basic information about the functioning of urban activities such as noise pollution, security systems, health care, and local services. In this sense, Environmental Sound Classification (ESC) is becoming critically important. Because of the growing amount of data and the time constraints of analysis, new and powerful artificial intelligence methods that can identify sounds automatically and instantly are needed. Such methods can be developed with Convolutional Neural Network (CNN) models, which have achieved high accuracy rates in other fields. For this reason, this study proposes a new CNN-based method for classifying two different ESC datasets. In this method, the sounds are first converted into image format. Novel CNN models are then designed to classify these sounds in image format, and for each dataset the CNN model with the highest accuracy rate is selected from among the multiple models designed. The datasets used in the study are ESC10 and UrbanSound8K. The sound recordings in these datasets were converted to images of size 32x32x3 and 224x224x3, so that four image-format datasets were obtained in total. The CNN models developed to classify these datasets are named ESC10_ESA32, ESC10_ESA224, URBANSOUND8K_ESA32, and URBANSOUND8K_ESA224, respectively. These models were trained on the datasets using 10-fold cross-validation. The average accuracy rates of the ESC10_ESA32, ESC10_ESA224, URBANSOUND8K_ESA32, and URBANSOUND8K_ESA224 models were 80.75%, 82.25%, 88.60%, and 84.33%, respectively. When these results are compared with other baseline studies on the same datasets in the literature, the proposed models achieve better results.
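As a concrete illustration of the pipeline the abstract describes (audio converted to fixed-size images, a compact CNN per dataset, 10-fold cross-validation), the sketch below shows one way such a workflow can be implemented in Python. This is a hedged reconstruction, not the authors' code: the choice of librosa for log-mel spectrograms, TensorFlow/Keras for the model, and every layer size and hyper-parameter here are assumptions; the actual ESC10_ESA32, ESC10_ESA224, URBANSOUND8K_ESA32, and URBANSOUND8K_ESA224 architectures are defined in the paper itself.

# Minimal sketch of the abstract's pipeline (assumed libraries:
# librosa, TensorFlow/Keras, scikit-learn; not the authors' code).
import numpy as np
import librosa
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def sound_to_image(path, size=32):
    # Load the recording and compute a log-scaled mel spectrogram.
    y, sr = librosa.load(path, sr=22050)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    db = librosa.power_to_db(mel, ref=np.max)
    # Normalize to [0, 1], resize to size x size, and replicate to 3
    # channels so the result matches the 32x32x3 / 224x224x3 format.
    norm = (db - db.min()) / (db.max() - db.min() + 1e-8)
    img = tf.image.resize(norm[..., np.newaxis], (size, size)).numpy()
    return np.repeat(img, 3, axis=-1)  # shape: (size, size, 3)

def build_cnn(input_shape, n_classes):
    # Illustrative small CNN; the paper's models are designed per dataset
    # and will differ in depth, filters, and regularization.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def mean_cv_accuracy(X, y, n_classes, k=10):
    # 10-fold cross-validation; the paper reports the mean fold accuracy.
    accs = []
    folds = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    for tr, te in folds.split(X, y):
        model = build_cnn(X.shape[1:], n_classes)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(X[tr], y[tr], epochs=30, batch_size=32, verbose=0)
        accs.append(model.evaluate(X[te], y[te], verbose=0)[1])
    return float(np.mean(accs))

Note that UrbanSound8K ships with ten predefined folds; whether the paper follows those folds or a random split is not stated in the abstract, so the StratifiedKFold call above is only a placeholder for the fold protocol.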

References

  • [1] S. Chu, S. Narayanan, and C.-C. J. Kuo, "Environmental sound recognition with time–frequency audio features," IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, pp. 1142-1158, 2009.
  • [2] F. Demir, M. Turkoglu, M. Aslan, and A. Sengur, "A new pyramidal concatenated CNN approach for environmental sound classification," Applied Acoustics, vol. 170, p. 107520, 2020.
  • [3] P. Aumond, C. Lavandier, C. Ribeiro, E. G. Boix, K. Kambona, E. D’Hondt, et al., "A study of the accuracy of mobile technology for measuring urban noise pollution in large scale participatory sensing campaigns," Applied Acoustics, vol. 117, pp. 219-226, 2017.
  • [4] J. Cao, M. Cao, J. Wang, C. Yin, D. Wang, and P.-P. Vidal, "Urban noise recognition with convolutional neural network," Multimedia Tools and Applications, vol. 78, pp. 29021-29041, 2019.
  • [5] R. Radhakrishnan, A. Divakaran, and A. Smaragdis, "Audio analysis for surveillance applications," in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005, pp. 158-161.
  • [6] M. Crocco, M. Cristani, A. Trucco, and V. Murino, "Audio surveillance: A systematic review," ACM Computing Surveys (CSUR), vol. 48, pp. 1-46, 2016.
  • [7] P. Laffitte, Y. Wang, D. Sodoyer, and L. Girin, "Assessing the performances of different neural network architectures for the detection of screams and shouts in public transportation," Expert Systems with Applications, vol. 117, pp. 29-41, 2019.
  • [8] H. Li, S. Ishikawa, Q. Zhao, M. Ebana, H. Yamamoto, and J. Huang, "Robot navigation and sound based position identification," in 2007 IEEE International Conference on Systems, Man and Cybernetics, 2007, pp. 2449-2454.
  • [9] R. F. Lyon, "Machine hearing: An emerging field [Exploratory DSP]," IEEE Signal Processing Magazine, vol. 27, pp. 131-139, 2010.
  • [10] S. Chu, S. Narayanan, C.-C. J. Kuo, and M. J. Mataric, "Where am I? Scene recognition for mobile robots using audio features," in 2006 IEEE International Conference on Multimedia and Expo, 2006, pp. 885-888.
  • [11] J. Huang, "Spatial auditory processing for a hearing robot," in Proceedings. IEEE International Conference on Multimedia and Expo, 2002, pp. 253-256.
  • [12] M. Green and D. Murphy, "Environmental sound monitoring using machine learning on mobile devices," Applied Acoustics, vol. 159, p. 107041, 2020.
  • [13] P. Intani and T. Orachon, "Crime warning system using image and sound processing," in 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013), 2013, pp. 1751-1753.
  • [14] A. Agha, R. Ranjan, and W.-S. Gan, "Noisy vehicle surveillance camera: A system to deter noisy vehicle in smart city," Applied Acoustics, vol. 117, pp. 236-245, 2017.
  • [15] S. Ntalampiras, "Universal background modeling for acoustic surveillance of urban traffic," Digital Signal Processing, vol. 31, pp. 69-78, 2014.
  • [16] V. Bisot, R. Serizel, S. Essid, and G. Richard, "Feature learning with matrix factorization applied to acoustic scene classification," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, pp. 1216-1229, 2017.
  • [17] D. Stowell, D. Giannoulis, E. Benetos, M. Lagrange, and M. D. Plumbley, "Detection and classification of acoustic scenes and events," IEEE Transactions on Multimedia, vol. 17, pp. 1733-1746, 2015.
  • [18] P. Dhanalakshmi, S. Palanivel, and V. Ramalingam, "Classification of audio signals using AANN and GMM," Applied Soft Computing, vol. 11, pp. 716-723, 2011.
  • [19] J. Ludena-Choez and A. Gallardo-Antolin, "Acoustic Event Classification using spectral band selection and Non-Negative Matrix Factorization-based features," Expert Systems with Applications, vol. 46, pp. 77-86, 2016.
  • [20] J. Salamon and J. P. Bello, "Unsupervised feature learning for urban sound classification," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 171-175.
  • [21] J. T. Geiger and K. Helwani, "Improving event detection for audio surveillance using Gabor filterbank features," in 2015 23rd European Signal Processing Conference (EUSIPCO), 2015, pp. 714-718.
  • [22] M. Mulimani and S. G. Koolagudi, "Segmentation and characterization of acoustic event spectrograms using singular value decomposition," Expert Systems with Applications, vol. 120, pp. 413-425, 2019.
  • [23] J. Xie and M. Zhu, "Investigation of acoustic and visual features for acoustic scene classification," Expert Systems with Applications, vol. 126, pp. 20-29, 2019.
  • [24] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012, pp. 1097-1105.
  • [25] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei, "ImageNet large scale visual recognition competition 2012 (ILSVRC2012)," www.image-net.org/challenges/LSVRC, p. 41, 2012.
  • [26] K. J. Piczak, "Environmental sound classification with convolutional neural networks," in 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), 2015, pp. 1-6.
  • [27] J. Salamon and J. P. Bello, "Deep convolutional neural networks and data augmentation for environmental sound classification," IEEE Signal Processing Letters, vol. 24, pp. 279-283, 2017.
  • [28] N. Takahashi, M. Gygli, B. Pfister, and L. Van Gool, "Deep convolutional neural networks and data augmentation for acoustic event detection," arXiv preprint arXiv:1604.07160, 2016.
  • [29] Y. Tokozume, Y. Ushiku, and T. Harada, "Learning from between-class examples for deep sound recognition," arXiv preprint arXiv:1711.10282, 2017.
  • [30] V. Boddapati, A. Petef, J. Rasmusson, and L. Lundberg, "Classifying environmental sounds using image recognition networks," Procedia Computer Science, vol. 112, pp. 2048-2056, 2017.
  • [31] S. Li, Y. Yao, J. Hu, G. Liu, X. Yao, and J. Hu, "An ensemble stacked convolutional neural network model for environmental event sound recognition," Applied Sciences, vol. 8, p. 1152, 2018.
  • [32] Y. Su, K. Zhang, J. Wang, and K. Madani, "Environment sound classification using a two-stream CNN based on decision-level fusion," Sensors, vol. 19, p. 1733, 2019.
  • [33] Z. Mushtaq and S.-F. Su, "Environmental sound classification using a regularized deep convolutional neural network with data augmentation," Applied Acoustics, vol. 167, p. 107389, 2020.
  • [34] Z. Mushtaq, S.-F. Su, and Q.-V. Tran, "Spectral images based environmental sound classification using CNN with meaningful data augmentation," Applied Acoustics, vol. 172, p. 107581, 2021.
  • [35] Y. Chen, Q. Guo, X. Liang, J. Wang, and Y. Qian, "Environmental sound classification with dilated convolutions," Applied Acoustics, vol. 148, pp. 123-132, 2019.
  • [36] S. Abdoli, P. Cardinal, and A. L. Koerich, "End-to-end environmental sound classification using a 1D convolutional neural network," Expert Systems with Applications, vol. 136, pp. 252-263, 2019.
  • [37] F. Medhat, D. Chesmore, and J. Robinson, "Masked Conditional Neural Networks for sound classification," Applied Soft Computing, vol. 90, p. 106073, 2020.
  • [38] X. Zhang, Y. Zou, and W. Shi, "Dilated convolution neural network with LeakyReLU for environmental sound classification," in 2017 22nd International Conference on Digital Signal Processing (DSP), 2017, pp. 1-5.
  • [39] M. Lim, D. Lee, H. Park, Y. Kang, J. Oh, J.-S. Park, et al., "Convolutional Neural Network based Audio Event Classification," KSII Transactions on Internet & Information Systems, vol. 12, 2018.
  • [40] E. Akbal, "An automated environmental sound classification methods based on statistical and textural feature," Applied Acoustics, vol. 167, p. 107413, 2020.
  • [41] J. Salamon, C. Jacoby, and J. P. Bello, "A dataset and taxonomy for urban sound research," in Proceedings of the 22nd ACM international conference on Multimedia, 2014, pp. 1041-1044.
  • [42] M. Lim, D. Lee, H. Park, Y. Kang, J. Oh, J.-S. Park, et al., "Convolutional neural network based audio event classification," KSII Transactions on Internet and Information Systems (TIIS), vol. 12, pp. 2748-2760, 2018.
  • [43] A. Mesaros, T. Heittola, and T. Virtanen, "TUT database for acoustic scene classification and sound event detection," in 2016 24th European Signal Processing Conference (EUSIPCO), 2016, pp. 1128-1132.
  • [44] Ö. İnik, "CNN hyper-parameter optimization for environmental sound classification," Applied Acoustics, vol. 202, p. 109168, 2023.
  • [45] Ö. İnik and E. Ülker, "Derin Öğrenme ve Görüntü Analizinde Kullanılan Derin Öğrenme Modelleri," Gaziosmanpaşa Bilimsel Araştırma Dergisi, vol. 6, pp. 85-104, 2017.
  • [46] D. Dev, Deep learning with Hadoop: Packt Publishing Ltd, 2017.
  • [47] K. J. Piczak, "ESC: Dataset for environmental sound classification," in Proceedings of the 23rd ACM international conference on Multimedia, 2015, pp. 1015-1018.
  • [48] C. Sammut and G. I. Webb, Encyclopedia of machine learning: Springer Science & Business Media, 2011.
  • [49] X. Deng, Q. Liu, Y. Deng, and S. Mahadevan, "An improved method to construct basic probability assignment based on the confusion matrix for classification problem," Information Sciences, vol. 340, pp. 250-261, 2016.
  • [50] A. Pillos, K. Alghamidi, N. Alzamel, V. Pavlov, and S. Machanavajhala, "A real-time environmental sound recognition system for the Android OS," Proceedings of Detection and Classification of Acoustic Scenes and Events, 2016.
  • [51] A. Khamparia, D. Gupta, N. G. Nguyen, A. Khanna, B. Pandey, and P. Tiwari, "Sound classification using convolutional neural network and tensor deep stacking network," IEEE Access, vol. 7, pp. 7717-7727, 2019.
  • [52] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 2481-2495, 2017.
  • [53] N. Maxudov, B. Özcan, and M. F. Kıraç, "Scene recognition with majority voting among sub-section levels," in 2016 24th Signal Processing and Communication Application Conference (SIU), 2016, pp. 1637-1640.
  • [54] H. Seker and O. Inik, "CnnSound: Convolutional Neural Networks for the Classification of Environmental Sounds," in 2020 The 4th International Conference on Advances in Artificial Intelligence, 2020, pp. 79-84.

Details

Primary Language: Turkish
Subjects: Engineering
Journal Section: Research Article
Authors

Yalçın Dinçer (ORCID: 0000-0002-9924-5598)

Özkan İnik (ORCID: 0000-0003-4728-8438)

Publication Date: June 1, 2023
Submission Date: November 9, 2022
Acceptance Date: March 16, 2023
Published in Issue: Year 2023, Volume: 11, Issue: 2

Cite

IEEE Y. Dinçer and Ö. İnik, “ÇEVRESEL SESLERİN EVRİŞİMSEL SİNİR AĞLARI İLE SINIFLANDIRILMASI”, KONJES, vol. 11, no. 2, pp. 468–490, 2023, doi: 10.36306/konjes.1201558.