Implementation of the Convolutional Neural Network Method to Detect the Use of Masks

Arbiati Faizah, Pujo Hari Saputro, Augusta Jannatul Firdaus, Raden Nur Rachman Dzakiyullah

Abstract


Coronavirus disease has affected the world seriously since the end of 2019. Wearing a mask in public is one of the key protective measures, and certain public service providers admit clients only if they wear masks correctly. However, there has been relatively little research into face mask detection based on image processing. Nearly everyone now wears a mask to shield themselves from the COVID-19 pandemic, so monitoring whether people in crowds wear face masks in busy public places, such as malls, museums, and parks, has become increasingly important. Developing an AI approach that determines whether a person is wearing a face mask, and whether they may enter, would significantly assist society. In this article, we use a deep learning model, a Convolutional Neural Network (CNN), built with Keras/TensorFlow and OpenCV. The accuracy obtained from this model is more than 96%.
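The abstract states that the detector is a CNN built with Keras/TensorFlow and OpenCV; the paper's actual architecture is not given here. As a minimal illustration of the core operation such a network applies to an image before classifying it as mask / no-mask, the following is a pure-Python sketch of a single convolution layer with a ReLU activation (the image, kernel, and function names are illustrative, not taken from the paper):

```python
# Minimal 2D "valid" cross-correlation, the core operation of a CNN layer.
# Pure-Python sketch for illustration; the paper itself uses Keras/TensorFlow.

def conv2d(image, kernel):
    """Slide `kernel` over `image` (lists of lists) and return the
    valid-mode feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(fmap):
    """Element-wise ReLU activation, applied between CNN layers."""
    return [[max(0.0, v) for v in row] for row in fmap]

# A 4x4 toy "image" and a 2x2 edge-like kernel.
img = [[1, 2, 0, 1],
       [0, 1, 3, 1],
       [2, 1, 0, 0],
       [1, 0, 1, 2]]
k = [[1, -1],
     [-1, 1]]
feature_map = relu(conv2d(img, k))
# feature_map == [[0, 4, 0], [0, 0, 2], [0, 2, 1]]
```

In a real mask detector, several such convolution layers (with learned kernels) feed pooling and dense layers that output a mask / no-mask probability; OpenCV is typically used to capture frames and crop face regions before they reach the network.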


Keywords


Facemask Detection, CNN, OpenCV, COVID-19









International Journal of Informatics and Information Systems (IJIIS)

ISSN 2579-7069 (Online)
Organized by the Information System Department, Universitas Amikom Purwokerto, Indonesia; Laboratoire Signaux et Systèmes (L2S), Université Paris 13, France; Asosiasi Perguruan Tinggi Informatika dan Ilmu Komputer (APTIKOM); and Bright Publisher
Published by Bright Publisher
Website : ijiis.org or bright-journal.org/ijiis
Email : ijiis@bright-journal.org (paper handling issues)
           taqwa@bright-journal.org (publication issues)
           andhika@bright-journal.org (technical issues)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0