EWU Institutional Repository

Analysis of Real Time Face Mask Detection Using Transfer Learning Method

dc.contributor.author Tabassum, Syeda Fariha
dc.contributor.author Yasmin, Nilufa
dc.date.accessioned 2022-10-06T05:27:45Z
dc.date.available 2022-10-06T05:27:45Z
dc.date.issued 2022-06-06
dc.identifier.uri http://dspace.ewubd.edu:8080/handle/123456789/3746
dc.description This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Information and Communication Engineering at East West University, Dhaka, Bangladesh en_US
dc.description.abstract Wearing a mask is one of the non-pharmaceutical strategies that can be used to reduce the principal source of SARS-CoV-2 droplets ejected by an infected person. Regardless of debates over medical resources and mask varieties, many countries require the public use of masks that cover the nose and mouth. A pre-trained Convolutional Neural Network (CNN) is used as a feature extractor for the images. CNNs are a kind of deep neural network that can detect and categorize certain characteristics in images, and they are commonly employed for image analysis. To make the method as accurate as possible, the base Convolutional Neural Network (CNN) model is developed using TensorFlow, Keras, Scikit-learn, and OpenCV. Pre-processing, training a CNN, and real-time classification are the three parts of the proposed study. Several pre-trained deep Convolutional Neural Networks (CNNs) were trained and validated using the transfer learning approach and image augmentation. In this paper, we propose a model that detects masked faces of individuals in real time. Face mask detection algorithms are a subset of object detection algorithms, which are used to detect objects in images. Among the numerous object detection approaches, deep learning outperforms traditional machine learning algorithms in face mask detection because of its superior feature extraction capacity. This paper presents a transfer learning approach for mask detection. Three models (AlexNet, MobileNetV2, VGG-16) are used for face mask detection, and each achieves a different accuracy; moreover, applying the same model in different ways also yields different accuracies. The validation accuracy of VGG-16 is 96.67%, that of MobileNetV2 is 98.50%, and that of the proposed model is 98%. AlexNet has a validation accuracy of 94.26%, but when the model is fitted with augmentation the accuracy rises to 97%; when the learning rate and batch size are also changed along with the fitting procedure, the accuracy reaches 98%, the highest obtained with AlexNet. For MobileNetV2, the F1 score is 0.99 (with mask) and 0.98 (without mask), recall is 1.0 (with mask) and 0.99 (without mask), and precision is 0.98 (with mask) and 0.99 (without mask). For VGG-16, the F1 score is 0.44 (with mask) and 0.45 (without mask), precision is 0.45, and recall is 0.44 (with mask) and 0.46 (without mask). For AlexNet, the F1 score is 0.94 for both classes, recall is 0.96 (with mask) and 0.92 (without mask), and precision is 0.92 (with mask) and 0.96 (without mask). Because the accuracy of AlexNet changes with the way the algorithm is applied, its precision, recall, and F1 score also differ for each configuration. A JavaScript API facilitates camera access for real-time face mask detection: because Google Colab runs in a web browser, it cannot access local hardware such as a camera without an API, and this is handled here in code. (A minimal illustrative sketch of the transfer-learning setup appears after this record.) en_US
dc.language.iso en_US en_US
dc.publisher East West University en_US
dc.relation.ispartofseries ;ECE00248
dc.subject Real Time Face Mask Detection, Transfer Learning Method en_US
dc.title Analysis of Real Time Face Mask Detection Using Transfer Learning Method en_US
dc.type Thesis en_US
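
Note: the following is a minimal Keras/TensorFlow sketch of the kind of transfer-learning pipeline the abstract describes: a pre-trained MobileNetV2 used as a frozen feature extractor, on-the-fly image augmentation, and a binary with-mask / without-mask classification head. The dataset paths, augmentation settings, and hyperparameters (learning rate, batch size, epochs) are illustrative assumptions, not the thesis's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)
BATCH_SIZE = 32

# Assumed dataset layout: dataset/train/<class>/ and dataset/val/<class>/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE)

# Image augmentation applied on the fly during training.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Pre-trained MobileNetV2 backbone, frozen so only the new head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # 1 = with mask, 0 = without mask
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

Freezing the backbone keeps training fast and is the standard starting point for transfer learning; the VGG-16 and AlexNet variants reported in the abstract would follow the same pattern with a different backbone, and changing the learning rate, batch size, or augmentation, as the abstract notes, shifts the resulting accuracy.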

