A Framework to Create a Deep Learning Detector from a Small Dataset: A Case of Parawood Pith Estimation

Wattanapong Kurdthongmee, Korakot Suwannarat, Jeremy Kiplagat

Abstract


Deep learning-based object detectors have been successfully applied across a wide range of application areas. They are highly robust to variations in illumination and to differences among object instances. One weakness of such a detector is that it requires a large training dataset, whose necessary size is difficult to specify in advance, to avoid overfitting and to make the detector deployable. This research proposes a framework for creating a deep learning-based object detector from a limited-sized dataset. The framework trains the detector on the regions surrounding an object, which typically contain more varied features over a larger area than the object itself. A proposed post-processing algorithm then combines the detection results to locate the object. The framework is applied to the problem of wood pith estimation. The detector was created with the YOLO v3 framework, using transfer learning and all default hyperparameters. A wood pith dataset of only 150 images, split 90:10 into training and testing sets, was used. Several experiments were performed to compare the detection results from different ways of preparing the regions surrounding a pith: all surrounding regions, only the close (edge-adjacent) neighbors, and only the diagonal neighbors. The best result shows that the framework outperforms the conventional approach of training the detector directly on the object, achieving approximately twice the detection precision at a comparable relative average error.
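
To make the two steps of the framework concrete, the sketch below (our illustration, not the authors' published code) shows one plausible way to generate neighbor-region bounding boxes from a single pith point annotation and to recover a pith estimate from the detected neighbor boxes. The function names, the fixed 3x3 neighborhood layout, and the centroid rule for combining detections are assumptions made for illustration only.

# Hypothetical sketch of the two framework steps described in the abstract:
# (1) turn a pith point annotation into several "neighbor region" boxes
#     used to train the detector, and
# (2) post-process the detected neighbor boxes back into a pith estimate.
# The names, the 3x3 neighborhood layout, and the centroid rule are assumptions.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)


def neighbor_regions(pith_xy: Tuple[float, float],
                     region_size: float,
                     mode: str = "all") -> List[Box]:
    """Build training boxes for the regions surrounding a pith point.

    mode = "close"    -> the 4 edge-adjacent regions (up/down/left/right)
    mode = "diagonal" -> the 4 diagonal regions
    mode = "all"      -> all 8 surrounding regions
    """
    px, py = pith_xy
    offsets = {
        "close":    [(-1, 0), (1, 0), (0, -1), (0, 1)],
        "diagonal": [(-1, -1), (-1, 1), (1, -1), (1, 1)],
    }
    offsets["all"] = offsets["close"] + offsets["diagonal"]
    half = region_size / 2.0
    boxes = []
    for dx, dy in offsets[mode]:
        cx, cy = px + dx * region_size, py + dy * region_size
        boxes.append((cx - half, cy - half, cx + half, cy + half))
    return boxes


def estimate_pith(detections: List[Box]) -> Tuple[float, float]:
    """Combine detected neighbor boxes into a single pith estimate.

    Here the pith is taken as the centroid of the detected box centres,
    since the surrounding regions should roughly enclose the true pith.
    """
    if not detections:
        raise ValueError("no neighbor regions were detected")
    centres = [((x0 + x1) / 2.0, (y0 + y1) / 2.0)
               for x0, y0, x1, y1 in detections]
    n = len(centres)
    return (sum(x for x, _ in centres) / n,
            sum(y for _, y in centres) / n)

In practice, the region size would be tied to the scale of the stem cross-section in the image, and the combination rule could, for example, weight each detected box by its confidence score instead of using a plain centroid.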

 

DOI: 10.28991/ESJ-2023-07-01-017

Full Text: PDF


Keywords


Object Detection; Wood Pith Detection; YOLO Object Detection; Small Dataset Training.




Copyright (c) 2022 Wattanapong Kurdthongmee