Choi Wansuk, Heo Seoyoon
Department of Physical Therapy, International University of Korea, Jinju 52833, Korea.
Department of Occupational Therapy, School of Medical and Health Science, Kyungbok University, Namyangju-si 12051, Korea.
Healthcare (Basel). 2021 Nov 18;9(11):1579. doi: 10.3390/healthcare9111579.
The purpose of this study was to classify upper limb tension test (ULTT) videos through transfer learning with pre-trained deep learning models and to compare the performance of those models. We conducted transfer learning by incorporating a pre-trained convolutional neural network (CNN) model into a deep learning pipeline written in Python. Videos were sourced from YouTube, and 103,116 frames converted from the video clips were analyzed. The modeling implementation applied, in sequence: importing the required modules, performing the data preprocessing needed for training, defining the model, compiling it, creating the model, and fitting it. The models compared were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. The models were trained in a high-performance computing environment, and validation accuracy and validation loss were measured as comparative performance indicators. The Xception, InceptionV3, and DenseNet201 models yielded relatively low validation loss and high validation accuracy and were rated as excellent compared with the other models. By contrast, VGG16, VGG19, and ResNet101 showed relatively high validation loss and low validation accuracy. The gap between validation accuracy and validation loss was narrow for the Xception, InceptionV3, and DenseNet201 models. This study suggests that training with transfer learning can classify ULTT videos and that performance differs between models.
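The modeling sequence described above (import modules, preprocess, define, compile, create, fit) can be sketched in TensorFlow/Keras, which provides the pre-trained backbones the study compares. This is a minimal illustration, not the authors' code: the class count, input size, and classifier head are assumptions, and `weights=None` is used here so the sketch runs offline, whereas actual transfer learning would load `weights="imagenet"`.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

NUM_CLASSES = 4          # hypothetical number of ULTT categories
IMG_SIZE = (150, 150)    # assumed frame size after preprocessing

# Pre-trained backbone without its ImageNet classifier head.
# weights=None keeps this sketch offline-runnable; for transfer
# learning one would pass weights="imagenet" instead.
base = Xception(weights=None, include_top=False,
                input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze for feature extraction;
                         # unfreeze upper layers to fine-tune

# New classification head for the video-frame classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch standing in for preprocessed video frames;
# model.fit(frames, labels, validation_data=...) would follow.
frames = np.random.rand(2, 150, 150, 3).astype("float32")
preds = model.predict(frames, verbose=0)
print(preds.shape)  # one softmax distribution per frame
```

Swapping `Xception` for `InceptionV3`, `DenseNet201`, etc. from `tensorflow.keras.applications` reproduces the comparative setup, since all backbones expose the same `include_top`/`weights` interface.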