Classifying Operator Experience from Electric Screwdriving Signals: A BiLSTM-Based Study with External Validation

Kader Nikbay Oylum
Turgay Tugay Bilgin

Abstract

This study presents a deep learning–based approach for objectively classifying operator experience levels (Novice, Intermediate, Expert) from multivariate signals and user interactions recorded during electric screwdriving operations. The dataset comprises 64 participant-specific files, each containing multiple tightening trials. Windowing was performed independently per file, and short segments unsuitable for windowing were excluded, yielding 3,326 time windows (2,958 for training/testing and 368 for external validation). A two-layer Bidirectional LSTM (BiLSTM) architecture was employed and evaluated on both the train–test split and an external validation set constructed from 12 previously unseen files. On the test set, the model achieved 76% overall accuracy with macro-averaged precision/recall/F1 of 77%/76%/76%. Class-wise analysis indicated stronger separability for the Expert class (recall ≈ 84%) and comparatively lower performance for the Intermediate class (recall ≈ 66%). On the external validation set, accuracy was 75.00% with a mean predicted probability of 85.0%, indicating moderate-to-high confidence. The findings show that while BiLSTM provides a solid foundation for time-series classification, its effectiveness may be limited for complex patterns without a convolutional front end.
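
As an illustration only (not the authors' implementation), the pipeline summarized above can be sketched in Python with TensorFlow/Keras. The window length, overlap, channel count, hidden sizes, dropout rate, and training settings below are assumptions, as the abstract does not report them.

# Illustrative sketch of per-file windowing and a two-layer BiLSTM classifier.
# All hyperparameters are assumed values, not those used in the study.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW_LEN = 128   # assumed samples per window
N_CHANNELS = 6     # assumed signal channels (e.g., torque, angle, current)
N_CLASSES = 3      # Novice, Intermediate, Expert

def window_file(signal, window_len=WINDOW_LEN, step=WINDOW_LEN // 2):
    """Split one file's (n_samples, n_channels) array into fixed-length windows.
    Files shorter than one window are skipped, mirroring the exclusion rule;
    the 50% overlap is an assumption."""
    if signal.shape[0] < window_len:
        return np.empty((0, window_len, signal.shape[1]))
    starts = range(0, signal.shape[0] - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    # First BiLSTM returns the full sequence so the second layer can consume it.
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Dropout(0.3),
    # Second BiLSTM collapses each window into a single feature vector.
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X_train: (n_windows, WINDOW_LEN, N_CHANNELS); y_train: integer labels 0-2.
# model.fit(X_train, y_train, validation_split=0.2, epochs=30, batch_size=64)

Class predictions for held-out windows would follow from model.predict(X_val); the maximum softmax value per window corresponds to the kind of mean predicted probability reported above.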

How to Cite

Nikbay Oylum, K., & Bilgin, T. T. (2025). Classifying Operator Experience from Electric Screwdriving Signals: A BiLSTM-Based Study with External Validation. The European Journal of Research and Development, 5(1), 201–221. https://doi.org/10.56038/ejrnd.v5i1.669
