
Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data


dc.contributor Civil and Environmental Engineering
dc.creator Elhenawy, Mohammed
dc.creator Ashqar, Huthaifa I.
dc.creator Masoud, Mahmoud
dc.creator Almannaa, Mohammed H.
dc.creator Rakotonirainy, Andry
dc.creator Rakha, Hesham A.
dc.date 2020-10-27T13:18:15Z
dc.date 2020-10-25
dc.date 2020-10-26T14:25:37Z
dc.date.accessioned 2023-03-01T18:53:02Z
dc.date.available 2023-03-01T18:53:02Z
dc.identifier Elhenawy, M.; Ashqar, H.I.; Masoud, M.; Almannaa, M.H.; Rakotonirainy, A.; Rakha, H.A. Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data. Remote Sens. 2020, 12, 3508.
dc.identifier http://hdl.handle.net/10919/100718
dc.identifier https://doi.org/10.3390/rs12213508
dc.identifier.uri http://localhost:8080/xmlui/handle/CUHPOERS/281682
dc.description As the Autonomous Vehicle (AV) industry rapidly advances, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. Typical approaches to classifying non-motorized road users take significant training time and ignore the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs for different pre-trained Convolutional Neural Network (CNN) classifiers: 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. Moreover, we trained resnet101 and shufflenet for a very short time using one epoch of data and then used them as weak learners, which yielded 98.49% classification accuracy. To the best of our knowledge, the proposed framework outperforms other results in the literature and shows that CNN-TL is promising for VRU classification. Because of its relative straightforwardness, its ability to be generalized and transferred, and its potential for high accuracy, we anticipate that this framework might be able to solve various problems related to signal classification.
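
A minimal sketch of the pipeline the abstract describes, for orientation only: build a recurrence plot from a 1-D smartphone sensor trace via time-delay embedding, then fine-tune a pretrained CNN on the resulting images. The paper does not specify a framework at this level of detail; PyTorch/torchvision is assumed here, and the embedding parameters (dim, delay, eps), the synthetic signal, and the three-class head are illustrative assumptions, not values from the paper.

import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def recurrence_plot(signal, dim=3, delay=2, eps=0.1):
    """Binary recurrence plot of a 1-D signal via time-delay embedding."""
    n = len(signal) - (dim - 1) * delay
    # Each row of emb is one reconstructed state vector.
    emb = np.stack([signal[i * delay : i * delay + n] for i in range(dim)], axis=1)
    # Pairwise Euclidean distances between state vectors, thresholded at eps.
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < eps).astype(np.float32)

# Toy stand-in for one low-power smartphone sensor channel (assumption).
sig = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.05 * np.random.randn(500)
rp = recurrence_plot(sig)

# Resize the RP to 227 x 227 and replicate to 3 channels for AlexNet.
img = torch.from_numpy(rp)[None, None]                          # 1 x 1 x H x W
img = nn.functional.interpolate(img, size=(227, 227), mode="nearest")
img = img.repeat(1, 3, 1, 1)

# Transfer learning: load an ImageNet-pretrained AlexNet, freeze the
# convolutional features, and replace the classifier head for (say) 3
# VRU classes; only the new head would then be trained.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)

logits = model(img)   # forward pass on one RP image
print(logits.shape)   # torch.Size([1, 3])

The same recipe would repeat per backbone (224 × 224 inputs for VGG16 and VGG19), and briefly trained resnet101/shufflenet models could then be combined as weak learners; the training loop itself is omitted here.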
dc.description Published version
dc.format application/pdf
dc.language en
dc.publisher MDPI
dc.rights Creative Commons Attribution 4.0 International
dc.rights http://creativecommons.org/licenses/by/4.0/
dc.subject transportation mode classification
dc.subject vulnerable road users
dc.subject recurrence plots
dc.subject computer vision
dc.subject image classification system
dc.title Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data
dc.title Remote Sensing
dc.type Article - Refereed
dc.type Text
dc.type StillImage


Files in this item

File Size Format
remotesensing-12-03508-v2.pdf 2.409 MB application/pdf
