Action and Intention Recognition of Pedestrians in Urban Traffic
2019 (English). In: Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, Institute of Electrical and Electronics Engineers Inc., 2019, p. 676-682. Conference paper, Published paper (Refereed).
Abstract [en]
Action and intention recognition of pedestrians in urban settings are challenging problems for Advanced Driver Assistance Systems as well as for future autonomous vehicles aiming to maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the action and intention of pedestrians in urban traffic. We focus on motion and head orientation to predict whether a pedestrian is about to cross the street or not. The work is based on the Joint Attention for Autonomous Driving (JAAD) dataset, which contains 346 video clips of various traffic scenarios captured with cameras mounted behind the windshield of a car. Accuracies of 72% for head orientation estimation and 85% for motion detection are obtained in our experiments.
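The abstract does not specify how the head-orientation and motion cues are combined, so the following is only a hypothetical illustration of the general idea: per-frame head-orientation and motion features for one pedestrian are fused into a single crossing-intention prediction. All names (`FrameFeatures`, `predict_crossing`), thresholds, and the simple late-fusion rule are assumptions for illustration, not the paper's method.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class FrameFeatures:
    """Per-frame cues for one pedestrian (hypothetical feature set)."""
    looking_at_traffic: bool  # head-orientation estimate: facing oncoming traffic?
    speed: float              # motion magnitude, e.g. pixels per frame


def predict_crossing(frames, look_thresh=0.5, speed_thresh=1.0):
    """Naive late fusion (illustrative thresholds): flag the pedestrian as
    'about to cross' if they frequently look toward the traffic OR are
    already moving at a noticeable speed across the observed frames."""
    look_ratio = mean(1.0 if f.looking_at_traffic else 0.0 for f in frames)
    avg_speed = mean(f.speed for f in frames)
    return look_ratio >= look_thresh or avg_speed >= speed_thresh
```

In practice the paper evaluates several feature extraction methods and machine learning classifiers for each cue; the rule above merely shows how two per-frame signals can be aggregated into one binary intention label.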
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019, p. 676-682
Keywords [en]
Action recognition, Driver assistance, Intention recognition, Pedestrian, Traffic, Automobile drivers, Autonomous vehicles, Feature extraction, Learning algorithms, Machine learning, Telecommunication traffic, Traffic signals, Action and intention, Autonomous driving, Feature extraction methods, Head orientation estimations, Advanced driver assistance systems
National Category
Natural Sciences
Identifiers
URN: urn:nbn:se:ri:diva-38894
DOI: 10.1109/SITIS.2018.00109
Scopus ID: 2-s2.0-85065906502
ISBN: 9781538693858 (print)
OAI: oai:DiVA.org:ri-38894
DiVA, id: diva2:1321989
Conference
14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, 26-29 November 2018
Note
Funding: This work is financed by the SIDUS AIR project of the Swedish Knowledge Foundation under grant agreement number 20140220. Author F. A.-F. also thanks the Swedish Research Council (VR) and Sweden's innovation agency (VINNOVA).
Available from: 2019-06-10. Created: 2019-06-10. Last updated: 2025-09-23. Bibliographically approved.