An Improved Multimodal Trajectory Prediction Method Based on Deep Inverse Reinforcement Learning
2022 (English). In: Electronics, E-ISSN 2079-9292, Vol. 11, no. 24, article id 4097. Article in journal (Refereed). Published.
Abstract [en]
With the rapid development of artificial intelligence technology, deep learning methods have been introduced for vehicle trajectory prediction in the Internet of Vehicles, since they provide relatively accurate predictions, which is one of the critical links in guaranteeing safety in distributed mixed-driving scenarios. To further improve prediction accuracy by making full use of complex traffic scenes, an improved multimodal trajectory prediction method based on deep inverse reinforcement learning is proposed. First, a fused dilated convolution module is introduced into the backbone of an existing multimodal trajectory prediction network to better extract raster features. Then, the reward update policy with inferred goals is improved by learning the state rewards of goals and paths separately instead of the original complex rewards, which reduces the need for predefined goal states. Furthermore, a correction factor is introduced into the existing trajectory generator module, which yields more diverse trajectories by penalizing trajectories with little mutual difference. Extensive experiments on a popular public dataset indicate that the predictions of the proposed method fit the structure of the given traffic scenario better over a long-term prediction horizon, which verifies the effectiveness of the proposed method. © 2022 by the authors.
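Two of the mechanisms named in the abstract are concrete enough to sketch. First, the fused dilated convolution module: the snippet below is a minimal PyTorch-style illustration assuming a parallel-branch design fused by concatenation; the branch count, dilation rates, and 1x1 fusion layer are assumptions for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class FusedDilatedConvBlock(nn.Module):
    """Parallel dilated convolutions fused by concatenation.

    Hypothetical sketch: layer sizes, dilation rates, and the fusion
    scheme are assumptions, not the paper's exact architecture.
    """

    def __init__(self, in_channels: int, out_channels: int,
                 dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = dilation keeps
        # the spatial size of the rasterized scene map unchanged.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        ])
        # A 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(len(dilations) * out_channels,
                              out_channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(multi_scale))

# Example: a batch of 3-channel rasterized scene images.
feats = FusedDilatedConvBlock(3, 32)(torch.randn(8, 3, 128, 128))
print(feats.shape)  # torch.Size([8, 32, 128, 128])
```

Parallel dilations enlarge the receptive field over the rasterized scene without downsampling, which is the usual motivation for dilated convolutions in map-based feature extractors.

Second, the correction factor that penalizes trajectories with little difference can be read as a diversity-aware selection rule. The greedy sketch below, with an assumed exponential distance penalty and hypothetical names (select_diverse, alpha), is one plausible form; the paper's exact correction factor may differ.

```python
import numpy as np

def select_diverse(trajectories, scores, k=6, alpha=1.0):
    """Greedily pick k trajectories, down-weighting candidates that
    lie close to trajectories already chosen.

    trajectories: (N, T, 2) array of candidate (x, y) paths
    scores: (N,) array of model scores (higher is better)
    alpha: strength of the diversity penalty (an assumed form)
    """
    chosen = []
    remaining = list(range(len(trajectories)))
    while remaining and len(chosen) < k:
        best, best_val = None, -np.inf
        for i in remaining:
            # Penalty grows as the candidate approaches a selected path.
            penalty = sum(
                np.exp(-np.mean(np.linalg.norm(
                    trajectories[i] - trajectories[j], axis=-1)))
                for j in chosen
            )
            val = scores[i] - alpha * penalty
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        remaining.remove(best)
    return chosen
```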
Place, publisher, year, edition, pages
MDPI, 2022. Vol. 11, no. 24, article id 4097
Keywords [en]
dilated convolution, maximum entropy inverse reinforcement learning (MaxEnt IRL), multimodal trajectory prediction, rasterization
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:ri:diva-62579
DOI: 10.3390/electronics11244097
Scopus ID: 2-s2.0-85144907401
OAI: oai:DiVA.org:ri-62579
DiVA, id: diva2:1729412
Note
Funding: This research was funded by the National Key R&D Program of China (2019YFE0108300), the National Natural Science Foundation of China (62001058, 52172379), the Fundamental Research Funds for the Central Universities (300102241201, 300102242901, 300102242806), and the Swedish Innovation Agency VINNOVA (2019-03418).
Available from: 2023-01-20. Created: 2023-01-20. Last updated: 2023-05-25. Bibliographically approved.