Improved deep reinforcement learning for car-following decision-making
Tongji University, China; Tsinghua University, China.
2023 (English). In: Physica A: Statistical Mechanics and its Applications, ISSN 0378-4371, E-ISSN 1873-2119, Vol. 624, article id 128912. Article in journal (Refereed). Published.
Abstract [en]

Improving the accuracy of car-following (CF) models has attracted much attention in recent years. Although a few studies incorporate deep reinforcement learning (DRL) to describe CF behavior, properly designing the reward function remains an intractable problem. This study improves the deep deterministic policy gradient (DDPG) car-following model with stacked denoising autoencoders (SDAE) and proposes a data-driven reward representation function that quantifies the implicit interaction between the ego vehicle and the preceding vehicle during car-following. The experimental results demonstrate that the DDPG-SDAE model imitates driving behavior well: (1) it validates the effectiveness of the reward representation method with low trajectory deviation; (2) it generalizes across two different trajectory datasets (HighD and SPMD); (3) it adapts to three traffic scenarios clustered by a k-medoids method based on dynamic time warping distance. Compared with recurrent neural networks (RNN) and the intelligent driver model (IDM), the DDPG-SDAE model performs better on the deviation of speed and relative distance. This study demonstrates the merit of a novel reward extraction method that fuses SDAE into the DDPG algorithm and provides inspiration for developing driving decision-making models. © 2023 Elsevier B.V.
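Of the baselines named in the abstract, the intelligent driver model (IDM) is a standard closed-form car-following law. The sketch below is a minimal illustration of the conventional IDM acceleration equation, not the DDPG-SDAE model proposed in the article; the parameter values are illustrative assumptions, not those used in the study.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=33.3,    # desired speed [m/s] (~120 km/h), illustrative
                     T=1.5,      # desired time headway [s]
                     a_max=1.0,  # maximum acceleration [m/s^2]
                     b=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0,     # minimum standstill gap [m]
                     delta=4.0): # acceleration exponent
    """Standard intelligent driver model (IDM) acceleration.

    v   : ego-vehicle speed [m/s]
    gap : bumper-to-bumper distance to the preceding vehicle [m]
    dv  : approach rate, ego speed minus preceding-vehicle speed [m/s]
    """
    # Desired dynamic gap: standstill distance + headway term + braking interaction term
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    # Free-flow term minus interaction term
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Example: following at 25 m/s with a 30 m gap while closing at 2 m/s
print(idm_acceleration(v=25.0, gap=30.0, dv=2.0))
```

In the article's comparison, such a rule-based model serves as a reference against which the learned DDPG-SDAE policy is evaluated on speed and relative-distance deviation.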

Place, publisher, year, edition, pages
Elsevier B.V., 2023. Vol. 624, article id 128912.
Keywords [en]
Car-following model, Deep reinforcement learning, Driving behavior imitation, Stacked denoising autoencoders, Behavioral research, Decision making, Learning systems, Recurrent neural networks, Autoencoders, Denoising, Deterministic policy gradient, Driving behaviour, Policy gradient, Reinforcement learning
National Category
Transport Systems and Logistics
Identifiers
URN: urn:nbn:se:ri:diva-65535
DOI: 10.1016/j.physa.2023.128912
Scopus ID: 2-s2.0-85161640179
OAI: oai:DiVA.org:ri-65535
DiVA, id: diva2:1776492
Note

Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant No. 71971160) and the Shanghai Science and Technology Committee (Grant No. 19210745700).

Available from: 2023-06-28. Created: 2023-06-28. Last updated: 2023-06-28. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Chen, Lei
