It is currently unknown how automated vehicle platoons will be perceived by other road users in their vicinity. This study explores how drivers of manually operated passenger cars interact with automated passenger car platoons while merging onto a highway, and how different inter-vehicular gaps between the platooning vehicles affect their experience and safety. The study was conducted in a driving simulator and involved 16 drivers of manually operated cars. Our results show that the drivers found the interactions mentally demanding, unsafe, and uncomfortable. They commonly expected that the platoon would adapt its behavior to accommodate a smooth merge. They also expressed a need for additional information about the platoon to more easily anticipate its behavior and avoid cutting in. This was, however, affected by the gap size; larger gaps (30 and 42.5 m) yielded a better experience, more frequent cut-ins, and fewer crashes than the shorter gaps (15 and 22.5 m). One conclusion is that a short gap, as well as external human–machine interfaces (eHMI), might be used to communicate the platoon's intent to "stay together", which in turn might prevent drivers from cutting in. Conversely, if the goal is to facilitate frequent, safe, and pleasant cut-ins, gaps larger than 22.5 m may be suitable. To thoroughly inform such design trade-offs, we call for more research on this topic. © 2021 The Author(s)
There is a growing body of research on interaction between automated vehicles and other road users in their vicinity. To facilitate such interactions, researchers and designers have explored a range of designs, and this line of work has yielded several concepts of external Human-Machine Interfaces (eHMI) for vehicles. A review of the literature and media reveals that descriptions of these interfaces often lack fidelity or detail about their functionality in specific situations, which makes the originating concepts challenging to understand. There is also no universally shared understanding of the various dimensions of a communication interface, which has impeded consistent and coherent treatment of the different aspects of such interface concepts. In this paper, we present a unified taxonomy that allows systematic comparison of eHMIs across 18 dimensions, covering their physical characteristics and communication aspects from the perspective of human factors and human-machine interaction. We analyzed and coded 70 eHMI concepts according to this taxonomy to portray the state of the art and highlight the relative maturity of different contributions. The results point to a number of unexplored research areas that could inspire future work. Additionally, we believe that our proposed taxonomy can serve as a checklist for user interface designers and researchers when developing their interfaces. © 2020 The Authors
Autonomous vehicles are becoming a reality with great potential. However, persons with blindness, deafblindness, and deafness, who usually receive information and guidance from the driver, could miss information when travelling in an autonomous car. In this case study, 15 people with hearing and vision impairments explored and compared trips with and without vibrotactile guidance (using Ready-Ride or Ready-Move) in a simulated autonomous vehicle in real traffic (using a Wizard-of-Oz method). The study investigated whether a vibrotactile aid could enable persons with blindness, deafblindness, and deafness to use autonomous vehicles. Different phases of a trip (before, during, and after) were analysed. The study shows that people with functional impairments such as blindness, deafblindness, and deafness can perform trips independently if given information adapted to their needs through auditory, tactile, or visual information channels. Without an additional communication aid, such as a vibrotactile guidance aid covering all phases of the trip, travel would be difficult for the target groups, especially for those with blindness. In all rides with the simulated autonomous car (Wizard-of-Oz setup) without vibrotactile guidance, the driver or assistant had to intervene in at least one phase of the trip for the research participants with blindness to complete the trip and continue the study. The study also highlights the usability of the vibrotactile guidance aid and identifies areas in need of improvement.
Automated driving research over the past decades has mostly focused on highway environments. Recent technological developments have prompted researchers and manufacturers to look ahead to introducing automated driving in cities. The current position paper examines this challenge from the viewpoint of scientific experts. Sixteen Human Factors researchers were interviewed about their personal perspectives on automated vehicles (AVs) and their interaction with vulnerable road users (VRUs) in the future urban environment. Aspects such as smart infrastructure, external human-machine interfaces (eHMIs), and the potential of augmented reality (AR) were addressed during the interviews. The interviews showed that the researchers believed that fully autonomous vehicles will not be introduced in the coming decades and that intermediate levels of automation, specific AV services, or shared control will be used instead. The researchers foresaw a large role for smart infrastructure and expressed a need for AV-VRU segregation, but were concerned about the corresponding costs and maintenance requirements. The majority indicated that eHMIs will enhance future AV-VRU interaction, but they noted that implicit communication will remain dominant and advised against text-based and instructive eHMIs. AR was commended for its potential in assisting VRUs, but given the technological challenges, its use, for the time being, was believed to be limited to scientific experiments. The present expert perspectives may be instrumental to various stakeholders and researchers concerned with the relationship between VRUs and AVs in future urban traffic. © 2020 The Authors