One of the bottlenecks of automated driving technologies is safe and socially acceptable interaction with human-driven vehicles, for example during merging. Improving the interactive capabilities of automated driving requires driver models that accurately predict joint and individual driver behaviour on three levels: high-level decisions, safety margins, and low-level control inputs. Existing driver models typically focus on only one of these aspects; unified models capturing all three are missing, which hinders understanding of the principles that govern human traffic interactions. This, in turn, limits the ability of automated vehicles to resolve merging interactions. Here, we present a communication-enabled interaction model based on risk perception with the potential to capture merging interactions on all three levels. Our model accurately describes human behaviour in a simplified merging scenario, addressing both individual actions (such as velocity adjustments) and joint actions (such as the order of merging). Contrary to other interaction models, our model does not assume that humans are rational, and it explicitly accounts for communication between drivers. Our results demonstrate that communication and risk-based decision-making explain observed human interactions on multiple levels. This explanation improves our understanding of the underlying mechanisms of human traffic interactions and represents a step towards interaction-aware automated driving.
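To make the risk-based decision mechanism concrete, the minimal Python sketch below shows one way such a mechanism could look; the sigmoid risk mapping, the candidate accelerations, and all thresholds are illustrative assumptions, not the model's actual equations:

```python
import numpy as np

def perceived_risk(gap, closing_speed, tau=1.5, steepness=2.0):
    # Hypothetical risk mapping: risk rises as the predicted time until
    # the gap closes drops below a comfort threshold tau (in seconds).
    # The time is capped for numerical stability when the gap is opening.
    time_to_closure = min(gap / max(closing_speed, 1e-6), 60.0)
    return 1.0 / (1.0 + np.exp(steepness * (time_to_closure - tau)))

def choose_acceleration(gap, closing_speed,
                        candidates=(-2.0, 0.0, 2.0), horizon=2.0):
    # Risk-based decision: among a few candidate constant accelerations,
    # pick the one whose predicted state at the end of the horizon has
    # the lowest perceived risk (a one-step lookahead, not a rational
    # utility maximization).
    def risk_after(a):
        new_closing = closing_speed + a * horizon
        new_gap = gap - closing_speed * horizon - 0.5 * a * horizon ** 2
        return perceived_risk(max(new_gap, 0.0), new_closing)
    return min(candidates, key=risk_after)
```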
Traffic interactions between merging and highway vehicles are a major topic of research, yielding many empirical studies and models of driver behaviour. Most of these studies use naturalistic data. Although such data provide insight into human gap acceptance and traffic-flow effects, they obscure the operational inputs of the interacting drivers. Moreover, researchers have no control over the vehicle kinematics (i.e., positions and velocities) at the start of the interactions, which makes the relationship between the initial kinematics and the outcome of the interaction difficult to investigate. To address these gaps, we conducted an experiment in a coupled driving simulator using a simplified, top-down-view merging scenario with two vehicles. We found that the kinematics can explain the outcome (i.e., which driver merges first) and the duration of the merging conflict. Furthermore, our results show that drivers combine key decision moments with constant acceleration inputs (intermittent piecewise-constant control) during merging, indicating that they do not continuously optimize their expected utility. These results therefore advocate the development of interaction models based on intermittent piecewise-constant control. We hope our work contributes to this development and to the fundamental knowledge of interactive driver behaviour.
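The observed intermittent piecewise-constant control pattern can be sketched as follows; the time-to-collision trigger and its threshold are illustrative assumptions, not values fitted to the experimental data:

```python
def simulate_merge(own_speed, other_speed, gap, dt=0.1, t_end=10.0,
                   ttc_threshold=3.0):
    # Illustrative reproduction of the observed control pattern: hold a
    # constant acceleration, and pick a new one only at key decision
    # moments, here triggered by a time-to-collision threshold.
    acceleration = 0.0
    log = []
    for step in range(int(t_end / dt)):
        closing_speed = own_speed - other_speed
        ttc = gap / closing_speed if closing_speed > 0 else float('inf')
        if ttc < ttc_threshold:                 # key decision moment
            acceleration = -1.5                 # new constant input (brake)
        elif closing_speed <= 0 and acceleration < 0:
            acceleration = 0.0                  # conflict resolved: hold speed
        own_speed = max(own_speed + acceleration * dt, 0.0)
        gap -= closing_speed * dt
        log.append((step * dt, gap, own_speed, acceleration))
    return log
```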
A major challenge for autonomous vehicles is handling interactions with human-driven vehicles, for example in highway merging. A better understanding and computational modelling of human interactive behaviour could help address this challenge. However, existing modelling approaches predominantly neglect communication between drivers and assume that the modelled driver responds to the other driver but does not actively influence their behaviour. Here, we argue that addressing these two limitations is crucial for the accurate modelling of interactions, and we propose a new computational framework that addresses both. Similar to game-theoretic approaches, we model a joint interactive system rather than an isolated driver who only responds to their environment. Contrary to game theory, our framework explicitly incorporates communication between the two drivers and bounded rationality in each driver’s behaviour. We demonstrate our model’s potential in a simplified merging scenario with two vehicles, illustrating that it generates plausible interactive behaviour (e.g., aggressive and conservative merging). Furthermore, human-like gap-keeping behaviour emerged in a car-following scenario directly from risk perception, without any explicit implementation of time or distance gaps in the model’s decision-making. These results suggest that our framework is a promising approach to interaction modelling that can support the development of interaction-aware autonomous vehicles.
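A minimal sketch of how stable gap-keeping can emerge from risk perception alone, without any explicit desired gap; the exponential risk function and all parameter values are illustrative assumptions, not the framework's actual formulation:

```python
import numpy as np

def follower_step(gap, v_follow, v_lead, dt=0.1,
                  risk_scale=2.0, comfort_risk=0.3, gain=1.5):
    # Hypothetical risk perception: risk decays with the time gap to the
    # lead vehicle. The follower accelerates when perceived risk is below
    # its comfort level and brakes when it is above; no desired time or
    # distance gap is specified anywhere in the model.
    time_gap = gap / max(v_follow, 0.1)
    risk = np.exp(-time_gap / risk_scale)
    acceleration = gain * (comfort_risk - risk)
    v_follow = max(v_follow + acceleration * dt, 0.0)
    gap += (v_lead - v_follow) * dt
    return gap, v_follow

gap, v = 30.0, 22.0
for _ in range(3000):  # 300 s of simulated car following
    gap, v = follower_step(gap, v, v_lead=20.0)

# A stable time gap emerges where risk equals the comfort level:
# -risk_scale * ln(comfort_risk) ≈ 2.4 s with these illustrative values.
print(f"emergent time gap: {gap / v:.2f} s")
```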
With the rapid development of automated driving systems, human drivers will soon have to share the road with, and interact with, autonomous vehicles (AVs). To design AVs that can be safely introduced into our mixed traffic, research into human-AV interaction is needed. Driving simulators are invaluable tools for this research; however, existing driving simulators are expensive, implementing new functionalities (e.g., AV control algorithms) in them can be difficult, and setting up new experiments can be tedious. To address these issues, we present JOAN, a framework for human-AV interaction experiments. JOAN connects to the open-source AV-simulation program CARLA and enables quick and easy implementation of simple driving experiments. It supports experiments in VR, a variety of human input devices, and haptic feedback. JOAN is easy to use and flexible, and it enables researchers to design, store, and perform human-factors experiments without writing a single line of code.
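For context, the sketch below uses CARLA's standard Python API to connect to a simulator and spawn a controlled vehicle; it illustrates only the layer JOAN builds on, not JOAN's own interface (host and port are the CARLA defaults):

```python
import carla  # CARLA's official Python client library

# Connect to a running CARLA server (default host and port).
client = carla.Client('localhost', 2000)
client.set_timeout(5.0)
world = client.get_world()

# Spawn a vehicle at the first predefined spawn point of the map.
blueprint = world.get_blueprint_library().filter('vehicle.*')[0]
vehicle = world.spawn_actor(blueprint, world.get_map().get_spawn_points()[0])

# Apply a simple open-loop control input.
vehicle.apply_control(carla.VehicleControl(throttle=0.4, steer=0.0))
```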
Recently, multiple naturalistic traffic datasets of human-driven trajectories have been published (e.g., highD, NGSIM, and pNEUMA). These datasets have been used in studies that investigate variability in human driving behavior, for example, for scenario-based validation of autonomous vehicle (AV) behavior, for modeling driver behavior, or for validating driver models. Thus far, these studies have focused on variability at the operational level (e.g., velocity profiles during a lane change), not at the tactical level (i.e., whether or not to change lanes). Investigating the variability at both levels is necessary to develop driver models and AVs that include multiple tactical behaviors. To expose multi-level variability, the human responses to the same traffic scene can be investigated. However, no method exists to automatically extract similar scenes from datasets. Here, we present a four-step extraction method that uses the Hausdorff distance, a mathematical distance metric for sets. We performed a case study on the highD dataset, which showed that the method is practically applicable. The human responses to the selected scenes exposed variability at both the tactical and operational levels. With this new method, the variability in operational and tactical human behavior can be investigated without the need for costly and time-consuming driving-simulator experiments.
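The core of the method, comparing two traffic scenes with the symmetric Hausdorff distance, can be sketched with SciPy; representing a scene as a plain set of (x, y, velocity) points is an illustrative simplification of the full four-step method:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def scene_distance(scene_a, scene_b):
    # A scene is a set of vehicle states, here rows of (x, y, velocity).
    # The symmetric Hausdorff distance is the larger of the two directed
    # distances, so it is small only when every vehicle in one scene has
    # a close counterpart in the other.
    return max(directed_hausdorff(scene_a, scene_b)[0],
               directed_hausdorff(scene_b, scene_a)[0])

scene_a = np.array([[0.0, 0.0, 25.0], [40.0, 3.5, 28.0]])
scene_b = np.array([[1.0, 0.0, 24.5], [42.0, 3.5, 27.0]])
print(scene_distance(scene_a, scene_b))  # small value -> similar scenes
```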
A major challenge for autonomous vehicles is interacting with other traffic participants safely and smoothly. A promising approach to handling such traffic interactions is to equip autonomous vehicles with interaction-aware controllers (IACs). These controllers predict, based on a driver model, how surrounding human drivers will respond to the autonomous vehicle’s actions. However, the predictive validity of the driver models used in IACs is rarely validated, which can limit the interactive capabilities of IACs outside the simple simulated environments in which they are demonstrated. In this article, we argue that, besides evaluating the interactive capabilities of IACs, their underlying driver models should be validated on natural human driving behavior. We propose a workflow for this validation that includes scenario-based data extraction and a two-stage (tactical/operational) evaluation procedure based on the human-factors literature. We demonstrate this workflow in a case study on an inverse-reinforcement-learning-based driver model replicated from an existing IAC. This model showed the correct tactical behavior in only 40% of the predictions, and its operational behavior was inconsistent with observed human behavior. The case study illustrates that a principled evaluation workflow is useful and needed. We believe that our workflow will support the development of appropriate driver models for future automated vehicles.
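A minimal sketch of the two-stage evaluation; the specific metrics (decision accuracy at the tactical stage, trajectory RMSE at the operational stage) are illustrative choices, not the only ones the workflow admits:

```python
import numpy as np

def evaluate_driver_model(predictions, ground_truth):
    # Stage 1 (tactical): fraction of predictions with the correct
    # discrete decision, e.g. merging before vs. behind the other vehicle.
    correct = [p['decision'] == g['decision']
               for p, g in zip(predictions, ground_truth)]
    tactical_accuracy = float(np.mean(correct))
    # Stage 2 (operational): trajectory error (RMSE), computed only for
    # predictions with the correct tactical decision, so that the two
    # stages remain interpretable in isolation.
    errors = [np.sqrt(np.mean((np.asarray(p['trajectory'])
                               - np.asarray(g['trajectory'])) ** 2))
              for p, g, ok in zip(predictions, ground_truth, correct) if ok]
    operational_rmse = float(np.mean(errors)) if errors else float('nan')
    return tactical_accuracy, operational_rmse
```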
Human highway-merging behavior is an important aspect of developing autonomous vehicles (AVs) that can safely and successfully interact with other road users. To design safe and acceptable human-AV interactions, the mechanisms underlying human-human interactive behavior need to be understood. These mechanisms can be exposed and studied in controlled driving-simulator experiments. Until now, however, such human-factors merging experiments have focused on aspects of the behavior of a single driver (e.g., gap acceptance) rather than on the dynamics of the interaction. Furthermore, existing experimental scenarios and data-analysis tools (e.g., concepts like time-to-collision) are insufficient for analyzing human-human interactive merging behavior. To facilitate human-factors research on merging interactions, we propose an experimental framework consisting of a general simplified merging scenario and a set of three analysis tools: (1) a visual representation that captures the combined behavior of two participants, and the safety margins they maintain, in a single plot; (2) a signal over time that describes the level of conflict; and (3) a metric, the conflict resolution time, that describes the amount of time needed to resolve the merging conflict. In a case study with 18 participants, we applied the proposed framework and analysis tools in a top-down-view driving simulator in which two human participants can interact. The results show that the proposed scenario can expose diverse behaviors under different conditions, and we demonstrate that the novel visual representation, conflict resolution time, and conflict signal are valuable tools for comparing human behavior between conditions. With its simplified merging scenario and analysis tools, the proposed experimental framework can therefore be a valuable asset when developing driver models that describe interactive merging behavior and when designing AVs that interact with humans.
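A simplified sketch of the conflict signal and the conflict resolution time; defining conflict through overlapping predicted arrival times at the merge point is an illustrative reduction of the framework's actual definitions:

```python
import numpy as np

def conflict_signal(positions_a, velocities_a, positions_b, velocities_b,
                    merge_point, margin=1.0):
    # Simplified conflict level over time: 1 whenever the predicted
    # arrival times of the two vehicles at the merge point lie within a
    # safety margin (in seconds) of each other, 0 otherwise.
    eta_a = (merge_point - positions_a) / np.maximum(velocities_a, 0.1)
    eta_b = (merge_point - positions_b) / np.maximum(velocities_b, 0.1)
    return (np.abs(eta_a - eta_b) < margin).astype(float)

def conflict_resolution_time(timestamps, signal):
    # Time from the start of the interaction until the conflict level
    # stays at zero for the remainder of the recording.
    in_conflict = np.nonzero(signal > 0)[0]
    return timestamps[in_conflict[-1]] - timestamps[0] if in_conflict.size else 0.0
```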
In recent years, multiple datasets containing real-world traffic recordings of human-driven trajectories have been made available to researchers, among them the highD, pNEUMA, and NGSIM datasets. TraViA, an open-source Traffic data Visualization and Annotation tool, was created to provide a single environment for working with data from these three datasets. Combining the data in a single visualization tool enables researchers to easily study data from all sources. TraViA was designed so that it can easily be extended to visualize data from other datasets and so that the specific needs of research projects can easily be implemented.
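As an illustration of the kind of data TraViA visualizes, the sketch below loads one vehicle's trajectory from a highD tracks file with pandas; the file name is a placeholder, the column names follow highD's published format, and this is not TraViA's own API:

```python
import pandas as pd

# Placeholder path; highD tracks files are named like '01_tracks.csv'.
tracks = pd.read_csv('01_tracks.csv')

# Reconstruct the trajectory of a single vehicle over time.
vehicle = tracks[tracks['id'] == 1]
trajectory = vehicle[['frame', 'x', 'y', 'xVelocity']].sort_values('frame')
print(trajectory.head())
```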
Exoskeletons are a promising technology for enabling individuals with mobility limitations to walk again. As the 2016 Cybathlon illustrated, however, the community has a considerable way to go before exoskeletons have the capabilities necessary to be incorporated into daily life. While most exoskeletons power only hip and knee flexion, Team Institute for Human and Machine Cognition (IHMC) presents a new exoskeleton, Mina v2, which adds powered ankle dorsi/plantar flexion. As our entry to the 2016 Cybathlon Powered Exoskeleton Competition, Mina v2’s performance allowed us to explore the effectiveness of its powered ankle compared to other powered exoskeletons for pilots with paraplegia. We designed our gaits to incorporate powered ankle plantar flexion to help improve mobility, which allowed our pilot to navigate the given Cybathlon tasks quickly, including those that required ascending movements, and to reliably achieve average, conservative walking speeds of 1.04 km/h (0.29 m/s). This enabled our team to place second overall in the Powered Exoskeleton Competition at the 2016 Cybathlon.