Dagstuhl Seminar 22102
Computational Models of Human-Automated Vehicle Interaction
(March 6 – 11, 2022)
Organizers
- Martin Baumann (Universität Ulm, DE)
- Shamsi Tamara Iqbal (Microsoft - Redmond, US)
- Christian P. Janssen (Utrecht University, NL)
- Antti Oulasvirta (Aalto University, FI)
Contact
- Michael Gerke (for scientific matters)
- Jutka Gasiorowski (for administrative matters)
Impacts
- Modeling human road crossing decisions as reward maximization with visual perception limitations - Wang, Yueyang; Srinivasan, Aravinda Ramakrishnan; Jokinen, Jussi P. P.; Oulasvirta, Antti; Markkula, Gustav - Cornell University: arXiv.org, 2023. 6 pp.
- Computational Cognitive Models for Human-Automated Vehicle Interaction: special issue - Baumann, Martin; Oulasvirta, Antti; Janssen, Christian P. - Amsterdam: Elsevier, 2024.
Program
This is the executive summary of Dagstuhl Seminar 22102: Computational Models of Human-Automated Vehicle Interaction, which took place March 6-11, 2022, in hybrid format. The executive summary first recapitulates the motivation of the seminar, then gives an overview of the broad challenges that were discussed, and finally presents the results of the seminar. As this is only a summary, more detail on every item and result can be found in the other parts of this report, to which we refer throughout.
It has been a fruitful meeting that sparked many research ideas. We want to thank all the attendees for their participation and all the input they generated. We hope that this report is of value to the community, and we can't wait to see what further results follow from the discussions that started at this seminar!
Christian Janssen, Martin Baumann, Antti Oulasvirta, and Shamsi Iqbal (organizers)
Computational Models of Human-Automated Vehicle Interaction: Summary of the field
The capabilities of automated vehicles are rapidly increasing and are changing human interaction considerably (e.g., [4, 6, 29]). Despite this technological progress, the path to fully self-driving vehicles without any human intervention is long, and for the foreseeable future human interaction with automated vehicles is still needed (e.g., [15, 22, 29, 37, 48, 47]). The principles of human-automation interaction also guide the future outlook of the European Commission [13, 14].
Human-automated vehicle interaction can take at least two forms. One form is a partnership, in which the human and the automated vehicle both contribute in parallel to the control of the vehicle. Another form is transitions of control, where the automated system at times takes over full control of the vehicle, but transitions control back to the human when the human desires it or when system limitations require it. For both the partnership and the transition paradigm it is beneficial when the car and the human have a good model of each other’s capabilities and limitations. Accurate models can make clear how tasks are distributed between the human and the machine. This helps avoid misunderstandings, or mode confusion [45], and thereby reduces the likelihood of accidents and incidents.
A key tool in this regard is the use of computational (cognitive) models: computational instantiations that simulate the human thought process and/or the human’s interaction with an automated vehicle. Computational models build on a long tradition in cognitive science (e.g., [35, 36, 44]), human factors and human-computer interaction (e.g., [10, 39, 27]), neuroscience (e.g., [12, 31]), and AI and engineering (e.g., [17, 42]). By now, a wide variety of models can be applied to different domains, ranging from constrained theoretical problems to real-world interaction [38]. Computational models have many benefits. They enforce a working ethic of “understanding by building” and require precision in specification ([34], see also [8, 32, 41]). They can test the impact of changes in parameters and assumptions, which allows for wider applicability and scalability (e.g., [2, 16, 44]). More generally, this allows for testing “what if” scenarios (a minimal illustration is sketched at the end of this section). For human-automated vehicle interaction in particular, it allows testing of future adaptive systems that are not yet on the road.
Automated driving is a domain where computational models can be applied, yet existing approaches have only started to scratch the surface. First, the large majority of models focus on engineering aspects (e.g., computer vision, sensing the environment, flow of traffic) and do not consider the human extensively (e.g., [7, 18, 33]). Second, models that focus on the human mostly capture manual, non-automated driving (e.g., [44, 9, 25]). Third, models of human interaction with automated vehicles are either conceptual (e.g., [20, 22]) or qualitative, and do not benefit from the full set of advantages that computational models offer.
In summary, there is a disconnect between the power and capabilities that computational models offer for the domain of automated driving and today’s state-of-the-art research. This is due to a set of broad challenges that the field is facing and that need to be tackled over the next 3-10 years, which we discuss next.
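To illustrate the kind of “what if” analysis mentioned above, the following minimal sketch varies two parameters, the driver’s level of distraction and the lead time of a takeover request, in a toy model of takeover time. The model form, coefficient values, and function name are illustrative assumptions and are not drawn from any of the cited work.

```python
# Minimal sketch of a "what if" parameter sweep (all values are hypothetical,
# illustrative assumptions; this is not a model from the cited literature).

def predicted_takeover_time(distraction: float, request_lead_time: float) -> float:
    """Toy model: baseline reaction time plus a distraction penalty that is
    partially offset when the takeover request arrives earlier."""
    base = 1.5                                          # seconds, assumed baseline
    penalty = 2.0 * distraction                         # distraction in [0, 1]
    mitigation = min(0.5 * request_lead_time, penalty)  # earlier warnings help
    return base + penalty - mitigation

if __name__ == "__main__":
    # "What if" the driver is more distracted, or the automation warns earlier?
    for distraction in (0.0, 0.5, 1.0):
        for lead_time in (0.0, 2.0, 4.0):
            t = predicted_takeover_time(distraction, lead_time)
            print(f"distraction={distraction:.1f}, lead={lead_time:.1f}s "
                  f"-> predicted takeover in {t:.2f}s")
```

Even in such a simple form, the sweep shows how a computational model lets designers explore conditions, such as longer warning lead times, before any on-road testing.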
Description of the seminar topics and structure of the seminar report
The seminar topics were clustered around five broad challenges, for which we provide a brief description and example issues that were discussed. Although the challenges are presented separately, they are interconnected and were discussed in an integrated manner during the seminar. Each challenge was discussed in a panel, with all attendees taking part in at least one panel. After each panel, the group was split into smaller workgroups that discussed the themes at greater length. The summary of each panel discussion can be found later in this report in the section "panel discussions". The outcomes of the workgroups can be found in the section "workgroups". In addition, all attendees wrote short abstracts summarizing their individual positions.
Challenge 1: How can models inform design and governmental policy?
Models are most useful if they are more than abstract, theoretical vehicles. They should not live in a vacuum, but should relate to problems and issues in the real world. Therefore, we want to explicitly discuss how models can inform the design of (in-)vehicle technology, and how they can inform policy. As both of these topics could fill an entire Dagstuhl Seminar by themselves, our primary objective is to identify the most pressing issues and opportunities. For example, looking at:
- Types of questions: what types of questions exist at a design and policy level about human-automated vehicle interaction?
- How to inform decisions: How can models be used to inform design and policy decisions? What level of detail is needed here? What are examples of good practices?
- Integration: Integration can be considered in multiple ways. First, how can ideas from different disciplines be integrated (e.g., behavioral sciences, engineering, economics), even if they have at times opposing views (e.g., monetary gains versus accuracy and rigor)? Second, how can models become better integrated in the design and development process as tools to evaluate prototypes (instead of running empirical tests)? And third, how can models be integrated into the automation (e.g., as a user model) to broaden the automation functionality (e.g., prediction of possible driver actions, time needed to take over)?
Challenge 2: What phenomena and driving scenarios need to be captured?
The aim here is both to advance theory on human-automation interaction and to contribute to the understanding of realistic case studies, such as those faced by industry and governments. The following are example phenomena:
- Transitions of control and dynamic attention: When semi-automated vehicles transition control of the car back to the human, they require accurate estimates of a user’s attention level and capability to take control (e.g., [22, 49]); a minimal sketch of such state estimation is given after this list.
- Mental models, machine models, mode confusion, and training and skill: Models can be used to estimate a human’s understanding of the machine and vice versa (e.g., [20]). Similarly, they might be used to estimate a human driver’s skill level, and whether training is desired.
- Shared control: In all these scenarios, there is some form of shared control. Shared control requires a mutual understanding of human and automation. Computational models can be used to provide such understanding for the automation (e.g., [50]).
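As a concrete illustration of the state estimation mentioned in the first item above, the sketch below filters a driver’s hidden attention state from noisy gaze observations with a two-state hidden Markov model, in the spirit of hidden Markov approaches such as [20]. The states, probabilities, and observation sequence are illustrative assumptions, not values from the cited work.

```python
import numpy as np

# Minimal sketch: estimating a driver's hidden attention state from noisy gaze
# observations with a two-state hidden Markov model. All probabilities below
# are illustrative assumptions.
# Hidden states: 0 = attentive to the road, 1 = distracted.
# Observations:  0 = gaze on road,          1 = gaze off road.
TRANSITION = np.array([[0.9, 0.1],    # P(next state | currently attentive)
                       [0.2, 0.8]])   # P(next state | currently distracted)
EMISSION = np.array([[0.85, 0.15],    # P(observation | attentive)
                     [0.30, 0.70]])   # P(observation | distracted)

def filter_attention(observations, prior=(0.5, 0.5)):
    """Forward filtering: P(hidden state | observations so far) at each step."""
    belief = np.array(prior, dtype=float)
    beliefs = []
    for obs in observations:
        belief = belief @ TRANSITION          # predict the next hidden state
        belief = belief * EMISSION[:, obs]    # weight by observation likelihood
        belief = belief / belief.sum()        # renormalize to a distribution
        beliefs.append(belief.copy())
    return np.array(beliefs)

if __name__ == "__main__":
    gaze = [0, 0, 1, 1, 1, 0]                 # gaze drifts off the road mid-sequence
    for t, b in enumerate(filter_attention(gaze)):
        print(f"t={t}: P(attentive)={b[0]:.2f}, P(distracted)={b[1]:.2f}")
```

An automation that maintains such an estimate could, for example, time a takeover request for moments when the driver is likely attentive.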
Challenge 3: What technical capabilities do computational models possess?
This challenge concerns the technical capabilities of the models themselves. Although the nature of different modeling frameworks and different studies might differ [38], what do we consider the core functionality? For example, related to:
- Compatibility: To what degree do models need to be compatible with simulator software (e.g., to test a “virtual participant”), hardware (e.g., be able to drive a car on a test track), and other models of human thinking?
- Adaptive nature: Computational models aim to strike a balance between making precise predictions in relatively static environments and handling open-ended, dynamic environments (like everyday traffic). How can precision be guaranteed in static and dynamic environments? How can models adapt to changing circumstances?
- Speed of development and broader adoption: The development of computational models requires expertise and time. How can development speed be improved? How can communities benefit from each other’s expertise?
Challenge 4: How can models benefit from advances in AI while avoiding pitfalls?
At the moment there are many developments in AI from which computational models can benefit. Three examples are advances in (1) simulator-based inference (e.g., [26]) to reason about possible future worlds (e.g., varieties of traffic environments), (2) reinforcement learning [46] and its application to robotics [30] and human driving [25] (a minimal sketch of such a decision model follows the list below), and (3) deep learning [17] and its potential to predict driver state or behavior from sensor data. At the same time, the incorporation of AI techniques also comes with challenges that need to be addressed. For example:
- Explainability: Machine learning techniques are good at classifying data, but do not always provide insight into why classifications are made. This limits their explainability and is at odds with the objective of computational models to gain insight into human behavior. How can algorithms’ explainability be improved?
- Scalability and generalization: How can models be made that are scalable to other domains and that are not overtrained on specific instances? How can they account for future scenarios where human behavior might be hard to predict [5]?
- System training and corrective feedback: If models are trained on a dataset, what is the right level of feedback to give the model to correct an incorrect action? How can important new instances and examples be given more weight to update the model’s understanding without biasing the impact?
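To make the reinforcement-learning perspective mentioned above concrete, the sketch below casts the driver’s decision of when to resume control as reward maximization in a tiny Markov decision process solved by value iteration. The state space, rewards, and parameters are illustrative assumptions and do not reproduce any of the cited models.

```python
import numpy as np

# Minimal sketch: a driver's takeover decision as reward maximization in a toy
# Markov decision process. States encode the driver's current attention level;
# actions are "monitor" (keep building attention) or "take_over" (resume manual
# control). All numbers are illustrative assumptions.
ATTENTION_LEVELS = 5                # 0 = fully distracted ... 4 = fully attentive
TAKEOVER_REWARD = np.linspace(-5.0, 2.0, ATTENTION_LEVELS)  # distracted takeover is costly
MONITOR_COST = -0.1                 # small time cost of waiting
GAMMA = 0.95                        # discount factor

def value_iteration(n_iters=200):
    """Compute state values and a greedy policy for the toy takeover MDP."""
    V = np.zeros(ATTENTION_LEVELS)
    for _ in range(n_iters):
        new_V = np.empty_like(V)
        for s in range(ATTENTION_LEVELS):
            # "monitor": attention rises one level (capped), at a small cost
            q_monitor = MONITOR_COST + GAMMA * V[min(s + 1, ATTENTION_LEVELS - 1)]
            # "take_over": episode ends with a state-dependent reward
            q_take_over = TAKEOVER_REWARD[s]
            new_V[s] = max(q_monitor, q_take_over)
        V = new_V
    policy = ["monitor" if MONITOR_COST + GAMMA * V[min(s + 1, ATTENTION_LEVELS - 1)]
              > TAKEOVER_REWARD[s] else "take_over" for s in range(ATTENTION_LEVELS)]
    return V, policy

if __name__ == "__main__":
    values, policy = value_iteration()
    for s, (v, a) in enumerate(zip(values, policy)):
        print(f"attention={s}: value={v:.2f}, best action={a}")
```

A model of this kind also illustrates the challenges listed above: its policy is inspectable (explainability), but it only covers the scenarios encoded in its states and rewards (generalization).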
Challenge 5: What insights are needed for and from empirical research?
Models are only as good as their ability to describe and predict phenomena in the real world. Therefore, empirical tests are an important consideration. Example considerations are:
- Capturing behavioral change and long-term phenomena: Many current computational models capture the results of a single experiment. However, behavior might change with more exposure to and experience with automated technology. How can such (long-term) behavior change be tested?
- Capturing unknown future scenarios: Many automated technologies that might benefit from computational models are not yet commercially available. How can these best be studied and connected to computational models?
- Simulated driving versus real-world encounters: To what degree are simulator tests representative of real-world scenarios (e.g., [19])?
Results
The seminar has generated the following results.
- Overview of state-of-the-art technologies, methods, and models. The spectrum of computational modeling techniques is large [38, 21, 24]. Before and during the seminar, we discussed various methods and techniques. In particular, this report contains a dedicated chapter called “Relevant papers for modeling human-automated vehicle interaction”, in which we report a long list of papers that the community identified as relevant to the field. We encourage scholars to take a look at it.
- List of grand challenges with solution paths. We identified five grand challenges and discussed them in detail during the panels. Our chapters on “panel discussions” report the outcomes of these discussions. Moreover, the workgroups further report the in-depth discussions that smaller groups had about these challenges; see the section “workgroups” of this report. The results only begin to scratch the surface of some of the grand challenges for the application of computational cognitive modeling that need to be faced within the next 3 to 10 years, and of their paths to solutions. Based on these discussions, groups of authors plan to work on further papers and workshops around topics that they deemed worthy of further discussion. For example, we discussed whether there are specific driving scenarios that a computational model should be able to capture, and how success might be quantified (e.g., whether these challenges should take the form of competitions, akin to DARPA’s Grand Challenge for automated vehicles [11] or “Newell’s test” for cognitive models [3]).
- Research agenda to further the field. This report also presents a research agenda that is intended to further the field. For each grand challenge, we have identified more specific areas of research that need further exploration. We refer to the dedicated section in this report called “Research agenda to further the field”. The organizers of the seminar will also organize a dedicated journal special issue around the topic, in which further results arising from the seminar can be reported.
References
- Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Iqbal, S.T., and Teevan, J. (2019). Guidelines for human-AI interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1-13).
- Anderson, J. R. (2007). How can the human mind occur in the physical universe? (Vol. 3). Oxford University Press.
- Anderson, J. R., and Lebiere, C. (2003). The Newell test for a theory of cognition. Behavioral and Brain Sciences, 26(5), 587-601.
- Ayoub, J., Zhou, F., Bao, S., and Yang, X. J. (2019). From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 70-90).
- Bainbridge, L. (1983). Ironies of automation. In Analysis, design and evaluation of man–machine systems (pp. 129-135). Pergamon.
- Bengler, K., Dietmayer, K., Farber, B., Maurer, M., Stiller, C., and Winner, H. (2014). Three decades of driver assistance systems: Review and future perspectives. IEEE Intelligent Transportation Systems Magazine, 6(4), 6–22.
- Brackstone, M., and McDonald, M. (1999). Car-following: a historical review. Transportation Research Part F: Traffic Psychology and Behaviour, 2(4), 181-196.
- Brooks, R. A. (1991). Intelligence without representation. Artificial intelligence, 47(1-3), 139-159.
- Brumby, D. P., Janssen, C. P., Kujala, T., and Salvucci, D. D. (2018). Computational models of user multitasking. Computational interaction design, 341-362.
- Card, S. K., Moran, T., and Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: L. Erlbaum Associates Inc.
- DARPA (2020). The Grand Challenge. Accessed online on July 6, 2020 at https://www.darpa.mil/about-us/timeline/-grand-challenge-for-autonomous-vehicles
- Eliasmith, C. (2013). How to build a brain: A neural architecture for biological cognition. Oxford University Press.
- European Commission (2018, 17 May). On the road to automated mobility: An EU strategy for mobility of the future (pp. 1–17). Brussels, BE. Communication COM(2018) 283 final.
- European Commission (2020, 19 February). Shaping Europe’s digital future. Brussels (BE). Communication COM(2020) 67 final.
- Favarò, F. M. (2020). Unsettled Issues Concerning Semi-Automated Vehicles: Safety and Human Interactions on the Road to Full Autonomy. Technical report for the SAE. Warrendale, PA: SAE International. Retrieved from https://doi.org/10.4271/EPR2020001
- Gray, W. D. (Ed.). (2007). Integrated models of cognitive systems (Vol. 1). Oxford University Press.
- Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. Cambridge, MA: MIT press.
- Helbing, D. (2001). Traffic and related self-driven many-particle systems. Reviews of modern physics, 73(4), 1067.
- Hock, P., Kraus, J., Babel, F., Walch, M., Rukzio, E., and Baumann, M. (2018). How to design valid simulator studies for investigating user experience in automated driving – Review and hands-on considerations. Proceedings of the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, 105–117. New York, NY: ACM Press
- Janssen, C. P., Boyle, L. N., Kun, A. L., Ju, W., and Chuang, L. L. (2019). A Hidden Markov Framework to Capture Human–Machine Interaction in Automated Vehicles. International Journal of Human-Computer Interaction, 35(11), 947–955.
- Janssen, C. P., Boyle, L. N., Ju, W., Riener, A., and Alvarez, I. (2020). Agents, environments, scenarios: A framework for examining models and simulations of human-vehicle interaction. Transportation research interdisciplinary perspectives, 8, 100214.
- Janssen, C. P., Iqbal, S. T., Kun, A. L., and Donker, S. F. (2019). Interrupted by my car? Implications of interruption and interleaving research for automated vehicles. International Journal of Human-Computer Studies, 130, 221–233.
- Janssen, C. P., and Kun, A. L. (2020). Automated driving: getting and keeping the human in the loop. Interactions, 27(2), 62-65.
- Jeon, M., Zhang, Y., Jeong, H., Janssen, C. P., and Bao, S. (2021). Computational Modeling of Driving Behaviors: Challenges and Approaches. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 160-163).
- Jokinen, J. P. P., Kujala, T., and Oulasvirta, A. (2021). Multitasking in Driving as Optimal Adaptation under Uncertainty. Human Factors, 63(8), 1324-1341.
- Kangasrääsiö, A., Jokinen, J. P., Oulasvirta, A., Howes, A., and Kaski, S. (2019). Parameter inference for computational cognitive models with Approximate Bayesian Computation. Cognitive Science, 43(6), e12738.
- Kieras, D. (2012). Model-based evaluation. In: Jacko and Sears (Eds.) The Human-Computer Interaction Handbook (3rd edition), 1294-310. Taylor and Francis
- Kun, A. L. (2018). Human-Machine Interaction for Vehicles: Review and Outlook. Foundations and Trends in Human-Computer Interaction, 11(4), 201–293.
- Kun, A. L., Boll, S., and Schmidt, A. (2016). Shifting Gears: User Interfaces in the Age of Autonomous Vehicles. IEEE Pervasive Computing, 32–38.
- Levine, S. (2018). Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909.
- Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco, CA: W.H. Freeman.
- McClelland, J. L. (2009). The place of modeling in cognitive science. Topics in Cognitive Science, 1(1), 11-38.
- Mogelmose, A., Trivedi, M. M., and Moeslund, T. B. (2012). Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey. IEEE Transactions on Intelligent Transportation Systems, 13(4), 1484-1497.
- Newell, A. (1973). You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. In Chase (ed.) Visual Information Processing. New York: Academic Press.
- Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
- Newell, A., and Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall
- Noy, I. Y., Shinar, D., and Horrey, W. J. (2018). Automated driving: Safety blind spots. Safety science, 102, 68-78.
- Oulasvirta, A. (2019). It’s time to rediscover HCI models. Interactions, 26(4), 52-56.
- Oulasvirta, A., Bi, X., Kristensson, P-O., and Howes, A., (Eds.) (2018). Computational Interaction. Oxford University Press
- Peebles, D., and Cooper, R. P. (2015). Thirty years after Marr’s vision: levels of analysis in cognitive science. Topics in cognitive science, 7(2), 187-190.
- Pfeifer, R., and Scheier, C. (2001). Understanding intelligence. Cambridge, MA: MIT press.
- Russell, S., and Norvig, P. (2002). Artificial intelligence: a modern approach. Upper Saddle River, NJ: Pearson.
- SAE International. (2014). J3016: Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. Warrendale, PA, USA: SAE International
- Salvucci, D. D., and Taatgen, N. A. (2011). The multitasking mind. Oxford University Press.
- Sarter, N. B., and Woods, D. D. (1995). How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Human factors, 37(1), 5-19.
- Sutton, R., and Barto, A. G. (2018). Reinforcement learning: An introduction. Cambridge, MA: MIT Press
- Walch, M., Sieber, T., Hock, P., Baumann, M., and Weber, M. (2016). Towards cooperative driving: Involving the driver in an autonomous vehicle’s decision making. In Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 261–268. New York, NY: ACM Press.
- Walch, M., Mühl, K., Kraus, J., Stoll, T., Baumann, M., and Weber, M. (2017). From Car-Driver-Handovers to Cooperative Interfaces: Visions for Driver–Vehicle Interaction in Automated Driving. In G. Meixner and C. Müller (Eds.): Automotive User Interfaces: Creating Interactive Experiences in the Car (pp. 273–294). Springer International Publishing.
- Wintersberger, P., Schartmüller, C., and Riener, A. (2019). Attentive User Interfaces to Improve Multitasking and Take-Over Performance in Automated Driving: The Auto-Net of Things. International Journal of Mobile Human Computer Interaction, 11(3), 40-58.
- Yan, F., Eilers, M., Weber, L., and Baumann, M. (2019). Investigating Initial Driver Intention on Overtaking on Rural Roads. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 4354-4359). IEEE.
Motivation
The capabilities of automated vehicles are rapidly increasing, and are changing human interaction considerably. Despite this technological progress, the path to fully self-driving vehicles without any human intervention is long, and for the foreseeable future human interaction with automated vehicles is still needed. The principles of human-automation interaction also guide the future outlook of the European Commission.
Human-automated vehicle interaction can take at least two forms. One form is a partnership, in which the human and the automated vehicle both contribute in parallel to the control of the vehicle. Another form is in transitions of control, where the automated system at times takes over full control of the vehicle, but transitions control back to the human when desired by the human, or when required due to system limitations. For both the partnership and the transition paradigm it is beneficial when the car and the human have a good model of each other’s capabilities and limitations. Accurate models can make clear how tasks are distributed between the human and the machine. This helps avoid misunderstandings, or mode confusion, and thereby reduces the likelihood of accidents and incidents.
A key tool in this regard is the use of computational (cognitive) models: computational instantiations that simulate the human thought process and/or the human’s interaction with an automated vehicle. Computational models build on a long tradition in cognitive science, human factors and human-computer interaction, neuroscience, and AI and engineering. By now, a wide variety of such models can be applied to different domains, ranging from constrained theoretical problems to real-world interaction. Computational models have many benefits, ranging from enforcing a working ethic of "understanding by building" to testing "what if" scenarios. For human-automated vehicle interaction in particular, they allow testing of future adaptive systems that are not yet on the road.
Automated driving is a domain where computational models can be applied, yet existing approaches have only started to scratch the surface. First, the large majority of models focus on engineering aspects (e.g., computer vision, sensing the environment, flow of traffic) and do not consider the human extensively. Second, models that focus on the human mostly capture manual, non-automated driving. Third, models of human interaction with automated vehicles are either conceptual or qualitative, and do not benefit from the full set of advantages that computational models offer.
In summary, there is a disconnect between the power and capabilities that computational models offer for the domain of automated driving, and today’s state-of-the-art research. This is due to a set of broad challenges that the field is facing and that need to be tackled over the next 3 to 10 years. These challenges include the following:
- Challenge 1: What phenomena and driving scenarios need to be captured?
- Challenge 2: What technical capabilities do computational models possess?
- Challenge 3: How can models benefit from advances in AI while avoiding pitfalls?
- Challenge 4: What insights are needed for empirical research?
- Challenge 5: How can models inform design and governmental policy?
The aim of this Dagstuhl Seminar is to further identify and specify the methods and challenges of the field, in order to inform a roadmap for research to solve them. We look forward to working with top researchers and practitioners from academia, industry, and government in this exciting field.
Participants
- Martin Baumann (Universität Ulm, DE) [dblp]
- Jelmer Borst (University of Groningen, NL)
- Alexandra Bremers (Cornell Tech - New York, US)
- Duncan Brumby (University College London, GB) [dblp]
- Debargha Dey (TU Eindhoven, NL) [dblp]
- Patrick Ebel (Universität Köln, DE)
- Martin Fränzle (Universität Oldenburg, DE) [dblp]
- Luisa Heinrich (Universität Ulm, DE)
- Moritz Held (Universität Oldenburg, DE)
- Jussi Jokinen (University of Jyväskylä, FI)
- Dietrich Manstetten (Robert Bosch GmbH - Stuttgart, DE)
- Gustav Markkula (University of Leeds, GB)
- Roderick Murray-Smith (University of Glasgow, GB) [dblp]
- Antti Oulasvirta (Aalto University, FI) [dblp]
- Nele Rußwinkel (TU Berlin, DE) [dblp]
- Shadan Sadeghian (Universität Siegen, DE) [dblp]
- Hatice Sahin (Universität Oldenburg, DE)
- Philipp Wintersberger (TU Wien, AT) [dblp]
- Fei Yan (Universität Ulm, DE)
- Linda Ng Boyle (University of Washington - Seattle, US) [dblp]
- Lewis Chuang (LMU München, DE) [dblp]
- Benjamin Cowan (University College - Dublin, IE) [dblp]
- Birsen Donmez (University of Toronto, CA) [dblp]
- Justin Edwards (ADAPT Centre - Dublin, IE)
- Mark Eilers (Humatects - Oldenburg, DE)
- Shamsi Tamara Iqbal (Microsoft - Redmond, US) [dblp]
- Christian P. Janssen (Utrecht University, NL) [dblp]
- Myounghoon Jeon (Virginia Polytechnic Institute - Blacksburg, US) [dblp]
- Xiaobei Jiang (Beijing Institute of Technology, CN)
- Wendy Ju (Cornell Tech - New York, US) [dblp]
- Tuomo Kujala (University of Jyväskylä, FI)
- Andrew L. Kun (University of New Hampshire - Durham, US) [dblp]
- Otto Lappi (University of Helsinki, FI)
- Nikolas Martelaro (Carnegie Mellon University - Pittsburgh, US)
- Andreas Riener (TH Ingolstadt, DE) [dblp]
- Boris van Waterschoot (Rijkswaterstaat - Utrecht, NL)
- Yiqi Zhang (Pennsylvania State University - University Park, US)
Related Seminars
- Dagstuhl Seminar 17232: Computational Interactivity (2017-06-05 - 2017-06-08)
Classification
- Artificial Intelligence
- Human-Computer Interaction
- Machine Learning
Keywords
- human-automation interaction
- computational models
- automated vehicles