Dagstuhl-Seminar 16442
Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR)
(30 Oct – 04 Nov 2016)
Organizers
- Roger K. Moore (University of Sheffield, GB)
- Serge Thill (University of Skövde, SE)
- Clémentine Vignal (Université Jean Monnet - Saint-Étienne, FR)
Coordinator
- Ricard Marxer (University of Sheffield, GB)
Contact
- Susanne Bach-Bernhard (for administrative matters)
Impacts
- Proceedings of the 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR 2017) - Marxer, Ricard; Dassow, Angela; Moore, Roger K. - vihar.org, 2017 - VI, 52 pp. - ISBN 978-2-9562029-0-5.
- Recording Vocal Interactivity among Turtles using AUVs - article in Proceedings of the 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR 2017), pp. 29-30 - Campbell, Nick; Dassow, Angela - http://vihar-2017.vihar.org/, 2017.
Program
Almost all animals exploit vocal signals for a range of ecologically-motivated purposes: from detecting predators/prey and marking territory, to expressing emotions, establishing social relations and sharing information. Whether it’s a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a businessperson accessing stock prices using Siri on an iPhone, vocalisation provides a valuable communications channel through which behaviour may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human-machine interaction.
Clearly, there is potential for cross-fertilisation between disciplines; for example, using robots to investigate contemporary theories of language grounding, using machine learning to analyse animal activity in different habitats, or adding vocal expressivity to the next generation of autonomous social agents. However, many opportunities remain unexplored, not least due to the lack of a suitable forum.
The aim of this seminar is to provide a unique and timely opportunity to bring together scientists and engineers from different fields to share theoretical insights, best practices, tools and methodologies, to identify common principles underpinning vocal behaviour, to enumerate open research questions, and to explore the potential for new collaborations and technologies, with a view to accelerating progress in all these areas.
Almost all animals exploit vocal signals for a range of ecologically-motivated purposes. For example, predators may use vocal cues to detect their prey (and vice versa), and a variety of animals (such as birds, frogs, dogs, wolves, foxes, jackals and coyotes) use vocalisation to mark or defend their territory. Social animals (including human beings) also use vocalisation to express emotions, to establish social relations and to share information, and human beings have extended this behaviour to a very high level of sophistication through the evolution of speech and language - a phenomenon that appears to be unique in the animal kingdom, but which shares many characteristics with the communication systems of other animals.
Also, recent years have seen important developments in a range of technologies relating to vocalisation. For example, systems have been created to analyse and play back animal calls, to investigate how vocal signalling might evolve in communicative agents, and to interact with users of spoken language technology (voice-based human-computer interaction using speech technologies such as automatic speech recognition and text-to-speech synthesis). Indeed, the latter has witnessed huge commercial success in the past 10-20 years, particularly since the release of Dragon NaturallySpeaking (continuous speech dictation software for the PC) in 1997 and Siri (Apple's voice-operated personal assistant and knowledge navigator for the iPhone) in 2011. Research interest in this area is now beginning to focus on voice-enabling autonomous social agents (such as robots).
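To make the voice-based human-computer interaction mentioned above a little more concrete, the sketch below chains automatic speech recognition and text-to-speech synthesis into a minimal spoken-interaction loop. It is purely illustrative: it assumes the third-party Python packages SpeechRecognition and pyttsx3 (which are not part of the seminar material), and the keyword matching is a stand-in for a real dialogue manager.

    # Minimal illustrative ASR -> "understanding" -> TTS loop.
    # Assumes the third-party packages SpeechRecognition and pyttsx3 are installed;
    # the keyword matching below is a placeholder for a real dialogue manager.
    import speech_recognition as sr
    import pyttsx3

    recognizer = sr.Recognizer()
    tts = pyttsx3.init()

    def speak(text: str) -> None:
        """Render a reply as synthetic speech."""
        tts.say(text)
        tts.runAndWait()

    def listen_once() -> str:
        """Capture one utterance from the microphone and return a transcript."""
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source)
        try:
            return recognizer.recognize_google(audio)  # cloud ASR; needs network access
        except sr.UnknownValueError:
            return ""

    if __name__ == "__main__":
        utterance = listen_once().lower()
        if "stock" in utterance:
            speak("Sorry, I cannot access live stock prices in this demo.")
        elif utterance:
            speak(f"You said: {utterance}")
        else:
            speak("I did not catch that.")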
Therefore, whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a businessperson accessing stock prices using an automated voice service on their mobile phone, vocalisation provides a valuable communications channel through which behaviour may be coordinated and controlled, and information may be distributed and acquired.
Indeed, the ubiquity of vocal interaction has given rise to a wealth of research across an extremely diverse array of fields, from the behavioural and language sciences to engineering, technology and robotics. This means that there is huge potential for cross-fertilisation between the different disciplines involved in the study and exploitation of vocal interactivity. For example, it might be possible to use contemporary advances in machine learning to analyse animal activity in different habitats, or to use robots to investigate contemporary theories of language grounding. Likewise, an understanding of animal vocal behaviour might inform how vocal expressivity could be integrated into the next generation of autonomous social agents. Some of these issues have already been addressed by relevant sub-sections of the research community. However, many opportunities remain unexplored, not least due to the lack of a suitable forum to bring the relevant people together.
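As a purely illustrative sketch of the machine-learning direction mentioned above, the Python fragment below extracts MFCC features from a set of labelled animal-call recordings and trains a simple classifier. The directory layout, labels and choice of model are assumptions made for this example; they do not describe any system presented at the seminar.

    # Illustrative sketch: classify animal calls from labelled audio clips.
    # Assumes librosa and scikit-learn are installed, and a hypothetical layout
    # of calls/<species_label>/<clip>.wav chosen purely for this example.
    from pathlib import Path
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def clip_features(path: Path) -> np.ndarray:
        """Summarise a clip as its mean MFCC vector over time."""
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return mfcc.mean(axis=1)

    X, labels = [], []
    for wav in Path("calls").glob("*/*.wav"):
        X.append(clip_features(wav))
        labels.append(wav.parent.name)  # folder name serves as the species label

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, np.array(X), labels, cv=5)
    print(f"5-fold accuracy: {scores.mean():.2f}")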
Our Dagstuhl seminar on the topic of "Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR)" provided a unique and timely opportunity to bring together scientists and engineers from a number of different fields to appraise our current level of knowledge. Our broad aim was to focus discussion on the general principles of vocal interactivity and to evaluate the state of the art in our understanding of vocal interaction within-and-between humans, animals and robots. Some of these sub-topics, such as human spoken language or vocal interactivity between animals, have a long history of scientific research. Others, such as vocal interaction between robots or between robots and animals, are less well studied - mainly due to the relatively recent appearance of the relevant technology. What is interesting is that, regardless of whether a sub-topic is a well-established field or a relatively new research domain, there is an abundance of open research questions that may benefit from the kind of comparative interdisciplinary analysis addressed in this seminar.
Participants
- Andrey Anikin (Lund University, SE)
- Timo Baumann (Universität Hamburg, DE)
- Tony Belpaeme (University of Plymouth, GB)
- Elodie Briefer (ETH Zürich, CH)
- Nick Campbell (Trinity College Dublin, IE)
- Fred Cummins (University College Dublin, IE)
- Angela Dassow (Carthage College - Kenosha, US)
- Robert Eklund (Linköping University, SE)
- Julie E. Elie (University of California - Berkeley, US)
- Sabrina Engesser (Universität Zürich, CH)
- Sarah Hawkins (University of Cambridge, GB)
- Ricard Marxer (University of Sheffield, GB)
- Roger K. Moore (University of Sheffield, GB)
- Julie Oswald (University of St Andrews, GB)
- Bhiksha Raj (Carnegie Mellon University - Pittsburgh, US)
- Rita Singh (Carnegie Mellon University - Pittsburgh, US)
- Dan Stowell (Queen Mary University of London, GB)
- Zheng-Hua Tan (Aalborg University, DK)
- Serge Thill (University of Skövde, SE)
- Petra Wagner (Universität Bielefeld, DE)
- Benjamin Weiss (TU Berlin, DE)
Classification
- artificial intelligence / robotics
- society / human-computer interaction
Keywords
- vocal interaction
- speech technology
- spoken language
- human-robot interaction
- animal calls
- vocal learning
- language universals
- language evolution
- vocal expression