Proceedings of the 10th Convention of the European Acoustics Association
Forum Acusticum 2023
Politecnico di Torino, Torino, Italy
September 11-15, 2023
Abstract

In complex acoustic environments, spatial filtering offers great potential for improving speech intelligibility with hearing devices. However, as performance increases, knowledge of the user's personal listening preferences and identification of the attended and ignored sources become critical. In the proposed approach, the hearing-device user's gaze and head-movement behavior is set into the context of the current communication situation. Ideally, this context would include knowledge of source positions and source types, but potentially also high-level features. Here, the context is provided by acoustic direction-of-arrival estimation. This way, the attended source can be identified from a mixture of sources in an audiovisual scene. Since the algorithm is driven by behavior, special care must be taken during its evaluation to ensure that user behavior is as ecologically valid as possible. This is achieved by establishing an interactive turn-taking conversation in virtual reality, with remote interlocutors represented by their real-time animated avatars. The system provides access to isolated speech and noise signals, which allows for an instrumental evaluation even in natural interactive turn-taking conversations. In addition, conversational success was analyzed. Results show that the proposed algorithm can provide a benefit in terms of SNR as well as conversational success.