Citation: Hambrook DA, Ilievski M, Mosadeghzad M, Tata M (2017) A Bayesian computational basis for auditory selective attention using head rotation and the interaural time-difference cue. PLoS ONE 12(10):
Editor: Blake Johnson, Australian Research Council Centre of Excellence in Cognition and its Disorders, AUSTRALIA
Received:
Accepted: Septem
Published: October 5, 2017
Copyright: © 2017 Hambrook et al.

The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis, and it remains a challenging task for computational systems. It is well known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited. The neural computations for detecting interaural time difference (ITD) have been well studied and have served as the inspiration for computational auditory scene analysis systems; however, a crucial limitation of ITD models is that they produce ambiguous or "phantom" images in the scene. This has been thought to limit their usefulness at frequencies above about 1 kHz in humans. We present a simple Bayesian model, and an implementation on a robot, that uses ITD information recursively. The model makes use of head rotations to show that ITD information is sufficient to unambiguously resolve sound sources in both space and frequency. Contrary to commonly held assumptions about sound localization, we show that the ITD cue used with high-frequency sound can provide accurate and unambiguous localization and resolution of competing sounds. Our findings suggest that an "active hearing" approach could be useful in robotic systems that operate in natural, noisy settings. We also suggest that neurophysiological models of sound localization in animals could benefit from revision to include the influence of top-down memory and sensorimotor integration across head rotations.
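As a rough illustration of the kind of recursive Bayesian scheme the abstract describes (this is a sketch, not the paper's implementation), one can maintain a posterior over candidate source azimuths and, after each head rotation, multiply in the likelihood of the newly observed interaural phase. For a narrowband tone above about 1 kHz the phase is ambiguous modulo one period, which is exactly what produces "phantom" peaks; because the phantoms move with head orientation while the true source does not, a few rotations collapse the ambiguity. All parameters here (ear spacing, tone frequency, noise level, head yaws) are illustrative assumptions:

```python
import numpy as np

# Illustrative parameters (not from the paper): microphone spacing,
# speed of sound, tone frequency, and assumed phase-measurement noise.
EAR_DIST = 0.18        # m
C_SOUND = 343.0        # m/s
FREQ = 2000.0          # Hz, above the ~1 kHz phase-ambiguity limit
SIGMA_PHASE = 0.3      # rad

grid = np.deg2rad(np.arange(-90, 91))  # candidate world-frame azimuths

def itd(azimuth, head):
    """ITD for a source at world azimuth `azimuth` given head yaw `head`."""
    return (EAR_DIST / C_SOUND) * np.sin(azimuth - head)

def phase_likelihood(measured_itd, head):
    """Likelihood of each grid azimuth given one phase-wrapped ITD observation.

    For a narrowband tone only the interaural phase (ITD modulo one period)
    is observable, which creates the phantom images the model must resolve.
    """
    phase_obs = 2 * np.pi * FREQ * measured_itd
    phase_grid = 2 * np.pi * FREQ * itd(grid, head)
    err = np.angle(np.exp(1j * (phase_grid - phase_obs)))  # wrapped phase error
    return np.exp(-0.5 * (err / SIGMA_PHASE) ** 2)

def localize(true_azimuth_deg, head_yaws_deg, rng):
    """Recursively update a posterior over azimuth across head rotations."""
    theta = np.deg2rad(true_azimuth_deg)
    posterior = np.ones_like(grid)  # flat prior
    for yaw_deg in head_yaws_deg:
        head = np.deg2rad(yaw_deg)
        noise = rng.normal(0.0, SIGMA_PHASE / (2 * np.pi * FREQ))
        posterior *= phase_likelihood(itd(theta, head) + noise, head)
        posterior /= posterior.sum()  # recursive Bayesian update
    return np.rad2deg(grid[np.argmax(posterior)])

rng = np.random.default_rng(0)
# A single observation leaves phantom peaks; rotating the head between
# observations leaves only the true azimuth consistent with all of them.
estimate = localize(30.0, head_yaws_deg=[0, -20, 20, 40], rng=rng)
```

Because each head yaw shifts the phantom peaks but not the true source, the product of likelihoods concentrates on the true azimuth, which is the intuition behind using active head rotation to disambiguate high-frequency ITD cues.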