S Ciba, K Helwani, H Wierstorf, K Obermayer, A Raake, S Spors, "Employing a binaural auditory model to classify everyday sound events," in Fortschritte der Akustik - DAGA 2012, pp. 717-718 (2012). [ paper ]

Bibtex

@inproceedings{Ciba2012,
    title     = {Employing a binaural auditory model to classify everyday sound
                 events},
    author    = {Ciba, Simon and Helwani, Karim and Wierstorf, Hagen and
                 Obermayer, Klaus and Raake, Alexander and Spors, Sascha},
    booktitle = {Fortschritte der Akustik - DAGA 2012},
    publisher = {DEGA e.V.},
    address   = {Darmstadt, Germany},
    pages     = {717--718},
    month     = {March},
    year      = {2012}
}

Abstract

Humans benefit considerably from exploiting two ears in everyday listening tasks. It therefore seems a promising strategy for machine listening approaches to emulate the biological mechanisms of binaural signal processing before applying methods of artificial intelligence. To this end, research can draw on psychoacoustics and physiology, which offer a substantial repertoire of computational models mimicking parts of the human auditory system. In this paper we present an approach for the automatic classification of elementary everyday sound events that is based on pre-processing by a binaural auditory model. The relevant features are extracted from the model’s output data according to a heuristic scheme. Given a set of training data, a classifier is then constructed using support vector machine learning. The proposed method is validated in binary classification experiments performed on a taxonomically organized database of natural sounds. Both discrimination and detection tasks are considered, yielding average prediction accuracies of about 0.95 and 0.9, respectively. Moreover, we investigate the robustness of the classification against variations in room acoustics. When the acoustics underlying the prediction task are included in the learning process, the average accuracy decreases by at most 0.053 (discrimination) and 0.069 (detection).
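
The abstract describes a pipeline of binaural auditory-model pre-processing, heuristic feature extraction, and support vector machine classification. The following Python sketch illustrates only the final classification and accuracy-evaluation step, assuming feature vectors have already been extracted from the auditory model's output; the data, feature dimensions, and SVM parameters are placeholders and are not taken from the paper.

```python
# Minimal sketch of the SVM classification stage, assuming features
# extracted from a binaural auditory model are available as rows of X
# with binary labels y. Synthetic data stands in for the real features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder feature matrix: 200 sound events, 64 features per event.
X = rng.normal(size=(200, 64))
# Placeholder binary labels, e.g. "class A vs. class B" (discrimination)
# or "target class present vs. absent" (detection).
y = rng.integers(0, 2, size=200)

# Support vector machine classifier with feature standardisation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Cross-validated prediction accuracy, analogous to the averaged
# accuracies reported in the abstract.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f}")
```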