Search results

Number of results: 6

Abstract

A method for precise detection and localization of sound sources in interiors is presented. Acoustic vector sensors, which provide multichannel output signals of acoustic pressure and particle velocity, were employed. Methods for detecting acoustic events are introduced, and an algorithm for localizing sound events in the audience is presented. The system set up in a lecture hall, which serves as a demonstrator of the proposed technology, is described. The accuracy of the proposed method is evaluated on the basis of the described measurement results. The analysis of the results is followed by conclusions pertaining to the usability of the proposed system. The concept of multimodal audio-visual detection of events in the audience is also introduced.
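As a rough illustration of the underlying principle, the sketch below estimates a direction of arrival from the pressure and particle-velocity channels of a single acoustic vector sensor via the time-averaged active intensity. This is a minimal sketch under a single-plane-wave, horizontal-plane assumption; the function name and the synthetic test signal are illustrative and not taken from the article.

    import numpy as np

    def doa_from_vector_sensor(p, vx, vy):
        """Estimate the azimuth of a single dominant source from one acoustic
        vector sensor: p is the pressure channel, vx and vy the horizontal
        particle-velocity channels (1-D arrays of equal length)."""
        ix = np.mean(p * vx)                      # x component of active intensity
        iy = np.mean(p * vy)                      # y component of active intensity
        # Active intensity points along the propagation direction, i.e. away
        # from the source, so the source azimuth is the opposite direction.
        return np.degrees(np.arctan2(-iy, -ix))

    # Synthetic check: a wave arriving from 30 deg propagates toward 210 deg,
    # so the velocity is proportional to the pressure along (cos 210, sin 210).
    t = np.linspace(0, 1, 8000)
    p = np.sin(2 * np.pi * 440 * t)
    az = np.radians(30)
    vx, vy = -np.cos(az) * p, -np.sin(az) * p
    print(doa_from_vector_sensor(p, vx, vy))      # ~30.0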

Abstract

The locally resonant sonic material (LRSM) is an artificial metamaterial that can block underwater sound. The low-frequency insulation performance of LRSM can be enhanced by coupling local resonance and Bragg scattering effects. However, it is difficult to prove experimentally that this approach is the best optimization strategy. Hence, this paper proposes a statistical optimization method, which first finds a group of optimal solutions of an objective function by running a genetic algorithm multiple times, and then analyzes the distribution of the fitness and the Euclidean distances of the obtained solutions in order to verify whether the result is the global optimum. Using this method, we obtain the global optimal solution for the low-frequency insulation of LRSM. By varying the parameters of the optimum, it can be found that the optimized insulation performance of the LRSM results from the coupling of local resonance with the Bragg scattering effect, as well as from a distinct impedance mismatch between the matrix of the LRSM and the surrounding water. This indicates that coupling different effects while maintaining an impedance mismatch is the best way to enhance the low-frequency insulation performance of LRSM.
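The multi-run verification idea lends itself to a short sketch: run a stochastic optimizer several times, then inspect the spread of the best fitness values and the pairwise Euclidean distances between the returned solutions; tight clustering on both counts supports (though does not prove) a global optimum. The toy objective and the minimal real-coded GA below (truncation selection plus Gaussian mutation, no crossover) are placeholders, not the paper's actual objective function or algorithm.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):
        # Toy stand-in for the LRSM insulation objective (to be minimised).
        return np.sum(x**2) + 0.5 * np.sin(5 * x).sum()

    def run_ga(n_gen=200, pop_size=40, dim=3, sigma=0.3):
        """One run of a minimal real-coded GA; returns the best individual."""
        pop = rng.uniform(-2, 2, size=(pop_size, dim))
        for _ in range(n_gen):
            fit = np.apply_along_axis(objective, 1, pop)
            parents = pop[np.argsort(fit)[: pop_size // 2]]   # keep best half
            children = parents + rng.normal(0, sigma, parents.shape)
            pop = np.vstack([parents, children])
        fit = np.apply_along_axis(objective, 1, pop)
        return pop[np.argmin(fit)]

    # Repeat the GA several times and inspect the dispersion of the results.
    best = np.array([run_ga() for _ in range(10)])
    fitness = np.apply_along_axis(objective, 1, best)
    dists = [np.linalg.norm(a - b) for a, b in itertools.combinations(best, 2)]
    print("fitness spread:", fitness.max() - fitness.min())
    print("max pairwise distance:", max(dists))
    # Small values on both lines suggest every run converged to the same
    # (hence plausibly global) optimum.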

Abstract

Available methods for room-related sound presentation are introduced and evaluated. The focus is on the synthesis side rather than on complete transmission systems. Different methods are compared using common, though quite general, criteria. The methods selected for comparison are: intensity stereophony after Blumlein, vector-base amplitude panning (VBAP), 5.1 Surround and its discrete-channel derivatives, synthesis with spherical harmonics (Ambisonics, HOA), synthesis based on the boundary method, namely wave-field synthesis (WFS), and binaural-cue selection methods (e.g., DirAC). While VBAP, 5.1 Surround and other discrete-channel-based methods show a number of practical advantages, they do not, in the end, aim at authentic sound-field reproduction. The so-called holophonic methods that do, particularly HOA and WFS, have specific advantages and disadvantages which are discussed. Yet both methods are under continuous development, and a decision in favor of one of them should be taken from a strictly application-oriented point of view, by considering relevant application-specific advantages and disadvantages in detail.
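For the panning side, the essential computation behind VBAP can be shown in a few lines: with the loudspeaker unit vectors stacked as rows of a base matrix L, the gains g for a source direction p satisfy gL = p, so g = p L^-1, followed by a power normalisation. The sketch below assumes a single 2D loudspeaker pair; the speaker angles are illustrative.

    import numpy as np

    def vbap_gains_2d(source_az_deg, spk_az_deg):
        """Core VBAP gain computation for one 2D loudspeaker pair:
        solve g . L = p for the gains, then normalise total power."""
        unit = lambda a: np.array([np.cos(np.radians(a)), np.sin(np.radians(a))])
        L = np.vstack([unit(a) for a in spk_az_deg])   # 2 x 2 loudspeaker base
        g = unit(source_az_deg) @ np.linalg.inv(L)     # unnormalised gains
        return g / np.linalg.norm(g)                   # constant-power panning

    print(vbap_gains_2d(15, (30, -30)))  # source at 15 deg inside a +/-30 deg pair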

Abstract

This article presents results of investigations of the angle of directional hearing acuity (ADHA) as a measure of spatial hearing ability, with special emphasis on people with hearing impairments. A modified version of the method proposed by Zakrzewski was used: ADHA values were determined for 8 azimuths in the horizontal plane at the height of the listener's head. The two-alternative forced-choice (2AFC) method, based on a new system of listeners' responses (left/right instead of difference/no difference in the location of sound sources), was the procedure used in the experiment. Investigations were carried out for two groups of subjects: people with normal hearing (9 persons) and people with hearing impairments (sensorineural hearing loss and tinnitus; 9 persons). Different acoustic signals were used in the experiment: sinusoidal signals (pure tones), 1/3-octave noise, amplitude-modulated 1/3-octave noise, CCITT speech and traffic noises, and signals matching the individual character of each subject's tinnitus. The results showed, in general, better localization of the sound source for noise-type signals than for tonal signals. Differences in ADHA values between the two groups were insignificant for most signals; significant differences were found, however, for the tinnitus and traffic-noise signals. The new system of listeners' responses proved efficient (less dispersion of results compared to the standard system).
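To make the left/right response scheme concrete, the sketch below simulates a 2AFC run: at each angular separation a simulated listener reports which side the source moved to, and the smallest separation answered correctly at a criterion rate approximates an acuity angle. The psychometric model, criterion, and angle grid are invented for illustration; the article's actual modified Zakrzewski procedure is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulated_listener(delta_deg, jnd_deg=4.0):
        """Stand-in for a real subject: answers correctly with a probability
        that grows with angular separation (a simple psychometric model)."""
        p_correct = 0.5 + 0.5 * (1 - np.exp(-delta_deg / jnd_deg))
        return rng.random() < p_correct

    def acuity_2afc(deltas=(1, 2, 4, 8, 16), trials=50, criterion=0.75):
        """Return the smallest separation judged correctly on at least
        `criterion` of trials -- a rough analogue of an acuity threshold."""
        for d in deltas:
            correct = sum(simulated_listener(d) for _ in range(trials))
            if correct / trials >= criterion:
                return d
        return None

    print(acuity_2afc())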

Abstract

The use of individualised Head-Related Transfer Functions (HRTFs) is a fundamental prerequisite for accurate rendering of 3D spatialised sounds in virtual auditory environments. HRTFs are transfer functions that define the acoustical basis of auditory perception of a sound source in space and are frequently used in virtual auditory displays to simulate free-field listening conditions. However, they depend on the anatomical characteristics of the human body and vary significantly among individuals, so using the same dataset of HRTFs for all users of a system will not offer the same level of auditory performance to each of them. This paper presents an alternative approach to the use of non-individualised HRTFs, based on procedural learning, training, and adaptation to altered auditory cues. We tested the sound localisation performance of nine sighted and visually impaired people before and after a series of training sessions based on perceptual (auditory, visual, and haptic) feedback. The results demonstrate that our subjects significantly improved their spatial hearing under altered listening conditions (such as the presentation of 3D binaural sounds synthesised from non-individualised HRTFs), the improvement being reflected in higher localisation accuracy and a lower rate of front-back confusion errors.
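At its core, HRTF-based binaural synthesis is a per-ear convolution of a mono source with the head-related impulse response (HRIR) pair for the desired direction. The sketch below shows just that step; the toy HRIR pair stands in for measured (individualised or generic) data, which a real system would select or interpolate for the target azimuth and elevation.

    import numpy as np

    def binaural_render(mono, hrir_left, hrir_right):
        """Spatialise a mono signal with one HRIR pair: per-ear convolution
        is the core operation of HRTF-based binaural synthesis."""
        n = len(mono) + max(len(hrir_left), len(hrir_right)) - 1
        pad = lambda x: np.pad(x, (0, n - len(x)))        # equalise lengths
        left = pad(np.convolve(mono, hrir_left))
        right = pad(np.convolve(mono, hrir_right))
        return np.stack([left, right], axis=1)            # (samples, 2) buffer

    # Toy HRIR pair mimicking a source on the left: the right ear receives a
    # delayed (~0.6 ms at 48 kHz), attenuated copy of the signal.
    sig = np.random.default_rng(0).standard_normal(48000)
    h_left = np.array([1.0])
    h_right = np.concatenate([np.zeros(30), [0.7]])
    stereo = binaural_render(sig, h_left, h_right)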

Abstract

Simultaneous perception of audio and visual stimuli often causes concealment or misrepresentation of the information actually contained in these stimuli. Such effects are called the "image proximity effect" or the "ventriloquism effect" in the literature. Until recently, most research carried out to understand their nature was based on subjective assessments. The authors of this paper propose a methodology based on both subjective and objectively retrieved data. In this methodology, the objective data reflect the screen areas that attract the most attention; the data were collected and processed by an eye-gaze tracking system. To validate the proposed methodology, two series of experiments were conducted: one with the commercial eye-gaze tracking system Tobii T60, and another with the Cyber-Eye system developed at the Multimedia Systems Department of the GdaƄsk University of Technology. In most cases, the visual-auditory stimuli were presented using 3D video. It was found that the eye-gaze tracking system did objectivize the results of the experiments. Moreover, the tests revealed a strong correlation between the localization of the visual stimulus on which a participant's gaze focused and the magnitude of the "image proximity effect". It was also shown that gaze tracking may be useful in experiments aimed at evaluating the proximity effect when the presented visual stimuli are stereoscopic.
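The kind of correlation reported here can be illustrated with a minimal analysis sketch: given per-trial gaze offsets and proximity-effect ratings, compute a Pearson coefficient. The variable names and the synthetic data below are placeholders; the study's real measurements are not reproduced.

    import numpy as np

    # Hypothetical per-trial data: horizontal offset of the dominant gaze
    # fixation from the loudspeaker axis, and the rated strength of the
    # image proximity effect on some subjective scale.
    rng = np.random.default_rng(3)
    gaze_offset_deg = rng.uniform(0, 20, 40)
    proximity_rating = 0.3 * gaze_offset_deg + rng.normal(0, 1.0, 40)

    r = np.corrcoef(gaze_offset_deg, proximity_rating)[0, 1]
    print(f"Pearson r between gaze offset and proximity rating: {r:.2f}")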
