
Auditory displays


“…the purpose of the ears is to point the eyes”
                 Georg von Békésy

            Carolina Figueroa
                University of Idaho
              Advanced Human Factors

   1. Ability to detect the direction of moving sounds at
    different locations in space
   2. Accuracy of the simulation of virtual sound sources
   3. Acoustic and non-acoustic factors that contribute to
    distance perception of sound sources
   4. Effects of limited signal bandwidth on detection of a
    signal's elevation and position

Psychoacoustics tutorial:

                         - 1/4 -

   Minimum Audible Movement Angle (MAMA)
    as a Function of the Azimuth and Elevation of
    the Source

By Thomas Z. Strybel, Carol L. Manligas, David R. Perrott, 1992

Uses of auditory spatial information in head-coupled
display systems:

     Moving the head and eyes to identify objects or events
        Reduction in visual search time (150 ms at 10 deg from the
         central visual area)
        Greatest benefit when targets are located in the peripheral
         visual field
     Enhancing the pilot's situational awareness
          Example: warning of a target when the pilot is not looking

   How effective is the simulation of 3D auditory space?
     Auditory localization (accuracy of localization)
     Auditory spatial acuity (discrimination)

   Other studies have reported data on human
    auditory localization and spatial acuity, but only for
    static sounds located in the horizontal plane that
    intersects the interaural axis.
                     Other reports…
   Localization and acuity are best for sounds located:
       Directly in front of the subject (0 deg azimuth)
       Poorest in the area directly opposite each ear (+/- 90 deg
        azimuth)
    Stevens and Newman (1936)              Mills (1972)

    Localization error   Azimuth           MAA        Azimuth
    4.6 deg              0 deg             1 deg      0 deg
    16 deg               90 deg            40 deg     90 deg
    Spatial Acuity: assessed by minimum audible angle (MAA)
                 This investigation…
   .. Uses dynamic acoustic events…
        Acoustic events are likely to be moving
        The pilot’s head would be free to move
        Aircraft would be moving too

   Factors taken into account…
        Variations in azimuth (right / left)
        Variations in elevation (up / down)

   MAMA: minimum angle of travel required for detection of the direction of
    movement of a sound source (Perrott and Tucker, 1988)
   MAA: static auditory spatial acuity
         Only two investigations…
   Grantham, 1986
       One stimulus
       Tested on large number of source positions

   Harris & Sergeant, 1971
       Several stimuli
       At two source positions

   Findings : Ability to localize position of a sound source
    under dynamic conditions is limited …

   MAMAs measured at:
     +/- 80 deg azimuth
     Elevations between 0 and 87.5 deg

   Subjects
       5 experienced subjects on MAMA tasks
   Method
     Test conducted in a large audiometric test chamber
     Subjects were seated in the center of the room.
     Subject’s head position was fixed

   At 0 deg elevation the loudspeaker was mounted on a
    horizontal boom
   At higher elevations the loudspeaker was mounted vertically
   MAMAs measured at a total of 16 azimuth-elevation
    combinations
   Constant velocity of the moving source: 20 deg/s
   Testing was conducted in the dark
   An adaptive psychophysical procedure was used to
    measure MAMAs
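An adaptive procedure of this kind can be sketched as a standard 2-down/1-up staircase (Levitt, 1971): the travel angle of the moving source shrinks after two correct direction judgments and grows after one error, converging near the 70.7%-correct point of the psychometric function. The simulated observer and all numeric values below are illustrative assumptions, not parameters from the study.

```python
import math
import random

def simulated_observer(angle_deg, threshold_deg=8.0):
    """Probability of correctly judging movement direction for a source
    that travels angle_deg. A simple logistic psychometric function with
    an assumed 8-deg threshold (illustrative, not data from the paper)."""
    p = 0.5 + 0.5 / (1.0 + math.exp(-(angle_deg - threshold_deg)))
    return random.random() < p

def two_down_one_up(start_deg=20.0, step_deg=2.0, max_reversals=8):
    """2-down/1-up staircase: decrease the travel angle after two correct
    responses, increase it after one error; estimate the MAMA as the
    mean of the reversal angles."""
    angle = start_deg
    correct_run = 0
    last_direction = 0          # +1 = last change was up, -1 = down
    reversals = []
    while len(reversals) < max_reversals:
        if simulated_observer(angle):
            correct_run += 1
            if correct_run == 2:            # two correct -> make it harder
                correct_run = 0
                if last_direction == +1:
                    reversals.append(angle)
                last_direction = -1
                angle = max(step_deg, angle - step_deg)
        else:                               # one error -> make it easier
            correct_run = 0
            if last_direction == -1:
                reversals.append(angle)
            last_direction = +1
            angle += step_deg
    return sum(reversals) / len(reversals)

random.seed(1)
print(f"estimated MAMA: {two_down_one_up():.1f} deg")
```

The same loop works for any stimulus dimension (azimuth, elevation, velocity); only the observer changes.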

   Findings agree with previous investigations on dynamic
    and static localization in the horizontal plane, which
    indicate that performance is best at 0 deg azimuth and
    is degraded at peripheral source locations.

   Results indicate that the pilot’s awareness of moving events
    in head-coupled display systems depends on:
       Location of the event
       Velocity of events (in and out)

Additional data are required to characterize detection of
 moving sounds:
    MAMA measured at locations below the horizontal plane
    Velocity effects at peripheral locations

    Actual moving sounds vs. simulations of moving sounds

                         - 2/4 -

   Fidelity of Three Dimensional-Sound
    Reproduction Using a Virtual Auditory Display

     By Erno H. A. Langendijk and Adelbert W. Bronkhorst, 2000

   Is the simulation of virtual sound sources accurate?
   Do the real and virtual sound sources generate identical
    auditory percepts?
   How to test it? Direct A/B comparison
       Problem:
            headphones are used for the virtual source but not for the real source
            HRTFs and headphone transfer functions (HPTFs) are affected by
             removing/replacing the headphones
   Solution: open transducer (used by Zahorik 1996, Hartmann and
    Wittenberg 1996)

First Experiment: is the simulation of virtual sound sources
sufficiently accurate to give same auditory percept as the real one?

Zahorik (1996)             Hartmann & Wittenberg   Langendijk & Bronkhorst
                           (1996)                  (2000)

Large headphones           Large headphones        Smaller headphones

Different filter lengths   Limited bandwidth       500 Hz – 16 kHz (high
                                                   bandwidth)
2AFC paradigm              Yes/No paradigm         2AFC, Yes/No and
AB, BA                                             Oddball:
                                                   ABAA, BABB, AABA, BBAB

(A = real, B = virtual)
    1) Virtual sound-source generation
   Subjects
       6 listeners experienced in sound localization
       Headband with microphone and earphone
       Listener seated with his head in the center
        of the loudspeaker arc
       HRTF measurements were carried out for all
        speaker positions; then the headphones were
        positioned and the HRTFs were measured again
        (to investigate the influence of the headphones)
       Acoustical verification measurements were
        carried out to verify the accuracy of the waveforms
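Virtual sound-source generation amounts to filtering the source signal with the left- and right-ear head-related impulse responses (HRIRs) and presenting the result over headphones. A minimal numpy sketch with toy impulse responses; the delay and attenuation values are invented for illustration, whereas the study used HRTFs measured at each listener's eardrums:

```python
import numpy as np

fs = 44100
# Toy HRIRs for a source off to the right: the left ear receives the
# sound slightly later and attenuated (hypothetical values).
hrir_right = np.zeros(64); hrir_right[0] = 1.0
hrir_left  = np.zeros(64); hrir_left[30] = 0.6   # ~0.68 ms interaural delay

# 100 ms noise burst as the source signal
signal = np.random.default_rng(0).standard_normal(fs // 10)

# Binaural rendering: filter the source with each ear's HRIR.
left  = np.convolve(signal, hrir_left)
right = np.convolve(signal, hrir_right)
binaural = np.stack([left, right], axis=1)   # 2-channel output for headphones
print(binaural.shape)
```

With measured (rather than toy) HRIRs, the same two convolutions reproduce the free-field waveforms at the eardrum.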

    1) Virtual sound-source generation
   Design
       6 loudspeaker positions (azimuth, elevation):
            (-90,-60) (180,-30) (-90, 0) (30,0) (90,30) (0,60)
       3 paradigms
          Yes/No
          2AFC: AB or BA (where A is the real and B the virtual sound)

          Oddball: the odd stimulus presented in the 2nd or 3rd interval, giving 4
           possible orders: ABAA, BABB, AABA, BBAB
       50 trials per paradigm
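Whether listeners can distinguish real from virtual sources is decided against chance performance. With 50 trials per paradigm and a guessing rate of 0.5, a stdlib binomial computation shows how many correct responses are needed before guessing becomes implausible (the specific correct-counts below are illustrative, not the study's data):

```python
from math import comb

def binom_p_at_least(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the probability of getting k or
    more trials correct by guessing alone (a one-sided binomial test)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 50 trials per paradigm, as in the experiment; chance = 0.5
n = 50
for k in (25, 30, 32, 35):
    print(k, "correct -> p =", round(binom_p_at_least(n, k), 4))
```

Performance whose p-value stays large (as in the Yes/No and 2AFC paradigms here) is consistent with guessing; a small p-value (as in the oddball paradigm) indicates genuine discrimination.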

    1) Virtual sound-source generation
   Results
   Headphone effect on HRTFs
       Amplitude differences between HRTFs with and
        without headphones in front of the ears are within 5
        dB for most frequencies.
       Above 1 kHz, peaks and valleys of the HRTFs
        with and without headphones occurred at the
        same frequencies.
       Headphones had no effect on the interaural
        difference in the arrival time of the signal

       (Figure: HRTFs with headphones shown as solid lines,
        without headphones as broken lines; left ear = upper
        curves, right ear = lower curves)
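The interaural difference in arrival time (ITD) mentioned above can be estimated as the lag of the peak of the cross-correlation between the two ears' impulse responses. A sketch with toy HRIRs (the impulse positions are hypothetical; the study used measured responses):

```python
import numpy as np

fs = 44100
# Toy HRIRs: right-ear impulse at sample 10, left-ear impulse at sample 40
# (i.e. the left ear lags by 30 samples; hypothetical values).
hrir_r = np.zeros(128); hrir_r[10] = 1.0
hrir_l = np.zeros(128); hrir_l[40] = 0.7

# ITD estimate: lag of the peak of the interaural cross-correlation.
xcorr = np.correlate(hrir_l, hrir_r, mode="full")
lag = np.argmax(xcorr) - (len(hrir_r) - 1)   # samples by which left lags right
itd_ms = 1000 * lag / fs
print(f"ITD ~ {itd_ms:.2f} ms")
```

Running this estimator on HRIRs measured with and without headphones is one way to verify the "no effect on interaural arrival time" finding numerically.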

    1) Virtual sound-source generation
   Results cont’d…
   Acoustical differences:
       Differences between the real (broken lines) and
        virtual (solid lines) waveforms were very small
        (figure: HRTFs for the left ear, lower line; right ear, upper line)

   Psychophysical validation results:
       Performance in the Yes/No and 2AFC paradigms was not
        significantly different from chance; in the oddball
        paradigm it was.

  1) Virtual sound-source generation
 It is possible to accurately reproduce free-field acoustic
  waveforms at the eardrum.
 The virtual sound source is practically indistinguishable
  from an actual loudspeaker in the free field.
 These findings are in agreement with previous studies,
  even though there are many methodological differences,
  such as headphone size (Zahorik 1996, Hartmann
  and Wittenberg 1996)

2) Measured vs. Interpolated HRTFs

Second experiment: generation of virtual sound sources at
 positions for which actual HRTFs were not measured.
HRTFs are measured for a limited set of source positions,
 with algorithms implemented to allow continuous
 interpolation between positions.
How well are the intermediate positions simulated?

Wenzel & Foster (1993)        Langendijk & Bronkhorst

Non-individual HRTFs          Individual HRTFs
                              3 spatial resolutions
                              (5.6, 11.3, 22.5 deg)
                              Linear interpolation for
                              interpolated HRTFs *
                              Oddball paradigm

                    * 2 measured HRTFs differing in either azimuth or elevation;
                      4 measured HRTFs differing in both dimensions.
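The interpolation scheme above can be sketched as linear blending of 2 measured HRTFs (when only azimuth or only elevation differs) or bilinear blending of 4 (when both differ). The toy magnitude spectra below are illustrative stand-ins for measured data:

```python
import numpy as np

def interpolate_hrtf(h1, h2, w):
    """Linear interpolation between two measured HRTFs (here, magnitude
    spectra) for an intermediate position; w = 0 gives h1, w = 1 gives h2."""
    return (1 - w) * h1 + w * h2

def bilinear_hrtf(h00, h10, h01, h11, wa, we):
    """Bilinear interpolation over 4 measured positions spanning a grid
    cell, with weights wa along azimuth and we along elevation."""
    bottom = interpolate_hrtf(h00, h10, wa)
    top    = interpolate_hrtf(h01, h11, wa)
    return interpolate_hrtf(bottom, top, we)

# Toy magnitude spectra at two adjacent grid positions
f = np.linspace(0, 16000, 256)
h_a = 1.0 / (1 + (f / 4000) ** 2)    # gentle low-pass shape
h_b = 1.0 / (1 + (f / 8000) ** 2)
h_mid = interpolate_hrtf(h_a, h_b, 0.5)
print(h_mid[:3])
```

The study's finding that ~6 deg grid resolution suffices says, in effect, that this blend stays within the acoustical tolerance (1.5–2.5 dB) of the true HRTF at the intermediate position.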
          2) Measured and interpolated
   Subjects: 6 listeners with normal hearing (half experienced)
   Design:
        104 loudspeaker positions
        25 different sounds (200 Hz – 16 kHz)
        Filters long enough to include the acoustical effects of the body, head, and
         ears of the listener
        3 independent variables: grid resolution (approx. 5.6, 11.3 or 22.5 deg);
         direction of interpolation (horizontal, vertical or diagonal); amplitude
         scrambling of the stimulus
        8 target positions: (0,90) (45,45) (135,45) (0,0) (90,0) (180,0) (45,-45) and
         (135,-45)
        Conditions and target positions pooled in 6 blocks
              12 trials, 1 interpolation direction, 4 target positions, 2 scrambling
               conditions and 3 grid resolutions
        Oddball paradigm used: A = measured HRTF, B = interpolated HRTF
              ABAA, BABB, AABA, BBAB
        Subjects had to detect the interval with the oddball

        2) Measured and interpolated
   Results…
       Same auditory percept for measured and interpolated
        HRTFs when the spatial resolution is approx. 6 deg
       Acoustical differences between 1.5 and 2.5 dB are acceptable
       But differences > 2.5 dB affect the perceived position of the source ->
        another type of interpolation, e.g. cubic spline
       HRTF amplitudes for the contralateral ear are more than 15 dB below
        amplitudes for the ipsilateral HRTF at frequencies above
        3 kHz (different findings from Wenzel and Foster 1993,
        “sound localization not affected by interpolation”: they used
        non-individual HRTFs, which have a large impact on localization
        accuracy that obscured the interpolation effect)

                        - 3/4 -

   Auditory Display of Sound Source Distance
              By Pavel Zahorik, 2002

        How to reproduce sound source distance:
         acoustical and non-acoustical factors that
             contribute to perceived distance.

   Psychophysical research focused on:
     Source direction (horizontal, vertical)
     But not on the 3rd dimension: source distance

       This article describes how acoustical and non-
        acoustical factors contribute to the perception of
        sound distance.

     Acoustical                       Non-acoustical

     Intensity                        Vision (− “ventriloquism effect”;
                                       + improves accuracy with multiple
                                       targets; − coincidence)
     Direct-to-reverberant            Perceptual organization affects visual
      energy ratio                     distance and probably auditory
                                       distance too
     Spectrum                         Listener familiarity with the particular
                                       source signal
     Binaural differences

   Estimates of perceived distance
       How far away does the sound appear? Estimates in meters, walking
        towards the auditory target, magnitude estimation, paired
        comparison – different methods, same results.
       Experiments of distance judgments demonstrate the
        existence of 2 aspects of distance perception:
            Estimate bias
                  Close source distances are overestimated and far distances are often
                   substantially underestimated.
            Estimate variability
                  The majority of estimate variability is due to perceptual blur
                  Distance estimates to visual targets are less variable and highly accurate

   Correlations between the variability/bias of distance estimates
    and acoustic factors
       Exception: direct-to-reverberant energy ratio (one stimulus presentation)
       The other acoustic factors: a combination of multiple acoustic factors
   Multiple acoustic factors
       Intensity and the direct-to-reverberant ratio are more reliable cues than the others
   Framework that combines individual distance cues
       Consistent cues are “trusted” = high perceptual weight
       Unavailable or unreliable cues = less perceptual weight
       Final distance percept = weighted sum of estimates from individual cues
       Produces stable estimates of source distance under a wide range of
        acoustic conditions
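The cue-combination framework above can be sketched as a reliability-weighted sum: each cue yields its own distance estimate, and consistent cues get high perceptual weight while unavailable or unreliable cues get little. The specific cues, estimates, and weights below are illustrative, not values from the article:

```python
def combine_distance_cues(estimates, reliabilities):
    """Weighted-sum combination of per-cue distance estimates: the final
    distance percept is the sum of the individual estimates weighted by
    their (normalized) reliabilities."""
    total = sum(reliabilities.values())
    return sum(estimates[c] * reliabilities[c] / total for c in estimates)

# Hypothetical per-cue distance estimates (meters) and reliabilities
estimates     = {"intensity": 2.0, "d_to_r": 2.6, "spectrum": 5.0}
reliabilities = {"intensity": 0.5, "d_to_r": 0.4, "spectrum": 0.1}

print(f"combined distance: {combine_distance_cues(estimates, reliabilities):.2f} m")
```

Because a low-reliability cue contributes little, a wildly wrong spectrum-based estimate barely moves the combined percept, which is what makes the estimate stable across acoustic conditions.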

   Distance localization performance is not degraded by the
    use of non-individualized HRTFs
       Surprising, given the known performance-degrading effects of non-
        individualized HRTFs on directional localization.

   Implications for auditory display
       Spatial auditory displays should provide consistent changes in intensity
        and direct-to-reverberant cues (not just one cue), as in real environments
       Realistic simulation of distance does not require that the display be
        tailored to the acoustics of individual users’ heads and ears
        (individualized HRTFs are not necessary), although non-individualized
        HRTFs can have a negative impact on directional localization accuracy

   Incorporate a visual component since vision facilitates distance perception
   Use of familiar sound source signals
   Level of accuracy should be comparable to real-world situations (developers
    shouldn’t be overly optimistic in terms of perceived-distance accuracy)

                           - 4/4 -
   The Impact of Signal Bandwidth on Auditory Localization:
    Implications for the Design of Three-Dimensional Audio
                By Robert B. King, Simon R. Oldfield

What are the effects of limited signal bandwidth on sound
 localization in military 3D displays, where most military aircraft
 have communication systems that are band-limited in frequency?

   Researchers seeking to implement synthesized 3D audio displays
    (in military environments) are investigating a number of design issues:
       1. Whether digital filters based on one generalized HRTF will allow
        accurate spatial synthesis for all individuals
       2. Intelligibility and localizability of speech signals in 3D auditory displays
       3. Masking and release-from-masking effects associated with spatially
        encoded signals
       4. Effects of limited stimulus bandwidth on spatial synthesis

   HRTFs:
       Preserve the pattern of differences in time and intensity cues that occur
        between the ears.
       Preserve the spectral modifications imposed on the incident waveform by
        the head, torso, and pinnae before it reaches the basilar membrane
            Spectral features occur at frequencies between 4 kHz and 16 kHz
       Capturing these differences and spectral modifications encodes the
        auditory signal’s position in space

   Military aircraft have communication systems that are band-
    limited in frequency response
       Low pass filters, pass frequencies of up to 4 to 6 kHz

   A signal’s elevation and position (front/back) are coded by
    frequencies between 4 and 16 kHz

   Band-limiting a signal could affect the accuracy with which listeners
    can localize the elevation or position of that signal.

   Low-pass filtering: progressive removal of the high-
    frequency content of the signal
   High-pass filtering: progressive removal of the low-frequency content
   3 subjects were presented with signals progressively low-pass
    filtered from 16 kHz down to 1 kHz and progressively
    high-pass filtered from 14 kHz down to 1 kHz
   9 elevations and 4 azimuths
   Speakers situated in front of and behind the participant
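The band-limiting manipulation can be sketched with an FFT-based brick-wall filter: content above the cutoff is removed, destroying the 4–16 kHz spectral cues discussed above. The test frequencies below are illustrative; an aircraft communication channel would use a smoother analog filter, but the consequence for localization cues is the same.

```python
import numpy as np

fs = 48000
t = np.arange(fs // 10) / fs                      # 100 ms of signal
# Test signal: a 2 kHz component (below the cutoff) plus an 8 kHz
# component standing in for the elevation/front-back spectral cues.
x = np.sin(2 * np.pi * 2000 * t) + np.sin(2 * np.pi * 8000 * t)

def lowpass(x, fs, cutoff_hz):
    """Ideal (brick-wall) low-pass filter: zero all FFT bins above the
    cutoff frequency and transform back."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X[freqs > cutoff_hz] = 0
    return np.fft.irfft(X, n=len(x))

y = lowpass(x, fs, 6000)                          # band-limit at 6 kHz
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print("2 kHz amp:", spectrum[freqs == 2000][0] / (len(y) / 2))
print("8 kHz amp:", spectrum[freqs == 8000][0] / (len(y) / 2))
```

After filtering, the 2 kHz component survives at full amplitude while the 8 kHz component is gone, which is exactly the information loss the elevation/front-back localization experiments probe.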

