Research Evidence for Dynamic Soundscape Processing Benefits
2019-09-19 Matthias Froehlich, Eric Branda, Katja Freels
Modern hearing aids are very effective at restoring audibility. Signal processing has also progressed to the point that, for some listening-in-noise conditions, speech understanding for individuals with hearing loss is equal to or better than that of their peers with normal hearing.1 It is no secret, however, that an important component of the overall hearing experience is the listener's intent: the desired acoustic focus. At a noisy party, for example, we can focus our attention on a person in a different conversation group to "listen in" on what he or she is saying. While driving a car, we can divert our attention from the music playing to focus on a talker in the back seat. Our listening intentions often differ in quiet vs. noise, when we are outside vs. in our homes, or when we are moving vs. when we are still. As hearing technology improves, efforts continue to be made to automatically achieve the best possible match between the brain's intentions and the hearing aid's processing.
Siemens/Signia has led the way in the last two decades, implementing platforms which automatically steer the hearing aid processing relative to a specific listening situation:
In the early 2000s, introduced instruments that automatically switched between omnidirectional and directional processing.2
Developed automatic adaptive polar patterns that allow the null to track a moving noise source.3
Introduced automatic directional focus to the back and to the sides.4,5
Most recently, introduced narrow directionality using bilateral beamforming.6
Again, all of these features were developed to match the hearing aid user’s probable intent for a given listening situation. So what is left to do?
New Signal Processing
One area of interest centers on improving the importance functions assigned to speech and other environmental sounds when they originate from azimuths other than the front of the user, particularly when background noise is present; in general, the goal is better identification and interpretation of the acoustic scene. To address this issue, an enhanced signal classification system was recently developed for the new Signia Xperience hearing aids. This approach considers such factors as the overall noise floor, distance estimates for speech, noise, and environmental sounds, signal-to-noise ratios, the azimuth of speech, and ambient modulations in the acoustic soundscape.
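The actual classifier in the product is proprietary. Purely as an illustration, the factors listed above can be thought of as a feature vector feeding a scene classifier; the feature names and decision rules in this sketch are hypothetical and are not Signia's implementation:

```python
from dataclasses import dataclass

# Hypothetical feature set; names and thresholds are illustrative only.
@dataclass
class SceneFeatures:
    noise_floor_db: float        # overall noise floor estimate
    speech_distance_m: float     # estimated distance to dominant speech
    snr_db: float                # signal-to-noise ratio estimate
    speech_azimuth_deg: float    # direction of dominant speech (0 = front)
    ambient_modulation: float    # modulation depth of the soundscape, 0..1

def classify_scene(f: SceneFeatures) -> str:
    """Toy rule-based classifier sketching how such features might combine."""
    if f.noise_floor_db < 40:
        return "quiet"
    if f.snr_db < 0 and f.ambient_modulation > 0.5:
        return "noisy-party"
    if abs(f.speech_azimuth_deg) > 60:
        return "off-axis-speech"
    return "speech-in-noise"

print(classify_scene(SceneFeatures(55, 1.2, -3, 20, 0.7)))  # noisy-party
```

A production classifier would of course weigh many more inputs continuously rather than via hard thresholds; the point here is only that several acoustic dimensions are combined into a single scene decision.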
A second addition to the processing of the Xperience product, again intended to mimic the intent of the hearing aid user, was the inclusion of motion sensors to assist in the signal classification process, leading to a combined classification system named "Acoustic-Motion Sensors." The acceleration sensors conduct three-dimensional measurements every 0.5 milliseconds. Post-processing of the raw sensor data occurs every 50 milliseconds and is in turn used to control the hearing aid processing. In nearly all cases, when we are moving, our listening intentions are different from when we are still: we have an increased interest in what is all around us rather than a specific focus on a single sound source. Using these motion sensors, the processing of Xperience is adapted accordingly when movement is detected.
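The sampling cadence described above implies that raw three-axis readings at 0.5 ms intervals (2 kHz) are reduced to one motion decision per 50 ms block of 100 samples. A minimal sketch of that two-rate structure, with a made-up variance threshold standing in for whatever decision logic the devices actually use:

```python
import math

SAMPLE_PERIOD_MS = 0.5    # raw sensor reading interval (per the article)
BLOCK_PERIOD_MS = 50.0    # post-processing interval (per the article)
BLOCK_SIZE = int(BLOCK_PERIOD_MS / SAMPLE_PERIOD_MS)  # 100 samples per block

def is_moving(block, threshold=0.15):
    """Return True if the acceleration-magnitude variance within one 50 ms
    block exceeds a (hypothetical) threshold, i.e., the wearer is likely
    in motion. Real devices would use more sophisticated processing."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in block]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var > threshold

# Example: a still block (constant gravity vector) vs. a fluctuating one.
still = [(0.0, 0.0, 1.0)] * BLOCK_SIZE
walking = [(0.0, 0.0, 1.0 + 0.8 * (-1) ** i) for i in range(BLOCK_SIZE)]
print(is_moving(still), is_moving(walking))  # False True
```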
To evaluate the patient benefit of these new processing features, two research studies were conducted: one to evaluate the efficacy of the algorithms in laboratory testing, and a second to determine real-world effectiveness using ecological momentary assessment (EMA).
Laboratory Assessment of Acoustic-Motion Sensors
The participants were 13 individuals with bilateral, symmetrical downward-sloping mild-to-moderate hearing loss (6 males, 7 females) ranging in age from 26 to 82 (mean age 60). All were experienced users of bilateral amplification and their mean hearing loss was 30 dB at 250Hz, sloping to 64 dB at 6000 Hz.
The participants were fitted bilaterally with two different sets of Signia Pure RIC hearing aids, which were identical except that one set had the new acoustic scene classification algorithm as well as motion sensors. The hearing aids were programmed to the Signia fitting algorithm using Connexx 9.1 software, and fitted with double domes.
The participants were tested in two different listening situations. For both situations, ratings were conducted on 13-point scales ranging from 1 (Strongly Disagree) to 7 (Strongly Agree), with half-point steps in between. The ratings were based on two statements related to different dimensions of listening: speech understanding ("I understood the speaker(s) from the side well.") and listening effort ("It was easy to understand the speaker(s) from the side.").
Scenario #1 (Restaurant Condition): This scenario was designed to simulate the situation in which a hearing aid user is engaged in a conversation with a person directly in front and, unexpectedly, a second conversation partner, who is outside the field of vision, enters the conversation. This is something that might be experienced at a restaurant when a server approaches. The target conversational speech was presented from 0° azimuth (female talker; 68 dBA), and the background cafeteria noise (64 dBA) was presented from four loudspeakers surrounding the listener (45°, 135°, 225°, and 315°). The unexpected male talker (68 dBA) was presented randomly from a loudspeaker at 110°. The participants were tested with the two sets of instruments (i.e., new processing on vs. off). After each series of speech signals from the off-axis conversation partner, the participants rated their agreement using the scale described earlier.
Scenario #2 (Traffic Condition): This scenario was designed to simulate the situation when a person is walking on a sidewalk on a busy street with traffic noise (65 dBA), with a conversation partner on each side. The azimuths of the traffic noise speakers were the same as for Scenario #1, and for this testing, the motion sensor was either on or off (although the participant was seated, the motion sensor was activated to respond as if the participant were moving for the test condition). The participant faced the 0° speaker, with the speech from the conversational partners coming from 110° (male talker) and 250° (female talker) at 68 dBA. The rating statements and response scales were the same as used for Scenario #1.
Restaurant Condition: Participants had little trouble understanding the conversation from the front, with median ratings of 6.5 (maximum=7.0) for both instruments. There was no significant difference between the two types of processing (p>.05) for this talker from the front. For the talker from the side, however, there was a significant advantage (p<.05) for “new processing on” for speech understanding, and also for ease of listening. See Figure 1 for mean data.
The mean results for the traffic scenario are shown in Figure 2. Recall that in this case, the participant was surrounded by traffic noise (SNR=+3 dB) and had conversation partners on either side (110° and 250°). And again, speech understanding and listening effort were rated. This listening situation was somewhat more difficult, and therefore, overall mean ratings were slightly below the restaurant condition, but the same general pattern emerged. That is, when the new signal classification strategies were implemented, performance was significantly better (p<.05) for both speech understanding and listening effort.
For each condition, the participants were also asked if they would recommend the product just tested to a friend. A 5-point rating scale was used: 1=Definitely No, 3=Neutral, and 5=Definitely Yes. A significant advantage (p<.05) was observed for both "new processing on" and "motion sensors on." If we consider individual ratings of "Yes" and "Definitely Yes," we find positive recommendation ratings of 100% for the restaurant condition (vs. 53% for processing "off"), and 84% for the traffic condition (vs. 38% for motion sensors "off").
Real-World Assessment Using EMA
While the positive findings from the laboratory data for the new types of processing were encouraging, it was important to determine if these patient benefits extend to real-world hearing aid use. A second study, therefore, was conducted with the Xperience product involving a home trial.
The 35 participants (19 males, 16 females) in the study all had bilateral symmetrical downward-sloping hearing losses and were experienced users of hearing aids (average experience was 6 years). Their mean audiogram ranged from 29 dB at 500 Hz sloping to 62 dB at 4000 Hz. The participants, recruited from four different hearing aid dispensing offices, ranged in age from 37 to 86 years, with a mean age of 68.5 years.
The participants were fitted bilaterally with Signia Xperience Pure 312 7X RIC instruments, with vented click-sleeve ear coupling. The hearing aids were programmed to the patient’s hearing loss using the Signia Xperience fitting rationale.
The participants rated their hearing aid experience during the one-week field trial using ecological momentary assessment (EMA); that is, ratings for a real-world listening experience were conducted during or immediately after that experience. The EMA app linked the participants' smartphones to the Signia hearing aids and logged responses while the participants were answering a questionnaire. The primary EMA questions covered seven different listening environments, the actions of the user (still or moving), and the user's perceptions of the situation. The participants were trained on using the app prior to the home trial.
For the analyses, questionnaires that were only started or not fully completed were eliminated, resulting in 1,938 EMAs used for the findings reported here (an average of 55 per participant for the week-long trial). As discussed earlier, one of the primary new features of Xperience is the motion sensors integrated into the hearing aids. To evaluate the effectiveness of this feature, EMAs were examined for three different speech-understanding-in-noise conditions in which the participants reported that they were moving: noise in the home (136 EMAs), noise inside a building (153 EMAs), and noise outside (31 EMAs). The participants rated their ability to understand in these situations on a 9-point scale, ranging from 1=Nothing, through 5=Sufficient, to 9=Everything. We could assume that even a rating of #5 ("Sufficient") would be adequate for following a conversation, but for the values shown in Figure 3, we combined the ratings of #6 ("Rather Much") and higher. As would be expected, the understanding ratings for the home were the highest, but for all three of these difficult listening situations (understanding speech in background noise while walking), overall understanding was good. The highest rating of "Understand Everything" on the 9-point scale was given for 60% of the EMAs for home, 62% for inside a building, and 39% for outside.
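The percentages in Figure 3 are simply the share of EMA ratings at #6 ("Rather Much") or higher on the 9-point understanding scale. That aggregation can be sketched in a few lines; the sample ratings below are made up for illustration and are not the study data:

```python
def percent_rating_at_least(ratings, cutoff=6):
    """Percent of EMA ratings at or above the cutoff on the 9-point scale."""
    return 100.0 * sum(r >= cutoff for r in ratings) / len(ratings)

# Hypothetical 9-point understanding ratings from one listening condition.
sample_emas = [9, 7, 5, 8, 6, 4, 9, 9, 6, 3]
print(round(percent_rating_at_least(sample_emas)))  # 70
```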
A common listening situation that occurs while moving is having a conversation while walking down a busy street. For this condition, three EMA questions were central: Is the listening situation natural? Is the acoustic scene perception appropriate? What is the overall satisfaction for speech understanding? The first two of these were rated on a 4-point scale: Yes, Rather Yes, Rather No, and No. Satisfaction for speech understanding was rated on a 7-point scale similar to that used in MarkeTrak surveys: 1=Very Dissatisfied to 7=Very Satisfied. The results for these three questions for the walking-on-a-busy-street-with-background-noise condition are shown in Figure 4. Percentages are either the percent of Yes/Rather Yes answers or the percent of EMAs showing satisfaction (a rating of #5 or higher on the 7-point scale). As shown, in all cases the ratings were very positive. Perhaps most notable was that 88% of the EMAs reported satisfaction for speech understanding in this difficult listening situation.
As discussed earlier, in addition to the motion sensors, there also was a new signal classification and processing system developed for the Xperience platform (Dynamic Soundscape Processing), with the primary goal of improving speech understanding from varying azimuths together with ambient awareness. Several of the EMA questions were geared to these types of listening experiences.
The participants rated satisfaction on a 7-point scale, the same as commonly has been used for EuroTrak and MarkeTrak. If we take the most difficult listening—understanding speech in background noise—the EMA data revealed satisfaction of 92% for Xperience. We can compare this to other large-scale studies. The EuroTrak satisfaction data for this listening category differs somewhat from country to country, but in all cases, falls significantly below Xperience. For example, the 2019 Norway data reveals only 51% satisfaction, the 2018 Germany satisfaction rate was 64%, and the 2018 UK satisfaction was 69%.
The findings of MarkeTrak10 recently became available, making it possible to compare the Xperience EMA results to these survey findings. The MarkeTrak10 data used for comparison here were from individuals whose hearing aids were 1 year old or newer. While the EMA questions were not worded exactly like the questions on the MarkeTrak10 survey, they were very similar and therefore provide a meaningful comparison. Shown in Figure 5 are the percentages of satisfaction (combined ratings for Somewhat Satisfied, Satisfied, and Very Satisfied) for overall satisfaction and for three different common listening situations. We did not have EMA questions differentiating small groups from large groups, but MarkeTrak10 does; its findings were 83% satisfaction for small groups and 77% for large groups, so the MarkeTrak value shown for this listening situation in Figure 5 is 80%, the average of the two. In general, satisfaction ratings for Xperience were very high and exceeded those from MarkeTrak10, even when comparing to the rather strong baseline of hearing aids less than 1 year old, and even though most of the EMA questions were answered in situations with noise.
Summary and Conclusions
As technology advances, we continue to design hearing aid processing that more closely matches the listening intent of the user. This might involve focus on speech other than that from the front, enhanced ambient awareness, and the specific listening needs of a hearing aid user who is moving. As reported here, the Signia Xperience provides very encouraging results in all of these areas. Laboratory data show significantly better understanding for speech from the sides, both when stationary and when moving. Real-world studies using EMA methodology revealed highly satisfactory environmental awareness and higher overall user satisfaction ratings than have been obtained in either EuroTrak or the recent MarkeTrak10. Overall, for both efficacy and effectiveness, the performance of the Signia Xperience hearing aids was validated, and increased patient benefit and satisfaction are expected to follow.
References
1. Froehlich M, Freels K, Powers TA. Speech recognition benefit obtained from binaural beamforming hearing aids: comparison to omnidirectional and individuals with normal hearing. AudiologyOnline. 2015; Article 14338. Available at: http://www.audiologyonline.com
2. Powers TA, Hamacher V. Three-microphone instrument is designed to extend benefits of directionality. Hearing Journal. 2002;55(10):38-45.
3. Ricketts T, Hornsby BY, Johnson EE. Adaptive directional benefit in the near field: competing sound angle and level effects. Seminars in Hearing. 2005;26(2):59-69.
4. Mueller HG, Weber J, Bellanova M. Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. International Journal of Audiology. 2011;50(4):249-254.
5. Chalupper J, Wu Y, Weber J. New algorithm automatically adjusts directional system for special situations. Hearing Journal. 2011;64(1):26-33.
6. Herbig R, Froehlich M. Binaural beamforming: the natural evolution. Hearing Review. 2015;22(5):24.
Matthias Fröhlich, Ph.D.
Dr. Matthias Froehlich is global audiology strategy expert for WS Audiology in Erlangen, Germany. He is responsible for the definition and validation of the audiological benefit of new hearing instrument platforms. Dr. Froehlich joined WS Audiology (then Siemens Audiology Group) in 2002 and has since held various positions in R&D, Product Management, and Marketing. He received his Ph.D. in Physics from Goettingen University, Germany.
Eric Branda, Au.D.
Dr. Branda is an audiologist and director of product management for Sivantos, Inc. He specializes in bringing new product innovations to market, helping Sivantos fulfill its goal of creating advanced hearing solutions for all types and degrees of hearing loss. Dr. Branda received his Doctor of Audiology degree from the Arizona School of Health Sciences and his master’s degree in audiology from the University of Akron.
Katja Freels, Dipl.-Ing.
Ms. Freels has been a research and development audiologist at WS Audiology in Erlangen, Germany, since 2008. Her main responsibilities include the coordination of clinical studies and research projects. Prior to joining WS Audiology (then Siemens Audiology Group), Ms. Freels worked as a dispensing audiologist. She studied Hearing Technology and Audiology at the University of Applied Sciences in Oldenburg, Germany.