AutoSense OS White Paper

CLINICAL RESEARCH

CLINICAL RESEARCH

AutoSense OS™ 3.0 Operating System


Allowing Marvel CI users to seamlessly connect with the moments they love

INTRODUCTION

As cochlear implant (CI) recipients move through their day, they encounter a variety of acoustic environments, each of which places unique listening demands upon them. AutoSense OS™ 3.0, first introduced in Phonak hearing aids, is an operating system designed to automatically steer sound processing, optimizing hearing performance and recipient comfort while minimizing the need for manual changes1,2. In the Advanced Bionics Marvel CI sound processors, Naída CI M and Sky CI M, AutoSense OS 3.0 and AutoSense Sky OS 3.0 have been adapted and optimized for CI users.

HOW DOES AUTOSENSE OS 3.0 WORK?

AutoSense OS 3.0 leverages artificial intelligence that was trained using modern machine learning methods. Training this system for classification of listening environments involved analysis of thousands of sound scenes that reflect the broad range of acoustic experiences encountered during daily life. The result is a system that monitors acoustic parameters, including level differences, estimated signal-to-noise ratios, synchrony of temporal onsets across frequency bands, amplitude, and spectral information. Every 0.4 seconds, the algorithm calculates the statistical probability that the listening environment matches one of seven pre-defined sound classes. Based on these probabilities, AutoSense OS 3.0 determines a combination or blend of sound classes that best matches the listener's environment. The appropriate blend of sound classes, features, and settings for that environment is then automatically activated. This optimizes sound quality and speech intelligibility by applying sound processing intelligently based on real-time analysis of the listening environment.

Acoustic environments are classified into seven different sound classes: Calm Situation, Speech in Noise, Speech in Loud Noise, Speech in Car, Comfort in Noise, Comfort in Echo, and Music (see Figure 1). Of these, Speech in Loud Noise, Music, and Speech in Car are exclusive classes, meaning that they cannot be blended with other sound classes. The other four sound classes can be activated either exclusively or as a blend, with up to three sound classes blended at once. For example, Comfort in Echo, Comfort in Noise, and Speech in Noise may be blended when the listener is having a conversation in a moderately noisy environment with high ceilings. This advanced technology allows AutoSense OS 3.0 to create over 200 possible settings.

Figure 1: Acoustic and streaming classification in AutoSense OS 3.0.
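The selection behavior described above (exclusive classes chosen outright; up to three blendable classes mixed by probability) can be sketched as follows. This is an illustrative sketch only: the sound-class names and the exclusive/blendable split come from the text, but the selection rule, function names, thresholds, and probability values are our own assumptions, not AB's proprietary algorithm.

```python
# Hypothetical sketch of probability-driven class selection.
# Class names are from the white paper; the decision logic is invented.

EXCLUSIVE = {"Speech in Loud Noise", "Music", "Speech in Car"}
BLENDABLE = {"Calm Situation", "Speech in Noise",
             "Comfort in Noise", "Comfort in Echo"}

def select_blend(probs, max_blend=3):
    """Pick one exclusive class outright, or blend up to three blendable ones.

    probs: dict mapping each of the seven sound-class names to a probability.
    Returns a dict of class -> weight, with weights summing to 1.
    """
    top = max(probs, key=probs.get)
    if top in EXCLUSIVE:
        # Exclusive classes are never blended with other classes.
        return {top: 1.0}
    # Keep the most probable blendable classes and renormalize their weights.
    ranked = sorted(BLENDABLE, key=probs.get, reverse=True)[:max_blend]
    total = sum(probs[c] for c in ranked)
    return {c: probs[c] / total for c in ranked}

# Example: a conversation in a moderately noisy, echoic room.
probs = {"Calm Situation": 0.05, "Speech in Noise": 0.40,
         "Speech in Loud Noise": 0.05, "Speech in Car": 0.02,
         "Comfort in Noise": 0.25, "Comfort in Echo": 0.20, "Music": 0.03}
blend = select_blend(probs)
```

In this example the three strongest blendable classes (Speech in Noise, Comfort in Noise, Comfort in Echo) are mixed, mirroring the high-ceiling conversation scenario in the text.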


These settings are designed to optimize listening based on the CI user's dynamic sound environment and the settings programmed by the professional. In addition to classifying acoustic environments, AutoSense OS 3.0 is the first classifier in the industry to classify streamed audio signals. Input is classified as Media Speech or Media Music, and the appropriate parameters are adjusted to optimize the listener's audio streaming experience, whether they are watching TV, listening to music, or streaming podcasts.

Clinicians can review the proportion of time each sound class (acoustic and streaming) is active via the datalog section of the Target CI programming software. Figure 2 shows sample data from a Marvel CI processor fit on the left ear of a recipient.

Figure 2: Example of information regarding AutoSense OS 3.0 activity from a unilateral CI recipient's datalog, as viewed in the Target CI fitting software.

Microphone Modes:
- Real Ear Sound (RES): mimics the response and benefit of the T-Mic15-18 by mildly attenuating sounds from behind the listener19.
- UltraZoom: adaptively attenuates noise sources located on the sides and back to improve face-to-face speech understanding in noise20-25.
- UltraZoom + SNR Boost: UltraZoom plus additional noise cancellation based on the location of noise26.
- StereoZoom: creates a narrow focus for front-facing communication in extreme noise for bilateral or bimodal recipients21,24.

Front-end Processing Features (FEPs):
- NoiseBlock: reduces bothersome background noise (for M Acoustic Earhook users).
- WindBlock: reduces the bothersome impact of wind noise.
- SoundRelax: reduces the bothersome impact of sudden impulse sounds.
- EchoBlock: reduces the bothersome impact of reverberation.

Table 1: Automatic microphone modes and front-end processing features available in the Marvel CI sound processor system.
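The datalog proportions that clinicians review in Target CI (Figure 2) can be illustrated with a short sketch that aggregates per-frame classification decisions into percent-of-time-active figures. Only the 0.4-second classification interval comes from the text; the frame data, function name, and aggregation logic are invented for illustration and do not represent how Target CI actually computes its datalog.

```python
from collections import Counter

FRAME_S = 0.4  # classification interval from the white paper

def datalog_proportions(frames):
    """Aggregate per-frame sound-class decisions into percent of time active.

    frames: iterable of sound-class names, one per 0.4 s classification frame.
    Returns a dict of class -> percentage of logged time.
    """
    counts = Counter(frames)
    total = sum(counts.values())
    return {cls: 100.0 * n / total for cls, n in counts.items()}

# Hypothetical log: 300 calm frames, 150 speech-in-noise frames, 50 music frames.
frames = (["Calm Situation"] * 300 + ["Speech in Noise"] * 150 + ["Music"] * 50)
pct = datalog_proportions(frames)
logged_seconds = len(frames) * FRAME_S  # 500 frames -> 200 s of listening
```

With these made-up frames, Calm Situation accounts for 60% of logged time, Speech in Noise 30%, and Music 10%.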

MARVEL CI MICROPHONE MODES AND FRONT-END SOUND CLEANING FEATURES

Once finished classifying the recipient's listening environment, AutoSense OS 3.0 activates the microphone mode and front-end processing feature(s) that are appropriate for that environment (Table 1). These features are complemented by AB sound processing technologies and proven noise reduction features like ClearVoice3-7, SoftVoice8,9, industry-leading wide Input Dynamic Range (IDR)10,11, dual-loop Automatic Gain Control (AGC)12,13, and Span14.

Figure 3 displays an overview of the microphone modes and front-end processing features that are activated within each acoustic sound class of AutoSense OS 3.0. The strength and the blend of the microphone modes and front-end processing features change depending on the environment, which allows for optimized listening all day, every day. The seamless and automatic activation of these features is designed to allow CI users to move from one listening environment to the next without the need to manually adjust their sound processor settings as they go. Recipients do not need to manage various programs for different listening environments.

Figure 3: AutoSense OS 3.0 sound classes and associated microphone modes and front-end processing features (FEPs). Note that default strength settings for FEPs may vary by sound class.

Figure 4: Speech recognition scores (percent correct, 0-100) in quiet and in noise with AutoSense OS 3.0 ON and OFF with the Marvel CI sound processor. Performance in quiet was similar with AutoSense OS 3.0 ON and OFF. Performance in noise was better for all subjects (a 25-percentage-point improvement) with AutoSense OS 3.0 ON relative to AutoSense OS 3.0 OFF.30

CLINICAL RESULTS WITH AUTOSENSE OS 3.0
Studies with Phonak Marvel hearing aid users have demonstrated better speech perception in real-world listening conditions when using a program with AutoSense OS 3.0 technology1,2. Participants in these studies preferred the AutoSense OS 3.0 program to their manually selected program in all listening conditions. As part of the Marvel CI development, multiple clinical studies provided evidence for an improved listening experience with AutoSense OS 3.0 in Advanced Bionics CI recipients27,28,29.

In one study completed at the labs of Advanced Bionics, LLC30, ten adult CI recipients underwent speech recognition tests in quiet and in noise. Target stimuli (AzBio sentences at 65 dBA) were presented from a loudspeaker at 0 degrees azimuth. Noise (multi-talker babble at +5 dB SNR) was presented from a loudspeaker positioned at 180 degrees.

Figure 4 shows speech recognition scores in quiet and in noise with AutoSense OS 3.0 set to OFF and ON. As hypothesized, performance was equivalent in quiet, and an improvement of 25 percentage points was observed in noise with AutoSense OS 3.0 set to ON, demonstrating the benefit provided when AutoSense OS 3.0 adapts to the user's listening environment.

AUTOSENSE SKY OS 3.0

AutoSense Sky OS 3.0, available on the Sky CI M, is the world's first operating system for a CI sound processor designed for children31,32. AutoSense Sky OS 3.0 was developed specifically for the listening environments children experience, to ensure that kids and teens have the optimal listening experience in their unique listening situations, including classrooms, libraries, the playground, and when listening to music. AutoSense Sky OS 3.0 applies customized settings without the need for manual adjustments, so kids and their parents can focus on more of life's great adventures and let the sound processor do the work.

SUMMARY

AutoSense OS 3.0 and AutoSense Sky OS 3.0 combine innovations from AB and Phonak that are designed to provide adult and pediatric recipients with a powerful hearing experience and the ultimate ease of use throughout the day, reducing the need to manually switch programs. AutoSense OS 3.0 provides improved speech recognition in noisy listening environments by seamlessly activating the microphone modes and processing features that are appropriate for the environment in real time. The automatic adaptation to the listening environment enabled by AutoSense OS 3.0 allows Marvel CI users to stay fully connected to the world, rather than focusing on how their CI sound processor is functioning.
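The presentation levels reported in the study under Clinical Results imply a fixed babble level, and the headline result is a difference in percent-correct scores. A minimal worked check of that arithmetic follows; the OFF/ON scores are hypothetical placeholders, since the paper reports only the 25-point difference.

```python
# Arithmetic check on the study configuration reported in the text.
# Level and SNR values are from the paper; variable names and the
# example OFF/ON scores are ours.

target_level_dba = 65.0   # AzBio sentences from 0 degrees azimuth
snr_db = 5.0              # multi-talker babble at +5 dB SNR, 180 degrees

# SNR (dB) = target level (dB) - noise level (dB), so the babble was at:
noise_level_dba = target_level_dba - snr_db  # 60.0 dBA

# A 25-percentage-point gain means, for example, 55% -> 80% correct in noise
# (hypothetical scores; only the difference is reported in the paper):
score_off, score_on = 55.0, 80.0
improvement_points = score_on - score_off
```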
REFERENCES

1. Rodrigues T, Liebe S. (2018). Phonak AutoSense OS™ 3.0: The New & Enhanced Automatic Operating System. Phonak Insight, retrieved from www.phonakpro.com/evidence, accessed Jan 22nd, 2021.

2. Rakita L, Jones C. (2015). Performance and Preference of an Automatic Hearing Aid System in Real-World Listening Environments. Hearing Review. 22(12): 28.

3. Buechner A, Brendel M, Saalfeld H, Litvak L, Frohne-Buechner C, Lenarz T. (2010). Results of a Pilot Study With a Signal Enhancement Algorithm for HiRes 120 Cochlear Implant Users. Otology & Neurotology. 31(9): 1386-90.

4. Advanced Bionics. (2012). ClearVoice Clinical Results. White Paper.

5. Koch DB, Quick A, Osberger MJ, Saoji A, Litvak L. (2014). Enhanced Hearing in Noise for Cochlear Implant Recipients: Clinical Trial Results for a Commercially Available Speech-Enhancement Strategy. Otology & Neurotology. 35(5): 803-809.

6. Dingemanse JG, Goedegebure A. (2015). Application of Noise Reduction Algorithm ClearVoice in Cochlear Implant Processing: Effects on Noise Tolerance and Speech Intelligibility in Noise in Relation to Spectral Resolution. Ear and Hearing. 36(3): 357-367.

7. Wolfe J, Morais M, Schafer E, Agrawal S, Koch D. (2015). Evaluation of Speech Recognition of Cochlear Implant Recipients Using Adaptive, Digital Remote Microphone Technology and a Speech Enhancement Sound Processing Algorithm. Journal of the American Academy of Audiology. 26(5): 502-508.

8. Holden LK, Firszt JB, Reeder RM, Dwyer NY, Stein AL, Litvak LM. (2018). Evaluation of a New Algorithm to Optimize Audibility in Cochlear Implant Recipients. Ear and Hearing. 40(4): 990-1000.

9. Stein AL, Litvak LM, Kim E, Norris J. (2018). Improved Perception of Soft Sounds for Cochlear Implant Recipients. Presentation at the Emerging Issues in Cochlear Implantation - CI2018. 7-10 March, Washington, DC.

10. Spahr AJ, Dorman MF, Loiselle LH. (2007). Performance of Patients Using Different Cochlear Implant Systems: Effects of Input Dynamic Range. Ear and Hearing. 28(2): 260-275.

11. Holden LK, Reeder RM, Firszt JB, Finley CC. (2011). Optimizing the Perception of Soft Speech and Speech in Noise with the Advanced Bionics Cochlear Implant System. International Journal of Audiology. 50(4): 255-269.

12. Firszt JB, Holden LK, Skinner MW, Tobey EA, Peterson A, Gaggl W, Runge-Samuelson CL, Wackym PA. (2004). Recognition of Speech Presented at Soft to Loud Levels by Adult Cochlear Implant Recipients of Three Cochlear Implant Systems. Ear and Hearing. 25(4): 375-387.

13. Boyle PJ, Büchner A, Stone MA, Lenarz T, Moore BC. (2009). Comparison of Dual-Time-Constant and Fast-Acting Automatic Gain Control (AGC) Systems in Cochlear Implants. International Journal of Audiology. 48(4): 211-21.

14. Saoji A, Litvak L, Boyle P. (2010). SPAN: Improved Current Steering on the Advanced Bionics CII and HiRes90K System. Cochlear Implants International. 11(Suppl 1): 465-8.

15. Gifford RH, Revit LJ. (2010). Speech Perception for Adult Cochlear Implant Recipients in a Realistic Background Noise: Effectiveness of Preprocessing Strategies and External Options for Improving Speech Recognition in Noise. Journal of the American Academy of Audiology. 21(7): 441-451.

16. Kolberg ER, Sheffield SW, Davis TJ, Sunderhaus LW, Gifford RH. (2015). Cochlear Implant Microphone Location Affects Speech Recognition in Diffuse Noise. Journal of the American Academy of Audiology. 26(1): 51-58.

17. Agrawal S. (2014). Technologies for Improving Speech Understanding in Noise in Cochlear Implant Recipients. Presentation at the 14th Symposium on Cochlear Implants in Children. 11-13 December, Nashville, TN.

18. Advanced Bionics. (2015). Advanced Bionics Technologies for Understanding Speech in Noise. White Paper.

19. Chen C, Stein AL, Milczynski M, Litvak LM, Reich A. (2015). Simulating Pinna Effect by Use of the Real Ear Sound Algorithm in Advanced Bionics CI Recipients. Conference on Implantable Auditory Prostheses. July 12-17, Lake Tahoe, CA.

20. Buechner A, Dyballa K-H, Hehrmann P, Fredelake S, Lenarz T. (2014). Advanced Beamformers for Cochlear Implant Users: Acute Measurement of Speech Perception in Challenging Listening Conditions. Wanunu M, ed. PLoS ONE. 9(4): e95542.

21. Advanced Bionics. (2015). Auto UltraZoom and StereoZoom Features: Unique Naída CI Q90 Solutions for Hearing in Challenging Listening Environments. White Paper.

22. Geißler G, Arweiler I, Hehrmann P, Lenarz T, Hamacher V, Büchner A. (2015). Speech Reception Threshold Benefits in Cochlear Implant Users with an Adaptive Beamformer in Real Life Situations. Cochlear Implants International. 16(2): 69-76.

23. Agrawal S. (2016). Effectiveness of an Automatic Directional Microphone for Cochlear Implant Recipients. Presentation at the 43rd Annual Scientific and Technology Conference of the American Auditory Society. 3-5 March, Scottsdale, AZ.

24. Ernst A, Anton K, Brendel M, Battmer RD. (2019). Benefit of Directional Microphones for Unilateral, Bilateral and Bimodal Cochlear Implant Users. Cochlear Implants International. 20(3): 147-157.

25. Dorman MF, Natale SC, Agrawal S. (2020). The Benefit of Remote and On-Ear Directional Microphone Technology Persists in the Presence of Visual Information. Journal of the American Academy of Audiology. DOI: 10.1055/s-0040-1718893. Online ahead of print.

26. Naída S and Zoom Technology: State of the Art Directionality for Power Users. (2011). Field Study News, retrieved from www.phonakpro.com/evidence, accessed January 22, 2021.

27. Advanced Bionics. Data on file. D000028997.

28. Advanced Bionics. Data on file. D000029084.

29. Buchner A. (2021). Investigation of the Naída CI M90 Sound Processor in Various Acoustic Scenarios. Presented at the Audiological Advisory Council meeting. 27-28 January, Germany.

30. Advanced Bionics. Data on file. D000028990.

31. Feilner M, Rich S, Jones C. (2016). Automatic and Directional for Kids - Scientific Background and Implementation of Pediatric Optimized Automatic Functions. Phonak Insight, retrieved from www.phonakpro.com/evidence, accessed Jan 22nd, 2021.

32. Advanced Bionics. Data on file. D000023832.

Advanced Bionics AG
Laubisrütistrasse 28, 8712 Stäfa, Switzerland
T: +41.58.928.78.00
F: +41.58.928.78.90
info.switzerland@AdvancedBionics.com

Advanced Bionics LLC
28515 Westinghouse Place
Valencia, CA 91355, United States
T: +1.877.829.0026
T: +1.661.362.1400
F: +1.661.362.1500
info.us@AdvancedBionics.com

For information on additional AB locations, please visit AdvancedBionics.com/contact

027-N285-02 ©2021 Advanced Bionics AG and affiliates. All rights reserved.
