The newest hearing aid platform from Oticon promises better speech understanding through its Deep Neural Network (DNN). Oticon has trained its newest hearing aids to identify sounds by feeding the network millions of sounds from thousands of environments, allowing More to recognize and adapt to almost any sound scene.
During the development of the More platform, Oticon used EEG testing to track patients' brain activity. Based on the strength of the EEG signal, the research showed that MoreSound Intelligence technology made the sound scene 60% clearer for patients.
More even improves on the already excellent OPN S: EEG testing revealed that More delivers 30% more sound to the brain and increases speech understanding by another 15%.
How It Works
MoreSound Intelligence runs first, processing the sound environment to provide clear contrast and balance for all sounds.
The MoreSound Amplifier then takes this information and precisely amplifies the sounds in a way that makes it easy for the patient to orient and focus.
MoreSound then scans the full sound scene 500 times per second to capture an analysis of all sounds.
It then calculates the signal-to-noise ratio and noise levels to determine the complexity of the environment.
Finally, it benchmarks the level of complexity against the patient's personal listening preferences.
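To make the analysis steps above concrete, here is a minimal sketch of one analysis pass: estimating a signal-to-noise ratio and mapping it, together with the noise level, to a coarse complexity label. The function names, thresholds, and labels are illustrative assumptions for this sketch, not Oticon's actual algorithm or values.

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels from average power estimates."""
    return 10 * math.log10(signal_power / noise_power)

def scene_complexity(snr, noise_level_db):
    """Map SNR and absolute noise level to a coarse complexity label.

    The thresholds below are invented for illustration only.
    """
    if snr > 15 and noise_level_db < 50:
        return "simple"
    if snr > 5:
        return "moderate"
    return "complex"

# One analysis pass; in the hearing aid, an analysis like this
# would run on the order of 500 times per second.
snr = snr_db(signal_power=1.0, noise_power=0.1)  # 10 dB
label = scene_complexity(snr, noise_level_db=65)
```

In a real device this loop would feed the complexity estimate, checked against the wearer's stored listening preferences, into the amplification stage described above.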