Oticon Intent review at 3 weeks

One last question: Can you hear the TV or use the phone WITHOUT streaming and still have excellent comprehension? I find that my comprehension is above 95% when I stream to the TV, but if I’m in front of it without streaming, it drops to about 70% depending on the show. Many shows have VERY LOUD music or ambient sounds so we are mentally “hunched forward” to catch the rapid-fire dialogue. :neutral_face:

Is there something particular to the TV or the speakers being used that causes aids to struggle? Can they be improved without a streamer?

You can buy an Oticon EduMic and a Phonak Roger X receiver for the EduMic - it will be a bridge between all your Phonak accessories and your Oticon hearing aids.

1 Like

I’ve been trialling the Intents for 1 week now. I am comparing them to the ReSound LiNX 3D (my current aids) and the Phonak Lumity 90s (trialled for 10 days). I don’t have my audiogram up yet; my hearing loss is moderate to severe in the mid to high frequencies.

  • Speech in noise is the biggest improvement over both the other aids. I went to the restaurant today where I struggle to hear anyone, and the performance was a significant improvement.
  • I can also comprehend speech on the TV better (I am not streaming this)
  • Streaming of audio and calls is great, but you need a phone with BT LE (I have a Pixel 8). You can adjust the streaming EQ separate from the program EQ, and you can also choose which program to use when streaming (you cannot do this with the Phonaks).
  • I have lost the BT connection twice so far. I had to disconnect and reconnect; all was OK after that.
  • IMO the Companion app is not as good as the Phonak or ReSound equivalents.

My biggest issue is with the Music program, which I find to be truly awful compared to Phonak and ReSound (ReSound is the best). But I seem to be an outlier here, and I will be working with my audi to come up with something more suitable for me.

Overall the Intents are a contender for me. But I am trialling the ReSound Nexias next week.

4 Likes

Good luck sorting out the music program. Otherwise your experience seems to parallel mine. And, yes, at least for me the Oticon app borders on useless for anything other than checking battery levels.

Many folks have been able to sort out the MyMusic program on the Oticon aids to their satisfaction, but it might take some tinkering. Oticon did something they consider special with the MyMusic program (they have a whole whitepaper about it, if anybody is interested in reading up), and they cited that many survey respondents were in favor of the result. But musicians don’t seem to find it very likable, because Oticon may have added too much coloration to the gain profile instead of keeping it more authentic to preserve the integrity of the music.

@flashb1024 is one of the forum members who had to tweak his MyMusic program, but did it successfully to his satisfaction. You might want to PM him for details.

If you look at the actual music reproduction, it ‘really’ dumps in a lot of HF gain (4-8 kHz), practically to the limit of the receiver. It’s also far more linear (less compressed), and the feedback management/processing is absent.

The main issue for me would be that this is fine in the middle of the receiver’s performance range, with a moderate loss and normal canal resonances, but you could easily get into a situation where things become unstable or saturate the receiver, resulting in some nasty peak distortion.
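To make the saturation point concrete: when added gain pushes a signal’s peaks past the receiver’s output limit, the peaks clip flat, which is heard as distortion. A minimal numerical sketch with made-up gain figures and an arbitrary limit of 1.0 (not any real receiver’s specs):

```python
import math

def receiver_output(samples, gain_db, limit=1.0):
    """Apply gain in dB, then hard-clip at the receiver's output limit."""
    gain = 10 ** (gain_db / 20)
    return [max(-limit, min(limit, s * gain)) for s in samples]

# A test tone peaking at half the receiver's headroom.
tone = [0.5 * math.sin(2 * math.pi * 5 * k / 1000) for k in range(1000)]

moderate = receiver_output(tone, 3)    # peak ~0.71: still clean
excessive = receiver_output(tone, 12)  # peak would be ~1.99: clipped flat

print(max(moderate))   # below the limit, undistorted
print(max(excessive))  # pinned at the limit: audible peak distortion
```

The 12 dB case squares off the waveform at the limit, which is exactly the “nasty peak distortion” scenario: the extra HF gain requested is physically unavailable.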

2 Likes

I’m happy with these new techniques that seem to be working wonders! My point is, neural networks are biological; silicon chips are not. In fact, actual neural networks have nothing in common with this technology. Chips are digital; neurons are not. Neural networks are more complex: they can heal in some instances. You can’t turn a brain entirely off for 8 hours and then start it up again, yet we do that all the time with our HAs. I could go on.
This is akin to calling a radio a deep hammer, anvil, and stirrup cochlear nerve device. A radio is a great device, but it’s not anything like the latter.
I mentioned the advertisers, who are using “deep neural network” as a metaphor for “this is all highly advanced and has to do with the brain, so you know it’s smart. It’s DEEP!!! It involves neural networks!! Golly!!”

I don’t think you get a neural net in the HAs; they use one to develop the programming that goes into a controller in the HAs. Once that is programmed, it is fixed, not learning any more. It will adapt to your surroundings the way it was programmed to.

WH

1 Like

Since you’re making assertions about how non-Oticon aids work, I trust that you’re up-to-date on the current technology used by all brands, not just Oticon?

2 Likes

Of course it’s obvious to anyone with a decent brain that the neural network advertised in hearing aids or other silicon-based devices is not real biological brain matter (neurons). I don’t think you have to worry about anybody who reads this forum being dumb enough to think that, so much so that you feel clarification is needed here. What’s the phrase? → “It goes without saying”.

Everyone understands that Oticon is talking about MODELING here, to mimic how the brain learns from data and adapts to improve. That’s why it’s called artificial intelligence and not real intelligence.

But you can’t say that there’s nothing in common between how a biological neural network works and the mathematical modeling of one. Both use massive amounts of real-world data to learn, adapt, and get better at what they are doing. Just as the biological neurons in the brain are linked together with nerve connections to communicate, evaluate, and process input data from sensory faculties like the ears and eyes, and to store learned information; the modeled neural cells are likewise linked together, hold stored data, and communicate, evaluate, and process input that the hearing aids’ mics have collected and quantized as digital data.
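The parallel described above (simple units linked by weighted connections, processing quantized input) can be sketched in a few lines. This is a generic textbook artificial neuron, purely illustrative; the input values and weights are made up and have nothing to do with Oticon’s actual model:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, squashed by a
    sigmoid. The weights play the role of connection strengths; adjusting
    them is what 'learning' means for such a model."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid activation, output in (0, 1)

# Hypothetical quantized mic samples feeding a 3-input neuron.
mic_frame = [0.2, -0.5, 0.8]
weights = [1.5, -0.7, 2.0]          # fixed after training
bias = -0.5

activation = neuron(mic_frame, weights, bias)
print(round(activation, 3))         # ≈ 0.852
```

A real network stacks many such units in layers, but each unit is no more than this: arithmetic on numbers, which is why the “modeling, not biology” distinction in the post above matters.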

The science of mathematical modeling (then translated into silicon implementation) is a very fundamental technique that is used to mimic how nature solves issues in the most efficient ways.

1 Like

From what I’ve read, there’s no way to extract the “lessons learned” from a neural net. Assuming that’s true, there must be a trained neural net on the aids.

1 Like

The neural net forms a matrix which can be implemented in silicon. They’ve been doing this for years.

WH

I know you really like to cherry pick to find faults with what I say because apparently there’s been a long history of your doing so. But that’s OK, I don’t mind your frequent challenges and I’ve always answered them.

I would say that I know enough in general about the technologies used by other hearing aid brands to compare them, in general terms, with what Oticon does differently. I never claimed that I know other HAs’ technologies as well as Oticon’s. If any comparison I make between Oticon’s and other aids’ technologies is wrong, I’m sure there are knowledgeable people on this forum who would not be afraid to correct me. And I wouldn’t be afraid to stand corrected on anything I say or assume about other aids’ technologies that turns out to be wrong.

But I don’t have to know other aids’ technologies intimately before I can make comparisons between theirs and Oticon’s. Just a general enough understanding will do for me.

3 Likes

Well, no. If “no one is dumb enough to believe the claim,” why would advertisers make it? In any case, you go on to write that “the brain learns from data and adapts to improve.” Sorry, this has it exactly backwards. Even though you admit that so-called “deep neural net” chips have nothing to do with REAL biological neural networks, you claim that the brain operates like a computer program: again, “learns from data and adapts to improve.” That phrase does indeed describe how the chips in a HA operate, except (and this is important) chips can’t “learn” anything. Not in the human sense. Chips can change how sound is processed according to mathematical parameters. They can’t learn the way a child learns not to touch a hot stove, or comes to appreciate fully the works of Shakespeare, which involves emotion as well as mind. Chips are dumb. Chips are dead. Chips are silicon. Rocks can’t “learn.”
Again: biological neural nets aren’t data-processing machines, though that’s the position of the AI researchers, I suppose. Ask yourself: when you admire a sunset, or embrace your child, or make love, or write a poem, are you a data-processing machine?
More to the point: when you listen to a late Beethoven string quartet, are you a data-processing machine?

1 Like

I’m probably using the wrong terminology. But is the matrix in some sense a “read-only neural net” that supports lookups but doesn’t have the data structures and links to allow updating i.e. additional learning?

Yes, once the training is done, they freeze it and implement the results in inexpensive, low-power hardware, probably an ASIC. It reacts to the inputs in the way it was trained to respond.
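The “read-only” idea above can be sketched directly: after training, the learned weights are just fixed numbers, and running the net is nothing but multiply-accumulate arithmetic, which is exactly what cheap dedicated silicon is good at. A toy one-layer version with made-up weights (not anyone’s actual implementation):

```python
def infer(frame, weights, biases):
    """Run one frozen (read-only) layer: a matrix-vector multiply plus bias.
    Nothing in this function updates the weights, so no further learning
    can occur; identical inputs always produce identical outputs."""
    return [
        sum(w * x for w, x in zip(row, frame)) + b
        for row, b in zip(weights, biases)
    ]

# Weights captured at the end of training (illustrative values only).
W = [[0.5, -0.2],
     [0.1,  0.9]]
B = [0.0, -0.3]

out = infer([1.0, 2.0], W, B)
print(out)   # deterministic: same frame in, same numbers out
```

In hardware the same matrix would typically be baked in as fixed-point constants, which is why the aid can only “adapt” in the ways its frozen training anticipated.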

WH

@jeffrey, you might enjoy Wired. It’s a magazine about technology for non-technical as well as technical readers. It’s about philosophy and social issues around technology, not bits and bytes.

So you keep up to date on the latest-and-greatest technology from all the manufacturers, in enough detail to compare them with Oticon’s latest? Just what you’ve said about AutoSense-like technologies seems clearly wrong. I know ReSound doesn’t use that paradigm. Even Phonak, with the Lumity, is moving toward a single program that can handle multiple situations.

You keep insisting, falsely, that Oticon claims there’s a human brain inside their hearing aids. Oticon has never claimed that. What they claim is that they implemented a MODEL that mimics the deep neural network that exists in the brain.

Name one person on this forum that says chips can learn in the human sense, me included. Please provide direct quotes. I think you’re putting words in people’s mouths, and Oticon’s mouth, too, so that you can make everybody look bad.

The Oticon humans did the training and learning, not the chips. The Oticon humans created a MODEL (not real flesh and blood) of a neural network. Albeit, it’s a very crude model that is focused entirely on processing sound, unlike what is inside a real brain, because that would be much more complicated.

The training was done by the Oticon humans capturing millions of data samples, feeding them into the model, then comparing the simulated outputs to the real outputs. Discrepancies are propagated back into the neural model, and mathematical manipulations adjust the parameters within the neurons to find the values that minimize the discrepancies. The culmination of the training and learning (ALL DONE by the Oticon humans in the lab), after millions of data points, is a set of optimized parameters associated with the neurons in the neural model that yields simulated results close enough to the real results to be effective. These optimized parameters are captured and implemented inside the silicon that makes up the chip.
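The train-then-freeze cycle described above (compare simulated output to the target, propagate the discrepancy back, nudge the parameters, then capture the result) is ordinary gradient descent. A one-parameter toy version, nothing like Oticon’s actual pipeline or data:

```python
# Toy training loop: fit weight w so that the prediction w*x matches the
# targets. The 'discrepancy propagated back' is the gradient of the
# squared error with respect to w.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, target); true w is 2

w = 0.0                      # initial guess
lr = 0.05                    # learning rate
for _ in range(200):         # the lab 'training and learning' phase
    for x, target in data:
        error = w * x - target
        w -= lr * 2 * error * x   # step downhill to shrink the discrepancy

FROZEN_W = w                 # the captured parameter, 'baked into silicon'
print(round(FROZEN_W, 4))    # ≈ 2.0
```

Once `FROZEN_W` is captured, the loop never runs again on the device; only the cheap `w * x` prediction does, which is the distinction the post above is making.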

These arguments you make here are moot in the first place. They don’t apply, because Oticon never claimed that their hearing aids have a real biological neural network inside them. It’s SO OBVIOUS that it can’t be a real biological neural network.

PS. This will be the last of my arguments on this topic, because honestly I don’t even know why we needed to go down this rat hole in the first place. You can keep arguing with anyone else who will engage, but I’m not going to waste my time on this anymore.

4 Likes