User Review of Whisper Hearing Aids

Some questions. Some of them may be dumb. I haven’t really ‘got’ machine learning at this point. I appreciate that you may not have used the device long enough to have an opinion on some of them.

Range?

Is there any learning of individual voices? For example, would you hear your significant other’s voice more clearly than others in a crowded room?

Is the effect of the brain instant, or does it build? Corollary to this: if you turn off the brain while you’re in a challenging environment, does the effect linger? That is, is the receiver being primed in any way, or is it instant-by-instant processing in the brain?

Is communication between the two devices reliable especially in spaces with lots of other devices communicating with each other? Would you know if it wasn’t?

Whether it constitutes a quantum leap or not, the fact that a company could start from scratch and, in only a few short years, produce a HA system that rivals or exceeds the performance of the best HAs from an established company is pretty impressive, in my opinion. I think it bodes well for the possibility of further significant improvements.

@billgem, a couple of questions: I assume your hearing a conversation 20 feet away at the soccer game happened while using the Whisper brain. Did you try turning off the brain to see what difference it might make?
Also, did your ability to hear a conversation 20 feet away that you weren’t necessarily trying to hear interfere with your ability to hear a speaker right in front of you whom you WERE trying to hear? Thanks for your comprehensive and extremely helpful posts.

It’s easy to see how. They poached the VP, the sales guy, and some engineers from Oticon to jockey and leapfrog into the position they’re in right now. And there’s nothing wrong with that; it’s actually a smart move. Even a necessary move, I would say, to survive. But it doesn’t imply that they have genius-level technological prowess in-house that can produce the rapid quantum leaps that would require a brain. At least not yet.

I’m not rooting for Whisper to fail, even though I’m expressing doubts about their need for a brain, from a technical point of view. Of course they need a differentiation to stand out and attract attention, or else it’d be hard to compete against the big 6. So that differentiation is their idea for a brain and their lease model. I think it’s a big gamble, and only time will tell if they’ll succeed.

Maybe one day, if they’re successful enough to lower their subscription price to $50/month and can run an app on my smartphone in place of a brain (using the app on the phone as the brain), that may be when I jump in and sign up. Even if that means I’d have to buy (and own outright) the ear pieces for a fixed low price ($1K/pair?) because they’re not part of the subscription lease, to keep it down to $50/month. It’d be even better if there were another option to pay per use event only, like only on days when I go out to a noisy restaurant and need the extra help from the brain. If I’m retired and at home most days, I wouldn’t need or want to pay for that monthly premium brain service anyway, especially if the ear pieces are already premium HA devices on their own without the brain, as advertised.

Wow! Pay per use? Interesting idea, but it’s really taking the discussion back into the underbrush, where it got hung up before, isn’t it?

IMO, discussing merits/demerits of the Whisper business model is secondary to the question “Does this product offer appreciably better performance than autonomous HAs?” … and “Is the performance sufficiently better to offset the disadvantages of being increasingly dependent on a manufacturer for the day-to-day functionality of my hearing devices?”

As @d_Wooluf pointed out in an earlier post: the consumer’s POV trumps everything else, for the purpose of appreciating what @billgem is attempting to share with us.

[Personally, I think the Whisper model has the potential to become even more oppressive than the current industry paradigm … not sure I’m interested enough in beating that horse again to start a thread.]

Bill, these links require that I disclose too much of my personal information to gain access. Can you please provide a less intrusive “one click link” to this information?

Here are direct links to the two white papers, Jim.

Just for you, Jim, I’m going to include an article which appeared in a Canadian journal, written by Don Schum:

Sorry if I don’t respond to much over the next couple of days, but I’ll be tied up with doctors’ appointments. When I return, there will be a test on the assigned readings. (I’m a retired teacher. :wink: )


@billgem: Thanks very much, Bill. I’m enjoying following your journey, and learning a lot in the process.

Special good luck at the doc’s! :blush::+1:t2:

At my fitting I was told that all Whisper customers, not just early adopters, will get new hardware as it comes out.


It’s very true, Jim, that many of us remember the perils of going down the many different rat holes in the original Whisper thread, and we want to avoid this thread heading in that same direction again. Now that we have at least 2 folks (@billgem and @x475aws, and hopefully more soon) actually trialing the Whisper system with first-hand, real-life experience, we’re definitely interested in hearing their opinions and what they learn, so that we can have a more relevant discussion based on the actual perceived performance of the device as the primary focus of the thread, instead of just guessing like before.

It seems like actual performance experience is one aspect that can finally be shared, has already been shared, and hopefully will be shared some more as they gain exposure to different scenarios. But there’s only so much about performance they can share at a time as they go, and beyond that there’s not much else to talk about, so they went on to share what they think about the other merits: the ability to update regularly, the continuous data collection, the clever idea of the brain to support rapid advances in technology, the prowess of Whisper as a startup company, etc. So I know that my responses to those opinions are not related to the performance topic per se, but nevertheless the doors were opened to those discussions, so I entered those doors, so to speak. As you mentioned, @d_Wooluf pointed out that the consumer’s POV trumps everything else. Well, these non-performance side discussions are also the consumers’ POV being expressed.

But I agree entirely with you that we want to make these things secondary to the discussion of the performance of the Whisper system in this thread. As soon as there’s more information to be shared about the performance experience, I’m sure that it’ll be back on track and the performance experience becomes the primary focus again.

Whisper is saying that when a big technology advancement is ready to be implemented, the big 6 currently must wait a few years to develop a new platform (to put on the hearing aid) that can support it. But with their brain, Whisper doesn’t need to wait a few years for new platform development to catch up, because the brain already provides a platform several generations ahead.

I’m saying that this is not the case. The big 6 are not being held up for lack of the right platform (to put on the hearing aid), forced to wait a few years to implement their big advancement, as Whisper implies. It takes time to develop a big technology advancement, and concurrent with that development, the big 6 don’t (and wouldn’t) wait: they are already working in parallel on a new platform (to be put on the hearing aid) that can support it. By the time the big advancement finishes development and is ready for implementation, the new platform is also ready to support and execute it on the hearing aid.

Rapid new platform development, and moving down to a smaller silicon geometry to support a big new advancement, has never been an issue in the HA industry, because the silicon geometry used by the HA industry is nowhere near as advanced as the leading-edge geometry the computer industry already requires.

@Volusiano: I agree wholeheartedly with what you’re saying in this latest post. I didn’t post to criticize you - I, and many others, enjoy reading your articulate analyses (whether they’re strictly topical or not).

I posted what I did in the open Forum because I thought it was becoming of us to let @d_Wooluf know that they had been heard by us, and that we (myself having been guilty of contributing to the significant drift of the last ill-fated Whisper thread) were not meaning to “descend down the same rat holes” we did, last time.

Like I said, I didn’t mean to level a criticism - least of all one pointed at you. I just thought it was becoming and collegial to hoist the flag and signal to the other member that their request had been duly noted.

I trust that I have not offended you.


Not at all, Jim. I’ve never been offended by you, and I doubt it’ll happen anytime soon. I must admit that while I gave my responses to those side points, I was fully aware that I didn’t want to go down the same rabbit holes as the previous thread. But nevertheless, because the door was open for those discussions and I really had strong opinions that I wanted to voice, I took a chance and said them, at the risk of sidetracking the thread.

Your post and @d_Wooluf’s posts were a welcome reminder for us all to keep things on track in this thread better than we did in the last one. So they’re appreciated and didn’t offend anyone, at least not me. I also had a post where I promised @d_Wooluf that I’d try to refrain from unsolicited opinions. Most of my comments so far were more or less solicited, although the bits about $50/month, a per-use-event option, and a brain inside a smartphone were indeed unsolicited opinions, I must confess.

Thanks for sharing this balanced review @billgem ! Appreciate all the thought that went into it. It seems it has generated an interesting discussion here. I for one have been very curious to hear from legit users of the Whisper product (there don’t seem to be many in our communities), so this is especially valuable to me. And thanks to everyone else for engaging in a lively discussion !


Brain discharge rate seems to vary greatly with the environment. I was traveling for my first two full days of Whisper usage, and I’m back home today. I would say battery drain today, in my fairly peaceful but not monastic residence, is half of what I saw the last two days.

My Whisper thread is now locked, with a request that Whisper discussion go here. Let me repeat here the “sound (not speech) in noise” experience I had with Whisper, because I think it’s important.

At the gym on an elliptical, with a little fan grille blowing air in my face. The machine makes a beeping sound to mark the end of an interval. When I wear my Quattros, the fan drone is attenuated, and the interval beeps are also attenuated and sound kind of distant. Whisper attenuates the fan also, I think, but when the interval beep sounds, the fan is attenuated further, and the beep is clearer. When the beep ends, the fan drone loudness is restored. So it’s treating the interval beep as a meaningful non-speech sound, as opposed to the fan drone which is noise.

It seems significant that Whisper is able to separate out sounds that aren’t speech. But I’m comparing Whisper to 2018-vintage hearing aids here. Does anyone have comparable experience with newer aids that they could share?


I did not, but I did switch to my Opn 1’s and could not hear that conversation.

It did not interfere with my ability to hear a nearby conversation. It’s that Oticon brain-hearing concept: I was able to choose what to pay attention to. But it raised the same question in my mind. While Whisper does an excellent job separating speech from non-speech noise, I wondered about speech in a din becoming noise competition for the speech you want to hear. So the Whisper system will not do what beamforming + suppression of background noise does.

In my example of hearing a conversation 20’ away in a banquet room, I was unable to hear a woman sitting 2 people away from me at our table. However, she is a classic “low talker” (see Seinfeld) and my wife, who can hear a pin drop 2 rooms away, had trouble hearing this woman, and my wife was sitting right next to her. I guess that some situations are just challenging for everyone, aided or unaided.


@x475aws: I did read this in your other thread. Can you think of a way that I could challenge my More1s with a similar situation? (What if I defrost something in the microwave and see whether they attenuate the magnetron when the “turn food over” beeper sounds?)

This is good news for me, as is DaBrain’s potentially-longer-than-16-hours charge capacity.

I’m sure Whisper will be able to increase this. [… I find this so challenging!]

  1. There is no learning of individual voices. As I understand it, the improvement of these hearing aids does not come from that kind of learning. The HAs have enhanced memory capabilities, and digital information about your listening experiences is saved. This information falls into 2 categories: sources (voices) and environments. The types of information saved include, for voices: pitch, volume or intensity, harmonics (timbre), patterns & repetition, and variance (stationary vs. non-stationary); and for environments: room size & shape, density (water vs. air), surface quality (smooth vs. spiky), materials (foam vs. metal), and interfering objects. At quarterly audiology appointments, this information is downloaded by the audiologist and stored so that it can be combined with all other user data, and the system can be taught to improve its processing of sound under these various circumstances. At the next quarterly appointment, these collective improvements are then delivered to the individual user in the form of software upgrades.

  2. The AI is used to learn speech patterns, which are unique compared with any other kind of noise, and then uses this recognition of speech sounds to separate speech from background noise and emphasize it over the other background noises. This method has greater potential than frequency-based programming, where non-speech noises that share the same frequencies as certain speech sounds would receive the same emphasis as those speech sounds. As the AI learns speech characteristics, it can also make predictions where approximations are the only option, and separate speech from noise on that basis.

  3. The only improvement of the brain is via the periodic software upgrades, similar to software upgrades on your smart phone or smart TV.

  4. The Brain stays connected to the hearing aids whenever they are in proximity. It can’t be turned off when they are in the same space. When the brain is on, it adds the power of 5 additional processors to the processors in the ear pieces. When the brain is not connected, there are no lingering effects.

  5. While I can’t say from personal experience, from what I’ve read communication between the brain and the ear pieces is reliable regardless of what other devices are operating in the same space. The engineers used cell phone technology to develop this system. They then developed a proprietary upgrade to eliminate any lag that exists in cell phone communication. So, ask yourself if your cell phone is reliable (as long as it can get a signal) in the presence of other competing devices. I have not experienced any problems with the connection or with any lag time.
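To make point 2 a bit more concrete: one of the saved voice features mentioned above is variance (stationary vs. non-stationary), and a steady fan drone differs from speech-like sound precisely in how its energy fluctuates over time. Here is a minimal toy sketch of that single idea - classifying a signal by frame-energy variance. It is purely illustrative, invented for this post; it is not Whisper’s actual algorithm, and all names and thresholds are made up.

```python
# Toy sketch: label a signal "stationary" (e.g. a fan drone) or
# "non-stationary" (speech-like) from the variance of its frame energies.
# Illustrative only -- not Whisper's algorithm; names/thresholds invented.
import math
import random

def frame_energies(signal, frame_len=160):
    """Mean squared amplitude of each non-overlapping frame."""
    return [
        sum(x * x for x in signal[i:i + frame_len]) / frame_len
        for i in range(0, len(signal) - frame_len + 1, frame_len)
    ]

def is_stationary(signal, threshold=0.5):
    """True if frame energy varies little relative to its mean
    (coefficient of variation below the threshold)."""
    energies = frame_energies(signal)
    mean_e = sum(energies) / len(energies)
    var_e = sum((e - mean_e) ** 2 for e in energies) / len(energies)
    return math.sqrt(var_e) / mean_e < threshold

random.seed(0)
rate = 8000
t = [i / rate for i in range(rate)]  # one second of samples

# Fan drone: steady broadband noise -> nearly constant frame energy.
fan = [random.gauss(0.0, 0.3) for _ in t]

# Speech-like: the same noise, amplitude-modulated at a syllabic ~4 Hz,
# so its frame energy swings strongly between frames.
speechy = [x * (0.1 + abs(math.sin(2 * math.pi * 4 * ti)))
           for x, ti in zip(fan, t)]

print(is_stationary(fan))      # True  -> candidate to suppress as noise
print(is_stationary(speechy))  # False -> candidate to preserve
```

This also loosely mirrors the gym anecdote earlier in the thread: a constant fan drone has low energy variance and reads as noise, while an interval beep or a voice changes the statistics and reads as meaningful sound. A real system would of course combine many features (pitch, harmonics, patterns), not just this one.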

I apologize in advance if I’ve made any errors in these explanations, but you can get a fuller explanation of these technical questions in the 2 white papers which I’ve linked in this thread.


Just reiterating that in the Webinar I saw, the question of who is eligible for new versions of hardware was addressed. Whoever addressed the question (Schum?) was quite specific that only Brains Trust participants were eligible.