Rechargeable Aids Discharging

Yes, of course, strictly speaking, they do not work with radio but with NFMI. I have oversimplified this. Nevertheless, the increased power consumption when beamforming is activated is understandable to me. To generate a magnetic field, current must flow.
Continuous transmission of audio signals between the two devices is not comparable to transmitting some data from time to time to synchronize the devices.

I would contend that NFMI is already in use all the time for spatial processing. It doesn't just sit idle, waiting to be activated exclusively for bilateral beamforming in speech-in-noise situations. And since NFMI has to run continuously for spatial processing anyway, I wouldn't be surprised if the same information sent across the link is shared and used for both spatial processing and bilateral beamforming at the same time. I highly doubt that the devices sit idle and unsynchronized most of the time, only getting synchronized occasionally when bilateral beamforming is needed.

Then how do we explain the high power consumption in speech-in-loud-noise?
It’s really noticeable!


It’s due to what I mentioned in post #15 above (partially repeated below in bold) → the analog amplifiers driving the receivers have to draw more power to amplify the much louder incoming sounds in noisy environments.

Ok I see your point.
I’ll do a test tomorrow: I’ll switch manually to a beamforming program and keep it on for the day (if I can stand it :slight_smile: )
I expect a quiet office day as usual :slight_smile:
We will see if there’s a significant difference.

Unless Phonak weighs in we’ll probably never know for sure, but as an engineer who has worked on both low-power processors and programmed DSP algorithms, it seems entirely plausible to me that there could be significant variations in power consumption due to the noise environment, as a result of different DSP stages being activated or having to work harder, and this could noticeably affect battery life.

I don’t think there are literally multiple DSP chips that get turned on and off, but I do think many of the features that Phonak lists for their hearing aids correspond to code that is enabled or disabled as needed. Running more code means using more power.

Furthermore, digital circuits (such as CPUs and DSPs) can run faster if you raise their voltage, at the cost of using much more power (power increases in proportion to voltage squared). So low-power designs often adjust the voltage to the lowest level at which they are still fast enough to do the processing required. If you’re in an environment that requires more DSP (e.g. speech in noise is a classic “hard” signal processing problem), it’s entirely possible the hearing aid raises the voltage to the DSP a little to let it run faster, thus allowing it to do more complex processing. The cost will be higher power consumption and thus less battery life.
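To make the voltage/power tradeoff concrete, here is a rough back-of-envelope sketch. Dynamic power in CMOS logic scales roughly as capacitance × voltage² × frequency; the specific voltages and clock rates below are invented for illustration and are not actual figures from any hearing aid.

```python
# Illustrative model of dynamic voltage/frequency scaling (DVFS).
# Dynamic power of CMOS logic scales roughly as P ~ C * V^2 * f.
# All numbers below are made up for illustration only.

def dynamic_power(capacitance, voltage, frequency):
    """Approximate dynamic power of a digital circuit (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

C = 1.0  # effective switched capacitance (arbitrary units)

# Quiet environment: low clock, low voltage.
p_quiet = dynamic_power(C, voltage=0.8, frequency=50e6)

# Speech-in-noise: raise voltage ~25% to support a 50% faster clock.
p_noisy = dynamic_power(C, voltage=1.0, frequency=75e6)

print(f"Power ratio (noisy/quiet): {p_noisy / p_quiet:.2f}")
```

So in this toy model, a 50% clock boost that requires a 25% voltage bump costs well over twice the power, which is why low-power designs keep the voltage as low as the workload allows.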

Battery life is a huge issue for rechargeable hearing aids, so the engineers were motivated to use every trick at their disposal to maximize it.


Thank you for sharing your professional opinion on this. Your insight is much appreciated. I definitely agree that if the voltage is raised to facilitate speeding up the clock to run more complicated processing then more power would be required. But I think that the universal assumption that noise suppression in hearing aids automatically and always involves more complex digital signal processing is debatable and not necessarily true in all cases for all hearing aid brands/models.

Most hearing aids employ beamforming toward the front to suppress surrounding noise and pick up front speech, which is a nearly universal and effective way to improve the signal-to-noise ratio for the front, used by a majority of hearing aid brands. I would contend that this kind of beamforming is not a DSP-laden functionality. It works more or less by using the physics of aligning the polar plots of the 2 mics on each hearing aid to make unilateral front beamforming happen, then exchanging information with the other hearing aid to improve it into bilateral front beamforming. So it is more physics-focused than DSP-focused, and therefore not really DSP-laden in the first place.
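As a toy illustration of the "physics of polar plots" idea, here is a minimal two-mic differential (endfire) beamformer in plain Python. It is not any brand's actual algorithm; the mic spacing and sample rate are chosen so the inter-mic travel time is exactly one sample (real aids would use fractional-delay filters), and it simply subtracts a delayed rear-mic signal to place a null toward the rear.

```python
import math

# Toy differential (endfire) beamformer with 2 mics, pure Python.
# Spacing and sample rate are chosen so the inter-mic travel time is
# exactly one sample; real devices use fractional-delay filters.

FS = 34300                 # sample rate (Hz), picked so the delay = 1 sample
D = 0.01                   # mic spacing (m), typical hearing-aid scale
C = 343.0                  # speed of sound (m/s)
DELAY = round(D / C * FS)  # inter-mic delay in samples (= 1 here)

def tone(freq, n, offset=0):
    """Sampled sine wave, optionally delayed by `offset` samples."""
    return [math.sin(2 * math.pi * freq * (i - offset) / FS) for i in range(n)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def beamform(front, rear):
    """out[i] = front[i] - rear[i - DELAY]: creates a rear-facing null."""
    return [front[i] - rear[i - DELAY] for i in range(DELAY, len(front))]

N, FREQ = 2048, 1000.0

# Source from the FRONT: hits the front mic first (rear mic is delayed).
front_src = beamform(tone(FREQ, N), tone(FREQ, N, offset=DELAY))

# Source from the REAR: hits the rear mic first (front mic is delayed).
rear_src = beamform(tone(FREQ, N, offset=DELAY), tone(FREQ, N))

print("front source level:", round(rms(front_src), 4))
print("rear source level: ", round(rms(rear_src), 4))
# The rear source cancels almost completely; the front source passes.
```

Even this crude sketch shows the directional effect falls out of simple delay-and-subtract geometry, though the question of how much *additional* DSP the steering, adaptation, and binaural exchange cost is exactly what's being debated here.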

With Oticon, on the other hand, starting with their OPN then OPN S models, they don’t rely on the traditional front beamforming approach to suppress the surrounding noise like other brands do, because they subscribe to the open paradigm. Instead, they use DSP to remove the diffuse noise that surrounds the speech, regardless of whether the speech is in the front, the back, or to the side. Don’t get me wrong, they do use a special kind of beamforming (called MVDR) to suppress noise signals coming from any direction, but it’s just not the (blind) frontal beamforming that other brands use. The first screenshot below shows where their Noise Removal block sits in the signal processing path. But it is simply a “module” inside of the OpenSound Navigator (OSN), not a separate DSP all by itself. The OSN operates CONSTANTLY at 500 times per second to scan, analyze, balance, and remove noise, so it’s not as if the OSN can be turned on and off at will, or have its voltage lowered and raised depending on the environmental sound situation.

That is why many brands of aids (like Phonak) detect the changing soundscape using something like the AutoSense feature to determine what kind of environment it is, then switch to a distinct and separate Speech in Noise program that triggers front beamforming using the mics. But for the Oticon OPN/S, the transition from simpler to more complex environments is not a distinct switch like with Phonak; it is continuous and smooth (because the OSN runs its scanning, analyzing, and noise reduction at 500 times per second CONSTANTLY).

When Oticon moved to its More and Real, it switched to a Deep Neural Network approach where the noise reduction is an integral part of the DNN (see the second screenshot below), so it’s not a separate stand-alone DSP that can run at a different voltage and clock speed either. It doesn’t suppress noise the old way with front beamforming, nor the way the OpenSound Navigator does. The Noise Suppression is done simply by tweaking the neural network parameters to change how the sound scene is balanced, with more favor given to the speech components and less to the surrounding noise. It is a form of filtering, but it’s not a stand-alone independent filter on a separate DSP that can have its voltage or clock independently adjusted. At that point, it’s just changing the mathematical coefficients inside the deep neural network’s set of equations to deliver the desired result.

Sorry for the long-winded examples of how the Oticon aids do their noise reduction here. But the point is this: traditional front beamforming, as most HA brands have used it, is not really DSP-intensive, relying instead on the physics of manipulating the microphones’ polar plots. Newer noise-reduction approaches like Oticon’s, by contrast, are an integral part of a bigger DSP, and that type of noise reduction is not something that can be turned on and off, or given discrete voltage or clock changes, the way a stand-alone sub-DSP could be. It operates as a whole within the bigger DSP ecosystem.



I don’t really want to argue the details of signal processing and digital electronics, since this isn’t the right forum or audience, but I think you’re dramatically underestimating the DSP work involved in beamforming and in the other audio processing done in modern hearing aids. Granted, my direct experience is with radar, not audio, but the concepts are very similar even if the frequencies are different.

An overview of the math involved in beamforming is here: Beamforming Math | Math Encounters Blog

And an overview of some of the dynamic DSP going on in Phonak hearing aids is discussed (at a fairly high level) here: PH_Insight_SmartSpeech

Note that their beamforming is not locked to the front, and instead is steerable (through signal processing) to focus on sounds from other directions when appropriate.

I’ll also point out that while some DSP algorithms are data-independent, which is to say they do the same amount of work regardless of the input (as you describe above), there are also many algorithms in use that do more or less work depending on the input (the sound environment in this case).
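To illustrate the data-dependent case, here is a contrived sketch of a processing loop where an expensive cleanup pass only runs on frames a detector flags as noisy, so the amount of work (and hence power) tracks the sound environment. The function names and threshold are invented for illustration.

```python
# Toy illustration of a data-dependent DSP workload: an expensive
# cleanup step runs only on frames flagged as noisy, so the amount
# of work done (and power used) depends on the input audio.
# All names and thresholds are invented for illustration.

def frame_energy(frame):
    """Mean squared sample value of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def process(frames, noise_threshold=0.5):
    """Return (output_frames, expensive_ops) for a stream of frames."""
    expensive_ops = 0
    out = []
    for frame in frames:
        if frame_energy(frame) > noise_threshold:
            # Stand-in for a costly noise-suppression pass.
            frame = [s * 0.5 for s in frame]
            expensive_ops += 1
        out.append(frame)
    return out, expensive_ops

quiet_day = [[0.1] * 8 for _ in range(100)]   # frame energy 0.01
noisy_pub = [[1.0] * 8 for _ in range(100)]   # frame energy 1.0

_, ops_quiet = process(quiet_day)
_, ops_noisy = process(noisy_pub)
print(ops_quiet, "vs", ops_noisy, "expensive passes")
```

Same code, same clock, but the noisy input triggers a hundred times more of the expensive path, which is the kind of mechanism that could show up as faster battery drain on a loud day.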


In my experience, different days can cause different terminal battery states. I have a motorhome. Like many motorhomes, mine makes many creaking and other sounds while driving. On travel days, my HAs will frequently drain their batteries to zero before bedtime. This never happens on non-travel days. So there is definitely a use pattern that uses more battery.

My solution is that I really don’t want to hear all those noises anyway, so I save battery life by turning off the HAs while driving. You cannot really do that at a party, so battery life will be shorter than normal.

I agree that this is not the right forum or audience to take this discussion any further. Furthermore, I will concede and defer to some of the newer things you are pointing out here: that beamforming takes up more DSP processing power than I thought, and that Phonak doesn’t just do frontal beamforming. I’m sure you have more DSP design experience and are much more savvy on Phonak technologies than I am. As an Oticon user, I’m mainly interested in learning the Oticon details rather than the Phonak technical details.

Having said that, while you’re probably right that Phonak may burn more power in its DSP due to the beamforming in its Speech in Noise program, my personal (unprofessional) opinion is that this generalization most likely would not apply to Oticon aids from the OPN onward to the More and Real. It’s pretty obvious from the OPN whitepapers that the open paradigm, with its continuous scanning, analyzing, and noise reduction of the environment, is not a discrete approach where the DSP for noise reduction can be turned on and off (or have its voltage adjusted) for beamforming processing, the way Phonak’s AutoSense detects speech in noise and automatically switches to a discrete speech-in-noise program that enables beamforming only in that mode.

i have resound quattro’s. live in a rural location, ears in at approx 8am, out around 11.30pm. average maybe 4 hours streaming tv, another 2 via phone, and usually have about 3 lights on charger (say 30%). however, out to the pub / dinner / whatever, with a lot of noise and talking, always down to 2 lights, sometimes 1.
i think the harder they have to work, the more power they use :wink:
rather obvious i would think :wink:

We’ve all seen battery drain varying with conditions. But in this thread we’re comparing 60% charge remaining to 20% charge remaining. That’s 40% charge used vs. 80% charge used, assuming fully charged to start with. That’s a big difference.


@seabeast

Two years ago I got my Phonak Audeo Paradise P90Rs with rechargeable batteries. My dispensing audiologist gave me a walk-through and showed me how to use the conversation-in-noise program he had saved. At a noisy lunch, I used it. I forgot to turn it off. Battery use was huge: about 3 hours later I had only about 30% battery left.

As an engineer I’ve specified and maintained huge generators and battery stations. That’s the other end of the spectrum. I appreciate your experience and post here.

Using these hearing aids had been a horrible experience. Fixed now; I found a practitioner to set them up again, properly this time. They’re good now.

Setup is key for me. My Phonaks were working in Stupid Mode for 2 years.

DaveL
Toronto


@Volusiano , @seabeast
I agree, we should stop this discussion here. It is speculation anyway. But it would certainly be very interesting to discuss this issue with one of the developers with more insight.
Thanks for all the information and opinions.

However, I don’t want to withhold the result of my test:
After 8 hours of manually activated Speech in Loud Noise, my left hearing aid reported only a 15% charge. My right one could no longer be connected to the app, although it still worked perfectly. Surprisingly, the right one was always about 10-15% lower than the left one during today’s intermediate checks; my left one is the Bluetooth master and is usually the lower of the two.
Normally it takes about 16 hours in AutoSense to discharge to 20%.
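Putting those two numbers side by side as drain rates (assuming both days started from a full 100% charge) makes the difference stark:

```python
# Drain-rate comparison of the two days reported above (left aid),
# assuming a full 100% charge at the start of each day.

hours_beam, remaining_beam = 8, 15    # forced Speech in Loud Noise test
hours_auto, remaining_auto = 16, 20   # typical AutoSense day

rate_beam = (100 - remaining_beam) / hours_beam   # %/hour
rate_auto = (100 - remaining_auto) / hours_auto   # %/hour

print(f"beamforming day: {rate_beam:.1f} %/h")
print(f"autosense day:   {rate_auto:.1f} %/h")
print(f"ratio: {rate_beam / rate_auto:.2f}x")
```

So the forced beamforming day burned charge at roughly double the rate of a normal AutoSense day.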

I still think that the heavy traffic via NFMI for beamforming is the main reason for the high power consumption.
But anyway and however…

Couple other battery-related points. My info is Phonak-centric, but other brands are likely similar.

Bluetooth uses extra battery. There was an issue a couple years ago where some Android phones and Phonak hearing aids didn’t play well together, causing the HAs to use excessive battery power when connected to the phone. That has since been fixed, but it’s something to keep in mind if you ever have unexpectedly high battery usage and you use Bluetooth: try disabling Bluetooth and see if that helps. 5-10% more battery usage in the Bluetooth master hearing aid is apparently normal, but a larger difference could indicate a problem.

Phonak expects their rechargeable HAs to last “all day” under “normal” usage on a single charge, for a reasonable lifetime of the HAs. They have a whitepaper where they define some of these terms, but as I recall they define normal usage as something like 16 hours with up to 4 hours of Bluetooth streaming, and the lifetime is about 6 years (batteries lose capacity over time, but they expect them to still last all day for at least this many years). I don’t think these are guaranteed, exactly, but if you’re getting much worse battery life then I’d expect most manufacturers would consider this a problem and want to fix it.

A quick look at search results for “nfmi power consumption” reveals that it’s lower than even low-energy Bluetooth.


Hi @sterei, I do appreciate your sharing the result of your test, even though we all agree that we don’t need to go further with this. It’s worthwhile to know that with Phonak aids, Speech in Loud Noise does increase power consumption by a lot, even when forced on in a quiet environment. So it’s not just the analog amplifier that may be the hog; the noise suppression can be as well.


Just to clarify what I understand you’re trying to say here:

  1. If you’re streaming then BT connection is inevitable and so is the extra power drain.

  2. If you’re NOT streaming but still have the BT connection established, it shouldn’t cause extra power drain (since you’re not actually streaming anything), so it’s OK to stay connected.

  3. But if you’re not streaming yet are still connected to BT with Phonak, and you notice unexplained power drain, then it’s most likely due to the BT connection even though you’re not streaming at all. In that case it’s best to turn off the BT connection.

You’re correct on #1 and #2 (Bluetooth is expected to use some extra power while you’re streaming, but when not actively streaming it should barely use any power even when connected). But my point on #3 was just that Bluetooth is a possible source of extra power drain, on Phonak or any other device with Bluetooth, so a useful experiment when your battery life is worse than expected is to try disabling Bluetooth (on your phone) and see if that makes a difference.

-Brett


LOL… You are funny and I needed a laugh.