Audio device lazy initialization

The topic of delaying initialization of the audio layer in Blender has come up again recently.

I have looked into managing device state by calling AUD_init() and AUD_exit() from the Blender side. This can lead to memory leaks and crashes if some data structures aren’t managed properly. The code would also be harder to maintain and understand without knowledge of Audaspace internals.
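
To illustrate the kind of problem I mean: playback handles reference the device, so every handle kept anywhere in Blender’s data structures would have to be freed before the device goes away. A minimal sketch of that failure mode, with hypothetical stand-in types rather than the real Audaspace/Blender API:

```cpp
// Hypothetical stand-in types, not the real Audaspace/Blender API.
#include <cstdio>

struct Device;

struct Handle {
    Device *device;  // playback handles keep a reference to their device
    void stop();
};

struct Device {
    bool open = true;
    Handle play() { return Handle{this}; }
};

void Handle::stop()
{
    // Dereferences the device, so this is only valid while the device lives.
    std::printf("stopping, device open=%d\n", (int)device->open);
}

int main()
{
    Device *device = new Device();   // stands in for AUD_init()
    Handle handle = device->play();  // handle kept in some Blender data structure
    delete device;                   // stands in for AUD_exit()
    handle.stop();                   // use-after-free: the handle outlived the device
    return 0;
}
```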

A better approach seems to be to implement this feature on the Audaspace side. I have tested this with WASAPI, OpenAL and Core Audio. So far, only the OpenAL implementation proved to be problematic.
In general, the approach is to initialize the device when playback starts and release the device when playback stops.
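
A rough sketch of what that could look like, assuming a generic backend wrapper; NativeDevice and the method names are placeholders, not actual Audaspace classes:

```cpp
// Placeholder types; a real backend would open WASAPI/Core Audio/etc. here.
#include <memory>
#include <mutex>

struct NativeDevice { /* the OS-level audio device */ };

static std::unique_ptr<NativeDevice> openNativeDevice()
{
    return std::unique_ptr<NativeDevice>(new NativeDevice());
}

class LazyDevice {
    std::unique_ptr<NativeDevice> native;
    int playing = 0;  // number of currently playing sounds
    std::mutex mutex;

public:
    void playbackStarted()
    {
        std::lock_guard<std::mutex> lock(mutex);
        if (playing++ == 0 && !native)
            native = openNativeDevice();  // open only when the first sound starts
    }

    void playbackStopped()
    {
        std::lock_guard<std::mutex> lock(mutex);
        if (--playing == 0)
            native.reset();  // release as soon as the last sound stops
    }
};

int main()
{
    LazyDevice device;
    device.playbackStarted();  // device is opened here, not at application start
    device.playbackStopped();  // device is released again
    return 0;
}
```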

There is a rather simplistic patch for Core Audio that implements this feature, with builds available. More technical details and todos are in the patch description.

Feedback is welcome!

Bit of an odd thread, you want feedback but supply no context, which makes giving feedback difficult.

The topic of delaying initialization of the audio layer in Blender has come up again recently.

This sounds like a means to an end; delaying initialization doesn’t seem like a goal on its own. It’s likely meant to solve some kind of problem, but it’s unclear from the information given what this problem is. You say it has come up again; can you add some context here? How did it come up? When was the last time this came up? Why didn’t we fix “the problem” back then?

I have looked into managing device state by calling AUD_init() and AUD_exit() from the Blender side. This can lead to memory leaks and crashes if some data structures aren’t managed properly. The code would also be harder to maintain and understand without knowledge of Audaspace internals.

I expect to be able to call any library’s init/exit functions at any time without having to worry about its internal state. If audaspace has issues in that department, perhaps it’s good to fix and upstream those, or report to @neXyon and have them fixed, as audaspace would be a better library for it.

A better approach seems to be to implement this feature on the Audaspace side.

Why is this better?

I have tested this with WASAPI, OpenAL and Core Audio. So far, only the OpenAL implementation proved to be problematic.

Why was it problematic?

In general, the approach is to initialize the device when playback starts and release the device when playback stops.

Certainly sounds like something you could do, but I still have no idea what the problem is you are solving, so I am unable to decide if this is a good solution or not.

There is a rather simplistic patch for Core Audio that implements this feature, with builds available. More technical details and todos are in the patch description.

Kinda strange: you tested with WASAPI, OpenAL and Core Audio, and OpenAL had issues, so it would be valuable to include it with a comment on where it is misbehaving, yet the patch only includes Core Audio.

Feedback is welcome!

My suggestion is to talk to @neXyon about whatever the issue is you are having, and by that I mean the actual, actual issue, and see what his thoughts are on the subject. Maybe the direction you are going is the right one, but from the information and patch given, it is impossible to tell.

Feels a bit like an XY problem.


My suggestion is to talk to @neXyon about whatever the issue is you are having, and by that I mean the actual, actual issue, and see what his thoughts are on the subject. Maybe the direction you are going is the right one, but from the information and patch given, it is impossible to tell.

We had a discussion on Blender chat. The reason for this is that Apple requested that the audio device not be opened directly upon launch of Blender as long as there is no playback. I didn’t get to see the original complaint from Apple though.

In my opinion it should suffice to change CoreAudio and I hope lazy initialization is enough to make Apple happy, but they might want us to also shut it down if nothing has been played back for a while?! We certainly don’t want to lazy initialize other backends, like Jack for example. There you would not be able to properly establish your audio routing setup without the device being initialized. So without any further issues being reported, I would not lazy initialize anything else.

I expect to be able to call any library’s init/exit functions at any time without having to worry about its internal state. If audaspace has issues in that department, perhaps it’s good to fix and upstream those, or report to @neXyon and have them fixed, as audaspace would be a better library for it.

Yep, there is no issue on the audaspace side here. The issue would be if (playback) handles on the Blender side are not freed properly.

For additional context, this has come up before in #78649 - Blender is constantly playing an audio stream of silence and preventing the computer from entering sleep - blender - Blender Projects

One user mentioned this somehow prevented PCs from entering sleep on Windows?

Another, at the very bottom of the same report, mentioned that KDE will show the “speaker” icon on the taskbar, since it thinks Blender is playing audio all the time (like what browser tabs sometimes do today).

Not sure if both of those still apply but this change would seem to improve the situation.

I should have mentioned that this was discussed at the VFX meeting. Now I have to admit that I have very little info about the actual problem, but what I gathered is that this is some kind of hardware utilization management thing. Originally we talked about the Core Audio backend, but I wanted to see if this could be generalized.

I think managing sound backends should be the responsibility of the audio library, but I have looked into the possibility of doing this from the Blender codebase.

As far as the issues with the OpenAL backend go, it failed to free its context with a generic error code, but I have not looked into more details so far.

I am not familiar with the current state of audio mixing on Linux; some 15 years ago there was pretty much no mixing, so this could help in that regard. I am also unsure about the case of using the ASIO backend, because lazy initialization introduces latency, and low latency is needed for some workflows.

I will seek more information, or perhaps the issues could be discussed here in more detail.

There are reports in our bug tracker which trace back to the early backend initialization. We have had reports similar to what Jesse linked for a long time.

It also affects developers: if they are not careful enough to disable the audio backend in the settings and start a debug session in Blender while listening to music, it causes all sorts of interference from the Blender side.

I do not care too much about the Jack backend. I am not even sure if anyone is seriously using it, as it has all sorts of issues in Blender. If it is not lazily initialized I am not fussed at all. All the rest of the audio backends should only be initialized when they are actually needed.

In concept, this change looks good. On macOS specifically, Blender will be a good citizen with its audio management, resulting in the OS being better able to manage power and sleep states, as well as Blender sitting side by side with other audio-outputting apps.

I’m less sold on the concept now that the problem scope is a bit more clear; the problem appears to be:

  • Blender starts up
  • Blender initializes audaspace
  • Audaspace goes, oh you want to do some audio? let me fire up the systems!
  • Blender proceeds not to use audio, but since the audio systems are up, the OS goes oh boy audio! these are high perf people! no power savings for you!

Doing lazy initialization in audaspace just kicks the can down the road; if someone plays a single sample, then goes to take a nap, we’d be right back to the original problem.

Poor audio session management appears to be a problem on the Blender side of things; I’m not convinced making audaspace responsible for it is the way to go here.

I’m not sure audio session management has to be done on the Blender side. Starting and stopping the audio device every time you do animation playback likely adds too much latency, and we can’t predict when a user will stop using audio.

So the logic might be to stop the audio device e.g. a minute after the last playback, which could be implemented on the Audaspace side too.
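
A sketch of how that might look, assuming some periodic tick (a timer, the main loop) drives the check; all names here are illustrative, not from any actual patch:

```cpp
// Illustrative sketch of an idle timeout; names are not from the real patch.
#include <chrono>
#include <memory>

using Clock = std::chrono::steady_clock;

struct NativeDevice { /* the OS-level audio device */ };

struct IdleClosingDevice {
    std::unique_ptr<NativeDevice> native;
    Clock::time_point lastUse = Clock::now();
    bool playing = false;

    void onPlaybackStart()
    {
        if (!native)
            native.reset(new NativeDevice());  // lazy open on first use
        playing = true;
    }

    void onPlaybackStop()
    {
        playing = false;
        lastUse = Clock::now();  // remember when playback last ended
    }

    // Called periodically; closes the device one minute after the last playback.
    void maybeCloseIdleDevice()
    {
        if (native && !playing && Clock::now() - lastUse > std::chrono::minutes(1))
            native.reset();
    }
};
```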

It’s software; no one is arguing about where it could be solved. I’m confident we could get audaspace to do what we want; my objection leans a bit heavier on the separation of concerns here. From a purist viewpoint, should audaspace be responsible? I’m not convinced, as the problem appears to be a Blender-created problem.

Do any of the OS-native audio solutions offer any functionality in this area? They do not: you open the device, you manage it. Other cross-platform solutions such as PortAudio? You open it, you manage it!

What we’re trying to do here is make a 3rd-party library make up for our strange behavior. I mean, I see why it would be an appealing option, since we don’t have to change our behavior; I just don’t think it’s … right.

That being said, I do realize this is not my call to make, so I shall now be stepping down from my soapbox. Carry on :slight_smile: