The Audio Engineering Society has put up the papers for the upcoming 140th AES Convention in Paris. You can download my paper here.
The paper was done in collaboration with the Conservatoire national supérieur de musique et de danse de Paris, which will also have some very interesting demos on a stand at the Convention.
It’s a study comparing perception of sound source stability between 1- and 5-source sound scenes for different levels of latency. I’ll be presenting it on the last day of the convention in the Immersive Audio section.
I’ll be at the Ambisonics Symposium (joint with the Auralization symposium) in Berlin this week (3rd-5th April). The program is here and it looks like it’ll be really interesting.
I’ll be presenting a paper titled “Off-Centre Localisation Performance of Ambisonics and HOA for Large and Small Loudspeaker Array Radii” on the 5th. I’ll post a link to the paper once it’s up on the TU Berlin website, which should hopefully be in the next couple of days.
If you happen to be going then please get in touch because I’d love to chat.
Today I was involved in a collaboration between a local theatre company (Tinderbox) and my department at SARC. There are a few Sonic Arts people involved and we’re working with writers to create four scenes/plays that will make use of the Sonic Lab we have here. The challenge is to avoid just doing a radio play (or acousmatic composition) and to find where theatre and sonic arts can meet.
Late last year I was playing around with some songs I’ve recorded and doing some quick 3rd order Ambisonic mixes. It got me thinking about what I wanted to use Ambisonics for and how best to present my songs using it.
[Just as a side note, I’m talking here about “pop” music mixes, not electro-acoustic music where use of spatial audio is much more widespread.]
For example, do I want to use the full 360 degrees (or the full sphere, if we were working in 3D) for the sounds, or is it better to stick with a frontal sound stage and just use surround for ambience, as is common in 5.1 music mixes?
I’ve updated the B-format encoder. The first change was to make the GUI a bit lighter so the text is easier to read. The second change was more serious: there was a memory leak that caused RAM usage to build up over time and, if the plugin ran for long enough, a crash. This is now fixed and it seems to be working stably.
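For anyone curious about what a B-format encoder actually computes, the classic first-order panning equations are simple enough to sketch in a few lines. This is a generic textbook sketch in Python, not the plugin’s actual code:

```python
import math

def encode_bformat(sample, azimuth, elevation):
    """Pan a mono sample to first-order B-format (W, X, Y, Z).

    Angles are in radians: azimuth measured counter-clockwise from
    straight ahead, elevation upwards from the horizontal plane.
    """
    w = sample / math.sqrt(2.0)                           # omni channel, -3 dB
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z

# A source straight ahead lands entirely in W and X:
# encode_bformat(1.0, 0.0, 0.0) -> (0.7071..., 1.0, 0.0, 0.0)
```

Running this per sample (with smoothed angle changes to avoid zipper noise) is essentially all a basic first-order encoder does; higher orders just add more spherical-harmonic channels.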
I’ll be spending this weekend updating the code for the decoding plugin. I doubt I’ll have the time to make a GUI but if I can get the code and a usable GUI ready then I’ll be sure to post them.
I spent the whole day looking for an error that caused intermittent compatibility problems with Sonar X1 Producer and, even though it’s now fixed, I’m still not really sure what was causing the problem… It seemed to be something about how the plugin was reporting its name to the host. The weird thing is that the error in Sonar was related to “receiveVstTimeInfo”, and I didn’t end up changing anything that dealt with the time information.
Anyway, live and learn… or, live and do without learning…
So now that I’ve done the alpha version of the encoder it’s probably time I tackled the decoder. There are already changes I want to make to the encoder, but I’m going to wait and see what other feedback comes in so I can make all the changes at once.
I’m not looking forward to the decoder because I’ll probably have to scrap my previous version and start again from scratch. It’s hard to know how to structure it, what features to include and what to leave out. I’ll probably spend a while thinking about it before I jump into it.
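For what it’s worth, the simplest textbook starting point for a first-order horizontal decoder is the “sampling” (projection) decode for a regular loudspeaker array. A rough Python sketch, purely illustrative and not the plugin’s code:

```python
import math

def decode_horizontal(w, x, y, speaker_azimuths):
    """Basic first-order sampling decode for a regular horizontal array.

    speaker_azimuths: speaker angles in radians, assumed evenly spaced
    around the listener. Returns one gain per speaker. This ignores
    max-rE / in-phase weighting, shelf filters and irregular layouts.
    """
    n = len(speaker_azimuths)
    return [
        (math.sqrt(2.0) * w + 2.0 * (x * math.cos(az) + y * math.sin(az))) / n
        for az in speaker_azimuths
    ]
```

For a square array and a source encoded straight ahead, the two front speakers get equal positive gains and all the gains sum to 1, which makes a handy sanity check while developing. The hard design decisions (dual-band decoding, irregular arrays, which psychoacoustic weightings to expose) are exactly the feature choices mentioned above.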
As always, suggestions are welcome!
I’ve finished the alpha version of my updated B-format encoder. I’ve already got a list of changes I want to make but I guess that’s the nature of an alpha version of any piece of software.
This VST is made freely available but I make no guarantees it’ll work with every system (though I hope it does!). If you have a problem with it then please contact me.