GSoC 2011 Wrap Up August 25, 2011

Posted by audaspace in General.

Hi!

So GSoC 2011 is officially over, time for a conclusion post. Maybe you noticed already, but everything I coded during GSoC 2011 will be in the Blender 2.6 release coming this October; the merge patch is currently under review, and the whole Pepper GSoC branch should be in trunk in about a week. Yay!

During the last week of GSoC I documented just about every audio feature Blender has, with some hopefully interesting comments, so in case you use any area of Blender audio, you should read the related documentation!

Also, I'd like to announce that I'll probably give a presentation about Blender audio at the Blender Conference this year!

Apart from that, I'd like to update the TODO list; some of its points have been addressed during GSoC:

  • Multichannel support
  • Positioning audio objects in 3D space (see the sketch right after this list).
  • Rendering 3D audio to the animation.
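
As a rough illustration of the 3D positioning point, here is a minimal sketch using the aud Python module from within Blender (a sketch only; the attribute names follow the 2.6-era API and might differ in detail between versions):

    import aud

    device = aud.device()                  # Blender's active audio device
    factory = aud.Factory("sound.wav")     # a mono file; 3D positioning only applies to mono sources
    handle = device.play(factory)

    handle.relative = False                # world-space position instead of relative to the listener
    handle.location = (3.0, 0.0, 0.0)      # place the source 3 units along the X axis
    handle.attenuation = 1.0               # distance-based volume falloff factor

    device.listener_location = (0.0, 0.0, 0.0)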

So from the old TODO list only nodal sound editing remains, but I'm questioning the sense behind such a feature; I'm not sure there are real use cases for it. What do you think?

Nevertheless, I have some new TODO list points collected during GSoC:

  • HRTFs (head-related transfer functions): This is something I would have liked to implement during GSoC as a bonus, but unfortunately there wasn't enough time at the end for it.
  • Reverberation: Now that Blender has the basics of 3D audio animation, it would be awesome to have some ray-tracing-like sound rendering. Such things already exist and have even been implemented using Blender, but unfortunately didn't make it into Blender itself. Anyway, this task is too huge for me and is better done by people working scientifically in this area.
  • Equalizer, Pitch Shifting, Crosstalk Cancellation and other "DAW features": These are things requested by users. However, as I've stated elsewhere before, I'd like to repeat it in this blog: I don't think Blender should turn into a DAW, at least not as long as we have far fewer audio developers than other developers (and that will most likely never change). So this "Eierlegende Wollmilchsau" (the proverbial German egg-laying wool-milk-sow, a device that does everything at once) sounds nice in theory, but isn't really practical. There are other (open source) applications providing these features, like Ardour; Blender audio should concentrate on Blender-related tasks.
  • Lip Sync: During GSoC I implemented a better waveform display in the sequencer that can already be used nicely for manual lip sync. However, there are better, partially automated workflows in other programs; such a feature is a GSoC project of its own and was the second project I applied with this year.

Those were the main points; here are the remaining smaller ones:

  • aud namespace: Currently the C++ library uses an "AUD_" prefix for everything; it would be better to have a proper C++ namespace. No influence on users though. 😉
  • Multitrack files: Some container formats can have multiple audio tracks inside (e.g. different languages); it would be nice if Blender let you select which track to use when you load such a file. This is going to come as soon as Peter Schlaile's FFmpeg proxies are also committed to trunk (so it might be in 2.6 already).
  • dB input: The sequencer used to have an attenuation field where you could set the volume in dB. I removed it during GSoC, because having two fields for a single value is a bit ridiculous, but with Blender's unit system it would be nice to be able to enter dB in any volume field. However, so far the unit system only supports linear conversions, so I have to wait for Campbell to implement callbacks before we can add the logarithmic dB unit (see the conversion sketch right after this list).
  • Streaming inputs: Microphones, internet radio streams, Python-sourced sound buffers and Jack inputs are on users' wishlists. However, there are some problems that would have to be solved first: an audio file can be opened multiple times asynchronously and you can seek in it at any time, neither of which is possible with such streams, and that causes real trouble. So if you'd like to tackle a design supporting these types of sound sources, see further down the post (and the buffering sketch after this list).
  • Bake FFT: This is also a user request, though from a while ago already. Someone told me he'd like a Bake Sound to F-Curve-like operator that bakes the FFT of the sound. Unfortunately, I no longer know who that was or how exactly he thought this should work. In case you know, contact me!
  • Buffered Factories: That's a concept I had in the OpenAL output device for some time: caching sounds directly in the output device (for users with audio hardware that supports it) for best performance. Its use in Blender is a bit limited though; it's only interesting for the game engine, and even there I'm pretty sure the performance gain for the very small group of users with such hardware is low. However, if I ever implement this again, I have a Device Buffers concept in mind to replace the original Buffered Factories.
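
Regarding the dB input point above, the conversion itself is trivial; it's only the unit system integration that's missing. A minimal sketch of the logarithmic mapping, using the standard 20 * log10 amplitude convention:

    import math

    def volume_to_db(volume):
        """Convert a linear volume factor to decibels (amplitude convention)."""
        return 20.0 * math.log10(volume)   # undefined for volume == 0 (negative infinity)

    def db_to_volume(db):
        """Convert decibels back to a linear volume factor."""
        return 10.0 ** (db / 20.0)

    print(volume_to_db(0.5))   # about -6.02 dB
    print(db_to_volume(-6.0))  # about 0.501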
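
And to make the streaming input problem more concrete: one conceivable direction (purely a hypothetical sketch, not anything that exists in Audaspace) is to record everything a non-seekable stream delivers into a growing buffer, so that re-reading and backwards seeking become possible for the part already received:

    class BufferingStreamSource:
        """Hypothetical sketch: wraps a non-seekable stream (microphone,
        internet radio, ...) and keeps everything read so far, so multiple
        readers can seek freely within the already-received data."""

        def __init__(self, stream):
            self._stream = stream        # non-seekable source with a read(size) method
            self._buffer = bytearray()   # everything received so far

        def read_at(self, offset, size):
            # pull data from the live stream until the requested range is covered
            while len(self._buffer) < offset + size:
                chunk = self._stream.read(4096)
                if not chunk:            # the stream ended
                    break
                self._buffer.extend(chunk)
            return bytes(self._buffer[offset:offset + size])

Each asynchronous consumer would keep its own read offset, which sidesteps the "can't open a stream twice" problem, at the cost of unbounded memory for long-running streams.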

I hope these TODO items showed you some possibilities for where Blender audio could head in the future, BUT I'd really like to have some help: I've been the only Blender audio developer for nearly three years now, and I'm not really the best man for the job (I didn't have any audio coding experience before I started working on Blender audio, and everything new I implement I have to learn first). So in case you'd like to help out, don't hesitate to contact me, whether here, on IRC or by mail.

So apart from smaller things, I fear I won't implement much of the above in the near future, as I now have loads of university work and other things to do again. I hope you're not too disappointed to hear that after this interesting TODO list.

Regards

Comments

1. Tobiasz Karoń - August 25, 2011

Sounds ( 😉 ) very nice.

I was thinking of some wavetracer for Blender (sound reverberation simulation for 3D environments). Also, some simplified tool (room-size approximation to change reverb parameters) for the BGE would be cool.

Nice to see I'm not the only one with such a strange idea.

Thanks, and keep up the good work!

2. mariomey - August 25, 2011

Wow! “¡Qué laburo!” (what a job!)

I am using the BGE to give life to a digital puppet (Pinokio 3D, or PK3D), and I only use Audaspace to play video-on-texture audio. Currently, I am using PureData to process almost all of the audio (connected via OSC, back and forth). It does:

– Voice detection: to make PK3D move its mouth (it's boolean, no phoneme detection).
– Theme and other music: with automatic fades.
– Samples.
– VU: 3D objects move, scale, rotate or whatever according to the music (to do: spectrum analyzer).
– Voice recording: together with event recording (to a text file) in Blender, I can record an entire act.
– Later playback: using the recorded voice and the saved events, it can replay the act. If I have to interact in the middle of the act, there are "Pause" and "Resume" buttons.

I am uploading WIP videos… and in the next one, I will talk about this. If you want, I can post the link…

mariomey - August 25, 2011

Here is the video… sorry, but it's in Spanish.

3. minchiavare - December 8, 2011

Did you see that Maya has had audio features since 2006? (http://www.mayasound.com/)
So that's the point of doing audio stuff with Blender: what about open movies, which could be made entirely with Blender and no additional software?
What about video game soundtrack and audio effects?

Will you forget us?

4. minchiavare - December 9, 2011

Glad to see you deleted my comment; maybe the truth hurts?

5. audaspace - December 15, 2011

I didn't delete it, I just hadn't approved it yet; I'm not checking for new comments every day…

No, I didn't see the Maya thing. Maybe you'd like to watch my talk at BConf 2011 regarding your questions. Or just read Blender's audio documentation before making assumptions.

