GSoC 2011 Wrap Up
August 25, 2011. Posted by audaspace in General.
So GSoC 2011 is officially over, time for a conclusion post. Maybe you noticed already, but everything I coded during GSoC 2011 will be in the Blender 2.6 release coming this October; the merge patch is currently under review, and the whole pepper GSoC branch should be in trunk in about a week. Yay!
During the last week of GSoC I documented just about every audio feature blender has, with some hopefully interesting comments, so in case you use any area of blender audio, you'd better read the related documentation!
Also I’d like to announce that I’ll probably do a presentation about blender audio at the Blender Conference this year!
Apart from that I’d like to update the TODO list, some of the points in there have been addressed during GSoC:
- Multichannel support.
- Position audio objects in 3D space.
- Render 3D audio to the animation.
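As a side note on how the 3D part works: the volume of a positioned sound source is derived from its distance to the listener, following the usual OpenAL-style distance model, where a source is at full volume at a reference distance and falls off beyond it. A minimal sketch of the inverse-distance-clamped formula (parameter names and defaults here are illustrative, not blender's actual code):

```python
def attenuate(distance, reference=1.0, rolloff=1.0, maximum=100.0):
    # Inverse-distance-clamped attenuation: full volume at the reference
    # distance, falling off with the rolloff factor, and no further
    # change beyond the maximum distance.
    d = max(reference, min(distance, maximum))
    return reference / (reference + rolloff * (d - reference))
```

Doubling the distance with the default rolloff halves the volume, which matches the intuition most people have about sound falloff.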
So from the old TODO list only the nodal sound editing remains, but I'm questioning the sense behind such a feature; I'm not sure there are real use cases for it. What do you think?
Nevertheless, I have some new TODO list points collected during GSoC:
- HRTFs: This is something I would have liked to implement during GSoC as a bonus, but unfortunately there wasn't enough time at the end for it.
- Reverberation: Now that blender has the basics of 3D audio animation, it would be awesome to have some ray-tracing-like sound rendering. Such things already exist and have even been implemented using blender, but unfortunately didn't make it into blender itself. Anyway, this task is too huge for me and would better be done by people working scientifically in this area.
- Equalizer, Pitch Shifting, Cross Talk Cancellation and other "DAW features": These have been requested by users. However, as I've stated elsewhere before, I'd like to repeat it here on this blog: I don't think blender should turn into a DAW, at least not as long as we have far fewer audio developers than other developers (and that will most likely never change). So this "Eierlegende Wollmilchsau" (the German idiom for an all-in-one device that does everything) sounds nice in theory, but isn't really practical; there are other (open source) applications providing these features, like Ardour. Blender audio should concentrate on blender-related tasks.
- Lip Sync: During GSoC I implemented a better waveform display in the sequencer that can already be used nicely for manual lip sync. However, there are better, partially automated workflows like other programs have; that feature is a GSoC project on its own and was the second project I applied with this year.
Those were the main points; here are the remaining smaller ones:
- aud namespace: Currently the C++ library uses an "AUD_" prefix for everything; it would be better to have a proper namespace. No influence on the user though.
- Multitrack files: Some container formats can hold multiple audio tracks (e.g. different languages); it would be nice if blender let you select which track to use when you load the file. This will come as soon as Peter Schlaile's ffmpeg proxies are also committed to trunk (so it might be in 2.6 already).
- dB input: The sequencer used to have an attenuation field where you could set the volume in dB. I removed it during GSoC, because two fields for a single value is a bit ridiculous, but with the unit system blender has it would be nice to be able to enter dB in any volume field. However, so far the unit system only supports linear conversions, so I have to wait for Campbell to implement callbacks so that we can add the logarithmic dB unit.
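The conversion itself is simple; the unit system just can't express it yet because it isn't linear. In Python terms, using the amplitude convention (since a volume field scales amplitude):

```python
import math

def db_to_linear(db):
    # Amplitude convention: +20 dB multiplies the volume factor by 10,
    # and +6 dB roughly doubles it.
    return 10.0 ** (db / 20.0)

def linear_to_db(volume):
    # Inverse conversion; volume must be > 0 (log of zero is undefined).
    return 20.0 * math.log10(volume)
```

So 0 dB maps to a volume factor of 1.0, and -6 dB to roughly 0.5.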
- Streaming inputs: Microphones, internet radio streams, Python-sourced sound buffers and Jack inputs are on users' wish lists. However, there are some problems with those that would have to be solved first: an audio file can be opened multiple times asynchronously, and you can seek in it at any time. That's not possible with such streams and causes real trouble. So if you would like to tackle a design to support these types of sound sources, see further down the post.
- Bake FFT: This is also a user request, though from a while ago already. Someone told me he'd like to have a Bake Sound to F-Curve-like operator that bakes the FFT of the sound. Unfortunately I don't remember who that was or how exactly he thought this should work. In case you know, contact me!
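To make the idea a bit more concrete, here's a rough sketch of what such a bake might compute: one value per animation frame, namely the summed FFT magnitude inside a frequency band (the function name, the band choice and numpy use are all mine, not an actual design):

```python
import numpy as np

def bake_fft_band(samples, rate, fps, band=(200.0, 2000.0)):
    # For each animation frame, take the chunk of audio samples that
    # falls into that frame, compute its FFT and sum the magnitudes of
    # the bins inside the requested frequency band. The resulting list
    # could then be keyed onto an F-Curve, one value per frame.
    chunk = int(rate / fps)
    values = []
    for start in range(0, len(samples) - chunk + 1, chunk):
        spectrum = np.abs(np.fft.rfft(samples[start:start + chunk]))
        freqs = np.fft.rfftfreq(chunk, 1.0 / rate)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        values.append(float(spectrum[mask].sum()))
    return values
```

Whether the operator should expose the band, a window function or normalization options is exactly the kind of detail I'd need input on.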
- Buffered Factories: That's a concept I had in the OpenAL output device for some time: caching sounds directly in the output device (for users with audio hardware that supports it) for best performance. The use of this in blender is a bit limited though; it's only interesting for the game engine, and even there I'm pretty sure the performance increase for the very small group of users with such hardware is low. However, if I ever implement this again, I have a Device Buffers concept in mind to replace the original Buffered Factories.
I hope these TODO items showed you some possibilities where blender audio could head in the future, BUT I'd really like to have some help, as I've been the only blender audio developer for nearly 3 years now and I'm not really the best man for it (I didn't have any audio coding experience before I started to work on blender audio, and everything new I implement I have to learn first). So in case you'd like to help out, don't hesitate to contact me, no matter whether it's here, on IRC or by mail.
So apart from smaller things, I fear I won't implement much of the above in the near future, as I have to do loads of university work again now and work on other things. I hope you're not too disappointed to hear that after the interesting TODO list.
3D Audio GSoC Demo Video 2: Speaker objects
August 9, 2011. Posted by audaspace in General.
Speaker objects are working and I’ve created a demo video!
3D Audio GSoC Demo Video
July 29, 2011. Posted by audaspace in General, Sequencer.
I’ve recorded a video about the progress so far. It’s not too much yet from a user point of view, but a lot of internals changed.
Last but not least sorry for my English.
GSoC 2011: 3D Audio
July 7, 2011. Posted by audaspace in General.
I'm sorry for ignoring the blog for so long, but university kept me very busy until now and I wanted to put all my available time into coding.
So I've been accepted for GSoC 2011 and I'm working on 3D Audio, so you can hopefully expect 3D sound objects in blender within two months.
Here you can read the original proposal: http://wiki.blender.org/index.php/User:NeXyon/GSoC2011/Proposals#3D_Audio_.5Baccepted.21.5D
The first milestone, "General improvements", has already been reached, except for some required animation system updates where I'm waiting for help from Joshua Leung.
You can read about the progress on my wiki page here: http://wiki.blender.org/index.php/User:NeXyon/GSoC2011/3D_Audio
If you want to help me, what I really need is testers! Any build of the pepper or salad branch at revision 38200 or later contains all the changes so far. If you experience any audio problems, just tell me somehow (IRC, blenderartists, bug tracker, here).
Audaspace Python API
September 9, 2010. Posted by audaspace in Game Engine, General.
I’m happy to announce that Audaspace has a Python API now thanks to Google Summer of Code!
- Python API for audaspace for direct access.
People using the Game Engine don't have to use PyGame anymore, as the Audaspace Python API is somewhat superior.
For further details please see http://wiki.blender.org/index.php/User:NeXyon/GSoC2010/Audaspace
You can find the (still messy) documentation here: http://www.blender.org/documentation/250PythonDoc/aud.html
I hope there'll be a better formatted version soon.
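In case you want a quick taste of the API before reading the docs: a minimal sketch of playing a file with the aud module. The module only exists inside blender, so the guard below just makes the snippet safe to run elsewhere; the file path is purely an example.

```python
try:
    import aud  # blender's built-in Audaspace Python API
except ImportError:
    aud = None  # running outside blender

def play_file(path):
    # Load a sound file into a Factory and play it on the default
    # output device; returns the playback handle, or None when the
    # aud module isn't available.
    if aud is None:
        return None
    device = aud.device()
    factory = aud.Factory(path)
    return device.play(factory)
```

The returned handle lets you pause or stop the sound while it is playing.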
Apart from that, audaspace got a general overhaul during GSoC, which also fixed some bugs that hadn't been found yet.
Jack Transport
February 21, 2010. Posted by audaspace in Jack.
Yay! It's done! Blender 2.5 now supports the often requested Jack transport feature!
What's that? It's an awesome feature that lets you synchronize audio applications via the pro-audio server Jack. That means you can edit your animations in blender while having the audio in one of the pro-audio tools that work with Jack, like Ardour, and when you play back, they will be perfectly synchronized.
How to use it:
Make sure you have set up Jack properly and have the server running. Then, in blender, set the output device to Jack under the System tab of the User Preferences (Ctrl + Alt + U). Last but not least, set the sync mode dropdown in the timeline window to AV-sync.
I wanted to make a short demonstration video, but unfortunately the recording FPS of recordmydesktop is much too low on my PC.
AV-sync
February 18, 2010. Posted by audaspace in Jack, Sequencer.
Got yet another one:
- Recode of the Sequencer Audio System for better Audio-Video synchronisation.
At first I had quite a huge change to the blender timing code in mind: http://wiki.blender.org/index.php/User:NeXyon/TimerAPI . That would have supported jack transport perfectly, but after talking with Ton I realized that it's not such a good way to do it.
So the final implementation was pretty quick and easy, thanks to the hard design work I did before starting to code. I'm quite happy that it pays off now.
The disadvantage is that jack transport is not as easy to implement now. But I think I might still be able to do it, so jack fans: stay tuned.
Summing up, this is an important date for blender 2.5 audio, as all audio features that were supported in 2.49b are back now! Time to celebrate!
Realtime for Jack
February 11, 2010. Posted by audaspace in Jack.
Yeah, with that done, the jack backend of blender is becoming useful. Unfortunately the ring buffer used adds a little latency, so you actually can't call it "realtime", but at least you don't get any xruns anymore, as the jack process callback is now non-blocking and pretty fast. Moreover, the added latency won't matter once jack transport is implemented, as I'll be able to remove it when transport is enabled.
Time to strike through the next point on the todo list:
- Realtime support.
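For the curious, the non-blocking idea above boils down to a single-producer/single-consumer ring buffer: the mixing thread writes samples in, the jack process callback reads them out, and neither side ever waits for the other. A toy Python sketch of the concept (blender's actual implementation is C++, this is just the idea):

```python
class RingBuffer:
    # Single-producer/single-consumer ring buffer. One slot is kept
    # free so that read_pos == write_pos unambiguously means "empty".
    def __init__(self, size):
        self.data = [0.0] * size
        self.size = size
        self.read_pos = 0
        self.write_pos = 0

    def write(self, samples):
        # Producer side: store as many samples as fit, drop the rest
        # instead of blocking; returns the number actually written.
        written = 0
        for s in samples:
            next_pos = (self.write_pos + 1) % self.size
            if next_pos == self.read_pos:
                break  # buffer full
            self.data[self.write_pos] = s
            self.write_pos = next_pos
            written += 1
        return written

    def read(self, n):
        # Consumer side (the audio callback): return up to n samples
        # without ever waiting for the producer.
        out = []
        while len(out) < n and self.read_pos != self.write_pos:
            out.append(self.data[self.read_pos])
            self.read_pos = (self.read_pos + 1) % self.size
        return out
```

The latency I mentioned comes from exactly this buffering: samples sit in the buffer between being mixed and being handed to jack.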
So we're one step closer to jack transport support. Ton promised to instruct Brecht and/or Campbell to create an API for external timers to get AV sync working. They will talk to me when that's going to be worked on; we'll then design the API to be a perfect fit for jack transport, bringing us one little step further.
Crossfading audio
February 8, 2010. Posted by audaspace in Sequencer.
Got another one:
- Crossfading python script.
Just select two overlapping sound strips in the sequencer and run the Crossfade Sounds operator. This creates keyframes in the volume animation that perform the crossfade.
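What the operator produces boils down to two linear volume ramps over the overlap region. A standalone sketch of those keyframe values (the names are illustrative; the real operator writes blender F-Curve keyframes instead of returning a dict):

```python
def crossfade_points(overlap_start, overlap_end):
    # Keyframe values equivalent to what a linear crossfade sets up:
    # the outgoing strip ramps from full volume down to silence across
    # the overlap, while the incoming strip ramps up in parallel.
    return {
        "strip_out": [(overlap_start, 1.0), (overlap_end, 0.0)],
        "strip_in": [(overlap_start, 0.0), (overlap_end, 1.0)],
    }
```

With linear interpolation between the keys, both strips sit at half volume in the middle of the overlap.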
Sequencer Audio Update
February 8, 2010. Posted by audaspace in Sequencer.
I've just committed the next bigger audio update to blender 2.5. Sequencer audio should work very well now, except for audio-video sync, which is still missing but will be tackled soon.
Moreover I’ve also done some other changes and I’m now able to remove the following points from the TODO list:
- Make it possible to mix down the audio to an audio file.
- Display the audio wave of cached sounds in the strip.
- Make the volume of audio strips be animatable.
I also messed around with ffmpeg a lot during this work and am happy that I got Ogg and Vorbis working, as well as WAV, MP3 and Matroska mixdown. What I unfortunately haven't gotten working yet is FLAC; no idea what problem ffmpeg has there.
The changes are available in builds of revision 26693 upwards.