
GSoC 2011 Wrap Up August 25, 2011

Posted by audaspace in General.

Hi!

So GSoC 2011 is officially over; time for a conclusion post. Maybe you have noticed already, but everything I coded during GSoC 2011 will be in the Blender 2.6 release coming in October this year. The merging patch is currently under review, and the whole Pepper GSoC branch will be in trunk in about a week. Yay!

During the last week of GSoC I documented just about every audio feature Blender has, with some probably interesting comments, so if you use any area of Blender audio, you had better read the related documentation!

Also, I'd like to announce that I'll probably give a presentation about Blender audio at the Blender Conference this year!

Apart from that, I'd like to update the TODO list; some of the points on it have been addressed during GSoC:

  • Multichannel support
  • Position audio objects in 3D space.
  • Render 3D audio to the animation.

So from the old TODO list only nodal sound editing remains, but I'm questioning the sense behind such a feature; I'm not sure there are real use cases for it. What do you think?

Nevertheless, I have some new TODO list points collected during GSoC:

  • HRTFs: This is something I would have liked to implement during GSoC as a bonus, but unfortunately there wasn't enough time at the end for it.
  • Reverberation: Now that Blender has the basics of 3D audio animation, it would be awesome to have some ray-tracing-like sound rendering. Such things already exist and have even been implemented using Blender, but unfortunately didn't make it into Blender itself. Anyway, this task is too big for me and is better done by people working scientifically in this area.
  • Equalizer, pitch shifting, crosstalk cancellation and other “DAW features”: These are things requested by users. However, as I have stated elsewhere before, and I'd like to repeat it here on this blog: I don't think Blender should turn into a DAW, at least not as long as we don't have as many audio developers as other developers (and that will most likely never happen). Such an “eierlegende Wollmilchsau” (an egg-laying wool-milk-sow, i.e. a do-everything tool) sounds nice in theory, but isn't really practical; there are other (open source) applications providing this functionality, like Ardour. Blender audio should concentrate on Blender-related tasks.
  • Lip sync: During GSoC I implemented a better waveform display in the sequencer that can already be used nicely for manual lip sync. However, there are better, partially automated workflows like other programs have, but that feature is a GSoC project of its own and was the second project I applied with this year.

Those were the main points; here are the remaining smaller ones:

  • aud namespace: Currently the C++ library uses an “AUD_” prefix for everything; it would be better to have a proper namespace. This has no effect on users, though. 😉
  • Multitrack files: Some container formats can hold multiple audio tracks in a single file (e.g. different languages); it would be nice if Blender let you select which track to use when you load the file. This is going to come as soon as Peter Schlaile's FFmpeg proxies are also committed to trunk (so it might be in 2.6 already).
  • dB input: The sequencer used to have an attenuation field where you could set the volume in dB. I removed that during GSoC, because two fields for a single value is a bit ridiculous, but with the unit system Blender has, it would be nice to be able to enter dB in any volume field (see the small conversion sketch after this list). However, so far the unit system only supports linear conversions, so I have to wait for Campbell to implement callbacks so that we can add the logarithmic dB unit.
  • Streaming inputs: Microphones, internet radio streams, Python-sourced sound buffers and Jack inputs are on users' wish lists. However, there are some problems regarding those that would have to be solved first: an audio file can be opened multiple times asynchronously, and you can seek in it at any time. That's not possible with such streams and causes real trouble. So if someone would like to tackle a design to support these types of sound sources, see further down the post.
  • Bake FFT: This is also a user request, though from a while ago already. Someone told me that he'd like to have an operator like Bake Sound to F-Curve that bakes the FFT of the sound. Unfortunately, I no longer know who that was or how exactly he thought this should work. In case you know, contact me!
  • Buffered factories: That's a concept I had in the OpenAL output device for some time: caching sounds directly in the output device (for users with audio hardware that supports it) for best performance. The use of this in Blender is a bit limited; it's only interesting for the game engine, and even there I'm pretty sure the performance increase for the very small group of users with such hardware is pretty low. However, if I ever decide to implement this again, I have a Device Buffers concept in mind to replace the original buffered factories.
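
Regarding the dB point above, here is a minimal sketch of the conversion such a logarithmic unit would have to do; the function names are purely illustrative and not part of Blender's API:

    import math

    def db_to_linear(db):
        # -6 dB becomes roughly 0.5, 0 dB stays 1.0
        return 10.0 ** (db / 20.0)

    def linear_to_db(volume):
        # inverse direction, for displaying an existing volume factor as dB
        return 20.0 * math.log10(volume)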

I hope these TODO items showed you some possibilities where Blender audio could head in the future, BUT I'd really like to have some help, as I've been the only Blender audio developer for nearly 3 years now and I'm not really the best man for it (I didn't have any audio coding experience before I started to work on Blender audio, and everything new I implement I have to learn first). So in case you'd like to help out, don't hesitate to contact me, whether it's here, on IRC or by mail.

So apart from smaller things, I fear I won't implement much of the above in the near future, as I have to do loads of university work again now and work on other things. I hope you're not too disappointed to hear that after such an interesting TODO list.

Regards


3D Audio GSoC Demo Video 2: Speaker objects August 9, 2011

Posted by audaspace in General.

Hi guys!

Speaker objects are working and I’ve created a demo video!

Tutorial:

Result:

Regards

3D Audio GSoC Demo Video July 29, 2011

Posted by audaspace in General, Sequencer.

Hi guys,

I've recorded a video about the progress so far. It's not too much yet from a user point of view, but a lot of the internals have changed.
Last but not least, sorry for my English.

Enjoy:

Regards

GSoC 2011: 3D Audio July 7, 2011

Posted by audaspace in General.

Hi!

About

I'm sorry for ignoring the blog for so long, but university kept me very busy until now and I wanted to spend all my available time on coding.

So I've been accepted for GSoC 2011 and I'm working on 3D audio, so hopefully you can expect 3D sound objects in Blender within 2 months. 🙂

Here you can read the original proposal: http://wiki.blender.org/index.php/User:NeXyon/GSoC2011/Proposals#3D_Audio_.5Baccepted.21.5D

Status

The first milestone, “General improvements”, has already been reached, except for the required animation system updates, where I'm waiting for help from Joshua Leung.

You can read about the progress on my wiki page here: http://wiki.blender.org/index.php/User:NeXyon/GSoC2011/3D_Audio

Help!

If you want to help me, what I really need is testers! Any build of the Pepper or Salad branch at revision 38200 or later contains all the changes so far. If you experience any audio problems, just tell me somehow (IRC, BlenderArtists, bug tracker, here).

Check out the buildbot http://builder.blender.org/download/ or graphicall http://graphicall.org/ for builds.

Regards!

Audaspace Python API September 9, 2010

Posted by audaspace in Game Engine, General.

I’m happy to announce that Audaspace has a Python API now thanks to Google Summer of Code!

  • Python API for audaspace for direct access.

People using the Game Engine don't have to use PyGame anymore, as the Audaspace Python API is somewhat superior.
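
To give you an idea of what this looks like in practice, here is a minimal sketch of playing a sound from a script; it follows the API as documented at the time (aud.device(), aud.Factory, device.play()), but please double-check the exact names against the documentation linked below:

    import aud

    device = aud.device()              # the currently opened output device
    factory = aud.Factory("bang.wav")  # describes the sound, nothing plays yet
    quiet = factory.volume(0.5)        # factories can be chained with effects

    handle = device.play(quiet)        # start playback and keep the handle
    # ... later, e.g. when the game object is removed:
    handle.stop()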

For further details please see http://wiki.blender.org/index.php/User:NeXyon/GSoC2010/Audaspace

You can find somewhat messy documentation here: http://www.blender.org/documentation/250PythonDoc/aud.html

I hope there'll be a better formatted version soon.

Apart from that, Audaspace got a general overhaul during GSoC, which also fixed some previously undiscovered bugs.

The PulseAudio Pain January 8, 2010

Posted by audaspace in General.

(If you’re only here to get a solution, scroll down to the heading “The solution”.)

Well, first of all: I didn't want to write a blog post about this, but so many people have had problems with PulseAudio in connection with OpenAL that I now want to give a description of the problem and several ways to solve it here.

What's the problem with PulseAudio? Well, most audio developers and pro audio users know that PulseAudio is actually crap. It adds unnecessary execution layers to the audio processing, with the disadvantage of higher latency and an unnecessary waste of CPU cycles (and thus energy and time in the end).

PulseAudio is an audio server, like ESD and aRts in the older days of Gnome and KDE. The reason for those sound servers was that with the old Linux kernel audio drivers of OSS (Open Sound System), only one application could use sound, unless you had a sound card that supported hardware mixing, which users rarely had. This was also the reason for the sound problems in Blender versions before 2.5: they used two different audio libraries (SDL and OpenAL) at the same time, and only one could get the device, so either game engine or sequencer audio didn't work. Back to the sound servers' job: they opened the device, all applications played to the sound server, and it mixed the audio and played it. Now that OSS is history and ALSA, which has software mixing, has been in the kernel for a long time already, the sound server's job no longer needs to be done, so you don't need one anymore. KDE was intelligent enough to understand this and removed aRts, while the Gnome project after some time added a new sound server: PulseAudio.

Some people might argue that PulseAudio also brings new features that you normally don't have: you can play sounds over the network, set a volume for every application, and combine several sound cards to be presented as one. But the fact is: no typical PC user needs this! Most applications have their own volume setting, ALSA can already present several sound cards as one, and anyone who really needs audio over the network should be able to set that up themselves (for example by installing PulseAudio if they really need it, say for a terminal server and its clients). Pro audio users won't use PulseAudio either, as they will use Jack. Jack is another sound server, but it has a different job: with Jack you have a lot more control over the audio flow, it also supports MIDI, it is suitable for realtime use (VERY low latency!), and it provides other cool things like Jack Transport, with which you can start/pause/stop/seek in one application and let all the others do the same.

And now, what's the problem with Blender? The problem is not in Blender itself, otherwise I could have fixed the issue a long time ago. The problem is in Linux's OpenAL library (OpenAL Soft), which on systems with PulseAudio plays through Pulse. I've already talked to Chris (the developer of OpenAL Soft, who is a really cool guy; thank you Chris for doing such a great job!) on IRC, and he told me that the API of PulseAudio sucks too. He tried to fix the problems that arise with Pulse in version 1.10, and the remaining ones were fixed in OpenAL Soft SVN, but even that version may give you trouble, as it seems impossible to prevent race conditions with PulseAudio.

The solution

There are several possible solutions to fix the problem:

1) [My personal recommendation] Uninstall PulseAudio. I won't give you any details on how this works, because every distribution has a different package system, but I'm sure you'll quickly find a step-by-step tutorial for your distro with Google.

2) Update OpenAL Soft to version 1.11 or compile it yourself from Git. This might be a bit difficult for beginners, and also for people who don't like custom packages on systems where the distribution doesn't provide an up-to-date package yet, but it might still be the solution you want.

3) Configure Blender not to use OpenAL. In the user preferences, set SDL or None and then save the changes with Ctrl+Alt+U. You might want to restart Blender, although the new setting applies immediately when you change the sound device.

Blender Audio Visualisation January 2, 2010

Posted by audaspace in General.

As promised, here is the longer news about what I committed and wrote about yesterday.

Now, after fixing the quickly reported bugs, I've been able to make some example videos.

But first of all, I am happy that I can delete two of the TODO list points that were on the list when I opened this blog:

  • Float should be the default sample format for audio samples.
  • Render sound waves to f-curves to use them in animations.

A quick comment on the first point: I had been supporting sample formats other than float, like the commonly used S16 format, in the internals of Audaspace, mainly so that realtime use wouldn't suffer a speed loss. But I found out that in the cases where that could apply, there isn't much difference in performance, and often, when you want to use effects, you have to do floating-point operations anyway. So in the end it's more likely that NOT using float results in a performance loss, so I removed that programming overhead. Other sample formats are still supported for output devices though, as SDL for example only supports the U8 and S16 sample formats.
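
For illustration, this is roughly the kind of conversion that has to happen anyway whenever integer samples meet floating-point effect code (purely illustrative, not Audaspace code):

    import numpy as np

    # a few signed 16-bit samples, as a decoder might deliver them
    s16 = np.array([0, 16384, -32768, 32767], dtype=np.int16)

    # convert once to float in roughly [-1.0, 1.0) ...
    samples = s16.astype(np.float32) / 32768.0

    # ... so that effect code like a simple gain needs no further conversion
    samples *= 0.5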

Now to the promised videos. The sound to f-curve operator supports nearly the same options as the soundtracker script from technoestupido (http://technoestupido.musiclood.com/soundtracker.py); for reference, a scripted call might look roughly like the sketch below.
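
This is only a sketch based on my memory of the 2.5 operator API (bpy.ops.graph.sound_bake with low, high, attack and release parameters), so double-check the names against your build; the file path is hypothetical, and it has to be run with a Graph Editor area active and the target F-Curve channel selected:

    import bpy

    bpy.ops.graph.sound_bake(
        filepath="/path/to/loop.wav",  # hypothetical sound file
        low=200.0,                     # lower frequency bound of the band
        high=2000.0,                   # upper frequency bound of the band
        attack=0.005,                  # how fast the curve follows rising volume
        release=0.2,                   # how slowly it falls again
    )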

So as a first try, I used the loop from his soundtracker example video (http://www.youtube.com/watch?v=O10lWfH4TWQ) and, similar to him, made two cubes, where the left one has a highpass filter and the right one a lowpass and higher attack/release values:

The second video is a little more advanced; there I used the Pink Panther theme song I made with LMMS. I animated the count of an array modifier on a cube, with different lowpass/highpass values in the operator, to get an equalizer.

The Suzannes to the left and right scale up, with a lowpass on the left and a highpass on the right, and accumulate and additive enabled. Just enjoy:

Happy new year! January 1, 2010

Posted by audaspace in General.

I wish all blender users a happy new year 2010!

I've just committed a huge changeset which also includes a first implementation of the 2.5 sound to f-curve operator. More on that tomorrow, as I really need to go to bed now.

Good night!

Welcome December 28, 2009

Posted by audaspace in General.

Hello and welcome to my Blender Audio Development Blog!

I'll post the development status and updates of the Blender audio system here so that you can always be up to date. However, I probably won't be able to post too often, as I'd better spend my spare time on developing, right?