Real life intervenes again (parsing PDF, whee!) and I have to cut short a planned epic iDevBlogADay entry. But I do want to bang out a few quick notes on various topics of interest.

The first is Core Audio in iOS 5, which we can now talk about publicly. If we go through the iOS 4.3 to iOS 5.0 API Differences document, we see that Audio Units accounts for a large share of the diffs. This comes from the addition of a suite of units that finally makes programming at the unit level a lot more interesting. Whereas we used to get a single effects unit (AUiPodEQ), we now get high- and low-pass filters, the Varispeed unit, a distortion box, and a parametric EQ that lets us play with sliders instead of the “canned” EQ settings like “Bass Booster” and “Spoken Word”. Even more useful, we get the AUFilePlayer, meaning you can now put audio from a file at the front of an AUGraph, instead of enduring the sheer pain of decoding your own samples and passing them to the AUGraph through a CARingBuffer.
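To give a feel for how much nicer this is, here's a minimal sketch of an AUFilePlayer feeding RemoteIO in an AUGraph. It assumes you already have a valid `CFURLRef` to a playable audio file, and it collapses all the `OSStatus` error checking you'd do in real code:

```c
#include <AudioToolbox/AudioToolbox.h>

// Sketch only: builds file-player -> RemoteIO, schedules the whole
// file, and starts the graph. All OSStatus checks omitted for brevity.
static void playFileThroughGraph(CFURLRef fileURL) {
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription playerDesc = {
        .componentType = kAudioUnitType_Generator,
        .componentSubType = kAudioUnitSubType_AudioFilePlayer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription outputDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode playerNode, outputNode;
    AUGraphAddNode(graph, &playerDesc, &playerNode);
    AUGraphAddNode(graph, &outputDesc, &outputNode);
    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, playerNode, 0, outputNode, 0);
    AUGraphInitialize(graph);

    // Get the player unit out of the graph and hand it the file.
    AudioUnit playerUnit;
    AUGraphNodeInfo(graph, playerNode, NULL, &playerUnit);

    AudioFileID audioFile;
    AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &audioFile);
    AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduledFileIDs,
                         kAudioUnitScope_Global, 0,
                         &audioFile, sizeof(audioFile));

    // Schedule a region covering the whole file...
    ScheduledAudioFileRegion region = {0};
    region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;
    region.mAudioFile = audioFile;
    region.mFramesToPlay = (UInt32)-1; // play to end of file
    AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0, &region, sizeof(region));

    // ...and tell it to start on the next render cycle.
    AudioTimeStamp startTime = {0};
    startTime.mFlags = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));

    AUGraphStart(graph);
}
```

Compare that to the old approach: open the file with Audio File Services, pull and convert packets yourself, and shovel them through a ring buffer into a render callback. The scheduling properties do all of that for you now.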

iOS also gets the AUSampler unit introduced in Lion, which provides a MIDI-controlled virtual instrument whose output is pitch-shifted from a source sample. This was shown off at WWDC, although providing the source to the unit by means of an .aupreset is still a dark (undocumented) art. This is the first actual MIDI audio unit in iOS, which makes the presence of Core MIDI more useful on the platform.
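Once you do have an AUSampler loaded with an instrument, driving it is pleasantly simple: it responds to ordinary MIDI events via MusicDeviceMIDIEvent(). A quick sketch, assuming `samplerUnit` is an AUSampler instance already sitting in a running AUGraph:

```c
#include <AudioToolbox/AudioToolbox.h>

// Sketch: sound a middle C on an already-configured AUSampler.
// samplerUnit is assumed to come from AUGraphNodeInfo() on a running graph.
static void playMiddleC(AudioUnit samplerUnit) {
    const UInt32 noteOnStatus  = 0x90; // MIDI note-on, channel 0
    const UInt32 noteOffStatus = 0x80; // MIDI note-off, channel 0

    // Note 60 (middle C) at velocity 100, on the next render cycle.
    MusicDeviceMIDIEvent(samplerUnit, noteOnStatus, 60, 100, 0);

    // ...later, when the note should stop:
    MusicDeviceMIDIEvent(samplerUnit, noteOffStatus, 60, 0, 0);
}
```

The hard part remains getting the instrument *into* the unit in the first place; the .aupreset side of this is what's still undocumented.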

Core Audio giveth, but Core Audio also taketh away: iOS 5 removes (not deprecates) VoiceIOFarEndVersionInfo. This struct, and its related constants (specifically kVoiceIOFarEndAUVersion_ThirdParty), were documented as interoperating with a hypothetical “3rd-party device following open FaceTime standards”, something I took note of last May as possibly meaning that FaceTime was still ostensibly being opened up. With the removal of these APIs, I think that closes the book on Apple having any intention to live up to its vow to publish FaceTime as an open standard.

There’s lots more to talk about, but I’m already over my allotted blogging time, and work beckons. Perhaps you’d like to hear me speaking about this stuff and demo’ing it? I’m doing an all-new-for-iOS-5 Core Audio talk at two upcoming conferences:

I’ll also be doing a talk about AV Foundation capture at these conferences. And back on audio: I just heard from my editor that the last three chapters of Core Audio should be in the Rough Cut on Safari Books Online in the next week or so. I still have some work to do to clean up bits that are broken on Lion (read the sidebar on Audio Components if you’re having a problem creating audio units with the old Component Manager calls) and to clear out forward references to material that didn’t make the final cut for the book.

2 Comments

  • 1. sbkp replies at 15th November 2011 at 6:08 pm :

    So… I’m new to this iOS development world. How does one get access to the dark art of AUSampler instrument design and the aupreset file format documentation?

    Thanks,
    Stefan

  • 2. [Time code];… replies at 23rd December 2012 at 12:39 am :

    [...] a voice chat (for which we need an open protocol with many client installs in the wild… alas open FaceTime), edit and sweeten it, tag and export it, upload it to a host, and update the feed XML. On a Mac, [...]
