The last few weeks have been largely spent in Core Audio, which is surely coloring my perception of the iPhone SDK. It’s interesting talking to Daniel — author of the Prags’ Cocoa book as well as my editor on their iPhone title — as he’s working with Cocoa’s high-level abstractions, like the wonderful KVC/KVO, while I’m working at an extremely low level, down in Core Audio.

There’s no question that spending pretty much a month doing mostly C, between the streaming media chapter for the book and a Core Audio article for someone else, has colored that perception. Over at O’Reilly, I blogged about the surprising primacy of C for serious iPhone development, and the challenges that presents for a generation that knows only C’s cleaned-up successors, like Java and C# (to say nothing of the various scripting languages). At least one of the responses exhorted readers to grow a pair and read K&R, but the more I’ve thought about it, the more I think that may be a bad suggestion. K&R was written for the Unix systems programmer of the late ’70s and early ’80s. It doesn’t cover C99, and many of the conventions of that era are surely out of date (for example, why learn null-terminated ASCII strings, when the iPhone and Mac programmer should be using the Unicode-friendly NSString or CFStringRef?). This is an interesting problem, one which I’ll have more to say about later…
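
To make the string example concrete, here’s a quick sketch of the gap between the two idioms: the K&R-style null-terminated char array versus a Core Foundation CFStringRef, which knows its own encoding and counts characters rather than bytes. This is my own toy illustration, not code from the book:

```c
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    // The K&R idiom: a null-terminated byte array. Fine for ASCII,
    // but "length in bytes" and "length in characters" diverge the
    // moment you leave it. (Assumes this source file is saved as UTF-8.)
    char cName[] = "Café";
    printf("strlen says %zu bytes\n", strlen(cName));

    // The Core Foundation idiom: an opaque, encoding-aware string.
    CFStringRef cfName = CFStringCreateWithCString(
        kCFAllocatorDefault, cName, kCFStringEncodingUTF8);
    printf("CFStringGetLength says %ld characters\n",
           (long)CFStringGetLength(cfName));
    CFRelease(cfName);
    return 0;
}
```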

The streaming media chapter clocks in around 35 pages. Daniel wondered if it might cover too much, but I think the length just comes from the nature of Core Audio: involved and verbose. The chapter really only addresses three tasks: recording with an Audio Queue, playing with an Audio Queue (less important now that we have AVAudioPlayer, but still needed for playing anything other than local files), and converting between formats. On that last topic, precious little has been written publicly: Googling for ExtAudioFileCreateWithURL produces a whopping 16 unique hits. Still, there’s a risk that this chapter is too involved and of use to too few people… it’ll be in the next beta and tech review, but it might not suit the overall goals of the book. If we cut it, I’ll probably look to repurpose it somehow (maybe I can pitch the “Big Ass Mac and iPhone Media book” dream project, the one that covers Core Audio, Core Video, and QTKit).
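
For the curious, the call itself looks roughly like this. A minimal sketch with illustrative settings (16-bit stereo PCM into a CAF file), not the chapter’s actual code; a real converter would also set kExtAudioFileProperty_ClientDataFormat and loop over ExtAudioFileWrite:

```c
#include <AudioToolbox/AudioToolbox.h>

// Create an Extended Audio File for writing. The format values here
// are illustrative: 44.1 kHz, 16-bit signed, packed, stereo PCM.
OSStatus createOutputFile(CFURLRef fileURL, ExtAudioFileRef *outFile) {
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                             kAudioFormatFlagIsPacked;
    asbd.mBitsPerChannel   = 16;
    asbd.mChannelsPerFrame = 2;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerFrame    = (asbd.mBitsPerChannel / 8) * asbd.mChannelsPerFrame;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket;

    return ExtAudioFileCreateWithURL(fileURL,
                                     kAudioFileCAFType,      // write a .caf
                                     &asbd,
                                     NULL,                   // default channel layout
                                     kAudioFileFlags_EraseFile,
                                     outFile);
}
```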

The article goes lower than Audio Queues and Extended Audio Files, down to the RemoteIO audio unit, in order to get really low-latency audio. Michael Tyson has a great blog post on recording with RemoteIO, but for this example, I’m playing low-latency audio by generating samples on the fly. (I actually reused some sine wave code from the QuickTime for Java book, though that example wrote samples to a file, whereas this one fills a 1 KB buffer for immediate playback.)
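
The heart of that approach is a render callback the audio unit invokes whenever it needs more samples. Here’s a minimal sketch of the shape of it; SineState and its fields are my own illustrative names, not the article’s, and I’m assuming the unit’s stream format has been set to 16-bit signed mono PCM:

```c
#include <AudioUnit/AudioUnit.h>
#include <math.h>

typedef struct {
    double phase;        // current waveform position, in radians
    double frequency;    // tone frequency in Hz
    double sampleRate;   // e.g. 44100.0
} SineState;

static OSStatus SineRenderProc(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    SineState *state = (SineState *)inRefCon;
    double phaseIncrement = 2.0 * M_PI * state->frequency / state->sampleRate;
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        samples[frame] = (SInt16)(sin(state->phase) * 32767.0);
        state->phase += phaseIncrement;
        if (state->phase > 2.0 * M_PI)
            state->phase -= 2.0 * M_PI;
    }
    // No logging, no allocation: this runs on a real-time thread and
    // has to return before the hardware needs the buffer.
    return noErr;
}
```

You’d attach this to the RemoteIO unit with AudioUnitSetProperty and kAudioUnitProperty_SetRenderCallback.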

Amusingly, after switching from easy-to-compute square waves to nicer-sounding sine waves, I couldn’t figure out why I wasn’t getting sound… until I took out a logging statement and it started working. Presumably, the expense of the logging was causing me to miss the audio unit’s deadlines.

Working at this level has me rethinking whether a media API of this richness and power could ever have worked in Java. It’s not just Sun’s evident lack of interest and credibility in media; it’s also the fact that latency is death at the low levels I’m working in right now, and no user would understand why their audio had pauses and dropouts because the VM needed to take a break for garbage collection. If Java ever did get serious about low-latency media, would we have to assume the use of real-time Java?

I’m amazed I haven’t had more memory-related crashes. I felt dirty using pointer math to fill a buffer with samples, but it works, and it’s the right approach for the job and for the idioms of C and Core Audio. After a month of mostly C, I think I’m getting comfortable with it again; compared to my struggles with this stuff a year ago, it’s coming a lot easier. When I have time, maybe I’ll start over on the web radio client and actually get it working.
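
For anyone whose C is rusty, this is the sort of pointer math I mean: walking a raw byte buffer with a typed pointer instead of indexing an array. A toy square-wave fill, my own illustration rather than the book’s code:

```c
#include <stdint.h>
#include <stddef.h>

// Fill a raw buffer with a 16-bit signed square wave. wavelength is
// the period in samples; byteCount should be a multiple of 2.
static void fillSquareWave(void *buffer, size_t byteCount, size_t wavelength) {
    int16_t *sample = (int16_t *)buffer;
    int16_t *end = sample + (byteCount / sizeof(int16_t));
    for (size_t i = 0; sample < end; i++) {
        *sample++ = ((i % wavelength) < wavelength / 2) ? 32767 : -32768;
    }
}
```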

Next up: finishing the low-latency article, fixing an unthinkable number of errata on the book (I haven’t looked in a while, and I dread how much I’ll need to fix), then on to AU Graph Services and mixing.
