I’m surprised how fast iOS conference slides go stale. I re-used some AV Foundation material that was less than a year old at August’s CocoaConf, and some of it already seemed crusty, not least a performSelectorOnMainThread: call on a slide where any decent 2011 coder would use dispatch_async() and a block. So it’s probably just as well that I did two completely new talks for last weekend’s Voices That Matter: iOS Developers Conference in Boston.

OK, long-time readers — ooh, do I have long-time readers? — know how these posts work: I do links to the slides on Slideshare and code examples on my Dropbox. Those are at the bottom, so skip there if that’s what you need. Also, I’ve put URLs of the sample code under each of the “Demo” slides.

The AV Foundation talk is completely limited to capture, since my last few talks have gone so deep into the woods on editing (and I’m still unsatisfied with my mess of sample code on that topic that I put together for VTM:iPhone Seattle in the Spring… maybe someday I’ll have time for a do-over). I re-used an earlier “capture to file and playback” example, and the ZXing barcode reader as an example of setting up an AVCaptureVideoDataOutput, so the new thing in this talk was a straightforward face-finder using the new-to-iOS Core Image CIDetector. Apple’s WWDC has a more ambitious example of this API, so go check that out if you want to go deeper.

The Core Audio talk was the one I was most jazzed about, given that Audio Units are far more interesting in iOS 5 with the addition of music, generator, and a dozen effects units. That demo builds up an AUGraph that takes mic input, a file-player looping a drum track, and an AUSampler that accepts MIDI input from a physical keyboard (the Rock Band 3 keyboard, in fact) to play a two-second synth sample I cropped from one of the Soundtrack loops. All of this is mixed by an AUMultichannelMixer and then fed through two effects (distortion and a low-pass filter) before going out to hardware through AURemoteIO. Oh, and there’s a simple detail view that lets you adjust the input levels into the mixer and bypass the effects.

The process of setting up a .aupreset and getting it into an AUSampler at runtime is quite convoluted. There are lots of screenshots from AU Lab in the slides, but I might just shoot a screencast and post it to YouTube. For now, combine WWDC 2011 session #411 with Technical Note TN2283 and you have as much of a fighting chance as I did.

I’ll be doing these talks again at CocoaConf in Raleigh, NC on Dec. 1-2, with a few fix-ups. The face-finder has a stupid bug where it creates a new CIDetector on each callback from the camera, which is grievously wasteful. For the Core Audio AUGraph, I noticed in the AU property docs that the mixer has pre-/post- peak/average meters, so it should be easy to add level meters to the UI. So those versions of the talks will be a little more polished. Hey, it was a tough crunch getting enough time away from client work to get the sample code done at all.

Speaking of preparation, the other thing notable about these talks is that I was able to do the slides for both talks entirely on the iPad, while on the road, using Keynote, OmniGraffle, and Textastic. Consumption-only device, my ass.


Slides and code

Sometime last week, the archives at lists.apple.com went dark, replaced by the ambiguous message:

Archives are currently disabled. If you require urgent access to the Archives, please contact postmaster@lists.apple.com with the subject “ARCHIVES”.

Apple’s developer mailing lists have been around for at least a decade, and contain a wealth of real-world knowledge, particularly on the many APIs that pre-date the establishment of Apple’s web-based Dev Forums a few years ago. While the lists themselves continue to operate and distribute new messages, the archives have been down since at least late last week.

For Core Audio, the loss of the archives is absolutely devastating. Everyone complains about the lack of documentation for Core Audio, but that’s not quite the problem. The nature of Core Audio is that the syntax is simple and the semantics are where the real work gets done. It’s very easy to write an AudioUnitSetProperty() call that compiles, and very difficult to write one that actually does something useful instead of crashing. That’s the good and the bad of an API whose favorite idiom is getting and setting void* properties; compare AV Foundation, whose far more limited abilities can reasonably be gleaned by reading the docs.

When the nature of the framework is knowing what you can and can’t do with the various properties, what the audio engines will and won’t accept, and why the hell you’re getting OSStatus -50 (paramErr), the best resource you could hope for is 10 years of people doing the same thing and sharing their results. And now it’s gone.

What’s worse is that Apple, as ever, has provided far too little information for their partners to know where they stand or what to do. Are the archives down because of prolonged maintenance, like taking the tapes from Cupertino to North Carolina and hosting them there? Was there a database crash that is now being recovered from? Or was there a business or political decision to just shut down the archives indefinitely (imagine if, say, a legal rival was getting evidence against Apple from the listserv archives, or if Charlie Miller’s App Store security hack was based on information gleaned from the archives). The thing is, we don’t know whether or not to expect the archives to return, and as is so often the case, Apple isn’t being clear with us.

For now, we can complain. And file bugs against the apple.com site. Feel free to dupe mine from Open Radar.

Dennis Ritchie, a co-creator of Unix and C, passed away a few weeks ago, and was honored with many online tributes this weekend for a Dennis Ritchie Day advocated by Tim O’Reilly.

It should hardly be necessary to state the importance of Ritchie’s work. C is the #2 language in use today according to the TIOBE rankings (which, while criticized in some quarters, are at least the best system we currently have for gauging such things). In fact, TIOBE’s preface to the October 2011 rankings predicted that a slow but consistent decline in Java would likely make C the #1 language when this month’s rankings come out.

Keep in mind that C was developed between 1969 and 1973, making it nearly 40 years old. I make this point often, but I can’t help saying it again: when Paul Graham considered the possible traits of The Hundred-Year Language, the one we might be using 100 years from now, he overlooked the fact that C had already made an exceptionally good start on a century-long reign.

And yet, despite being so widely used and so important, C is widely disparaged. It is easy, and popular, and eminently tolerated, to bitch and complain about C’s primitiveness.

I’ve already had my say about this, in the PragPub article Punk Rock Languages, in which I praised C’s lack of artifice and abstraction, its directness, and its ruthlessness. I shouldn’t repeat the major points of that article — buried as they are under a somewhat affected style — so instead, let me get personal.

As an ’80s kid, my first languages were various flavors of BASIC for whatever computers the school had: first Commodore PETs, later Apple IIs. Then came Pascal for the AP CS class, as well as a variety of languages that were part of the ACSL contests (including LISP, which reminds me I should offer due respect to the recent passing of its renowned creator, John McCarthy). I had a TI-99 computer at home (hey, it’s what was on sale at K-Mart) and its BASIC was godawful slow, so I ended up learning assembly for that platform, just so I could write programs that I could stand to run.

C was the language of second-year Computer Science at Stanford, and I would come back to it throughout college for various classes (along with LISP and a ruinous misadventure in Prolog), and some summer jobs. The funny thing is that at the time, C was considered a high-level language: abstracting away the CPU was sufficient to count as “high-level”. Granted, we also still drew a distinction between “assembly language” and “machine language”, presumably because there was someone somewhere without an assembler who was thus forced to provide the actual opcodes. Today, C is considered a low-level language. In my CodeMash 2010 talk on C, I postulated that a high-level language is now expected to abstract away not only the CPU, but memory as well. In Beyond Java, Bruce Tate predicted we’d never see another mainstream language that doesn’t run in a VM and offer the usual benefits of that environment, like memory protection and garbage collection, and I suspect he’s right.

But does malloc() make C “primitive”? I sure didn’t think so in 1986. In fact, it did a lot more than the languages of the day. Dynamic memory allocation was not actually common then: the flavors of BASIC of that era had stack variables only, no heap. To have, say, a variable number of enemies in your BASIC game, you probably needed to dimension arrays to some maximum size and use some or all of those slots. And of course relative to assembly language, where you’re directly exposed to the CPU and RAM, C’s abstractions are profound. If you haven’t had that experience, you don’t appreciate that a = b + c involves loading b and c into CPU registers, invoking an “add” opcode, and then copying the result from a register out to memory. One line of C, many lines of assembly.

There is a great blog post from a few years ago assessing the Speed, Size, and Dependability of Programming Languages. It represents the relationship between code size and performance as a 2-D plot, where an ideal language has high performance with little code, and an obsolete language demands lots of code and is still slow. These two factors are a classic trade-off, and the other two quadrants are named after the traditional categorization: slow but expressive languages are “script”, fast but wordy are “system”. Go find gcc on the plot: it’s clearly the fastest, and its wordiness is really not that bad.

Perhaps the reason C has stuck around so long is that its bang for the buck really is historically remarkable, and unlikely to be duplicated. For all its advantages over assembly, it maintains furious performance, and the abstractions since built atop C (with the arguable exception of Java, whose primary sin is being a memory pig) sacrifice performance for expressiveness. We’ve always known this, of course, but it takes a certain level of intellectual honesty to really acknowledge how many extra CPU cycles we burn by writing code in something like Ruby or Scala. If I’m going to run that slow, I think I’d at least want to get out of curly-brace / function-call hell and adopt a different style of thinking, like LISP.

I was away from C for many years… after college, I went on a different path and wrote for a living, not coming back to programming until the late ’90s. At that point, I learned Java, building on my knowledge of C and other programming languages. But it wasn’t until the mid-2000s that I revisited C, when I tired of the dead-end that was Java media and tried writing some JNI calls to QuickTime and QTKit (the lloyd and keaton projects). I never got very far with these, as my C was dreadfully rusty, and furthermore I didn’t understand the conventions of Apple’s C-based frameworks, such as QuickTime and Core Foundation.

It’s only in immersing myself in iOS and Mac since 2008 that I’ve really gotten good with calling C in anger again, because on these platforms, C is a first-class language. At the lower levels — including any framework with “Core” in its name — C is the only language.

And at the Core level, I’m sometimes glad to only have C. For doing something like signal processing in a Core Audio callback, handing me a void* is just fine. In the higher-level media frameworks, we have to pass around samples and frame buffers and such as full-blown objects, and sometimes it feels heavier than it needs to. If you’re a Java/Swing programmer, have you ever dealt with a big heavy BufferedImage, digging through the Raster object or whatever to do some conversions or lookups, when what you really want is to just get at the damn pixels already? That seems to happen a lot with media APIs written in high-level languages. I’m still not convinced that Apple’s AV Foundation is going to work out, and I gag at having to look through the docs for three different classes with 50-character names when I know I could do everything I want with QuickTime’s old GetMediaNextInterestingTime() if only it were still available to me.

C is underappreciated as an application programming language. Granted, there’s definitely a knack to writing C effectively, but it’s not just the language. Actually, it’s more the idioms of the various C libraries out there. OpenGL code is quite unlike Core Graphics / Quartz, just like OpenAL is unlike Core Audio. And that’s to say nothing of the classic BSD and other open-source libraries, some of which I still can’t crack. Much as I loathe NSXMLParser, my attempt to switch to libxml for the sake of a no-fuss DOM tree ended after about an hour. So maybe it’s always going to be a learning process.

But honestly, I don’t mind learning. In fact, it’s why I like this field. And the fact that a 40-year-old language can still be so clever, so austere and elegant, and so damn fast, is something to be celebrated and appreciated.

So, thanks Dennis Ritchie. All these years later, I’m still enjoying the hell out of C.