There's a wonderfully tongue-in-cheek project called the Alliance for Code Excellence ("Building a better tomorrow — one line of code at a time.") that sells Bad Code Offset certificates. They fund open source projects to produce good code that will, in theory, offset all the bad code out there and mitigate the environmental harm it does. They've asked software authors to write essays on how their projects drive out bad code, offering $500 prizes.

I sat down to write an essay about GPSD in the same vein of high drollery as the Alliance's site, then realized that GPSD actually has a serious case to make. We really do drive out bad code, in both direct and indirect ways, and we supply examples of good practice for emulation.

GPSD is a service daemon and device multiplexer that is the open-source world's basic piece of infrastructure for communicating with GPS receivers, and it's everywhere Linux is: running on PCs, on embedded systems, and on both OpenMoko and the entire line of Maemo cellphones. We're directly relied on by dozens of applications, including pyGPS, Kismet, GPSdrive, gpeGPS, position, roadmap, roadnav, navit, viking, and gaia. If you're doing anything with GPSes on an open-source operating system, GPSD is your indispensable tool.

GPSD's quality is up to the standard required when you're that ubiquitous. In March 2007 a Coverity scan turned up only two errors in over 22,000 LLOC. In more detail: it flagged only four potential problems, and two of those were false positives. This is three orders of magnitude cleaner than typical commercial software, and about half the defect density of the Linux kernel itself at the time.

We get, on average, about one defect report every 90 days, and there are just five on our tracker as I write. Given what we know about the size of our user base, our low rate of incoming bug reports tells us we've maintained a similar level of code quality since the Coverity audit. This hasn't happened by accident. Good practice matters, and I'll describe how we systematize ours in a bit.

First, though, I want to explain how we drive out bad code. The reporting protocols used by GPS sensors are a hideous mess — the kind of mess that tends to nucleate layers of bad code around it as programmers with insufficient domain knowledge try to compensate for the deficiencies at the application level and wind up snarled in ever-nastier hairballs. Part of what GPSD does is firewall all this stuff away; we know everything about the mess so you don't have to, and we present clean data on a well-known port in a well-documented wire format. We then provide client-side service libraries that will unpack GPS reports into native C, C++, Python, or Perl structures so you don't even have to know about our wire format.
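To make this concrete, here is a rough sketch of what a client conversation with the daemon can look like, assuming the JSON-based watcher protocol on our registered TCP port, 2947. The wire format has evolved across gpsd versions and real applications should lean on the client libraries just described rather than the raw socket, so treat this as illustration, not reference:

```python
# Minimal sketch of a client reading position reports from a running gpsd.
# Assumes the JSON watcher protocol on gpsd's registered TCP port (2947);
# real applications should use the client-side service libraries instead.
import json
import socket

def watch_gpsd(host="localhost", port=2947, max_reports=10):
    """Connect to gpsd, ask it to stream reports, and print a few fixes."""
    with socket.create_connection((host, port)) as sock:
        # Ask the daemon to stream reports for all attached devices as JSON.
        sock.sendall(b'?WATCH={"enable":true,"json":true}\n')
        buf = b""
        seen = 0
        while seen < max_reports:
            data = sock.recv(4096)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                try:
                    report = json.loads(line.decode("ascii", "replace"))
                except ValueError:
                    continue  # skip anything that is not a complete JSON object
                # "TPV" (time-position-velocity) reports carry the actual fix.
                if report.get("class") == "TPV" and "lat" in report and "lon" in report:
                    print(report.get("time"), report["lat"], report["lon"])
                    seen += 1

if __name__ == "__main__":
    watch_gpsd()
```

Notice what is not here: no baud-rate guessing, no checksum code, no vendor-protocol special cases. That is the whole point.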

If our client applications had to deal with the back-end mess of poorly-specified NMEA 0183 and seventeen different vendor-specific binary protocols, I can guarantee with dead certainty that the total community bug load from GPS-related problems would go up by an order of magnitude. And I'd bet more than any of the $500 prizes the Alliance is offering that the count would actually go up by two orders of magnitude.

We also try to drive out bad code indirectly in the same way we keep our defect level low — by providing an example of good practice that extends all the way up from our development habits to the zero-configuration design of the gpsd daemon.

The most important thing we do to ensure code quality is maintain a rigorous test suite. Our "make testregress" target runs just shy of sixty regression and unit tests. Forty-six of those exercise the daemon's logic for recognizing and processing device reports; the remaining dozen or so exercise the rest of the code, all the way out to the application service libraries.
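As an illustration of the pattern (not our actual test driver), here is a minimal sketch of such a regression check in Python: replay a captured device log through a decoder and diff its output against a previously blessed check file. The decoder command and file paths below are hypothetical placeholders:

```python
# Sketch of the regression-test pattern: replay a captured device log through
# a decoder and compare its output against a previously blessed "check" file.
# The decoder command and the test-file paths are illustrative placeholders.
import subprocess
import sys

def run_regression(log_path, check_path, decoder_cmd=("./report-decoder",)):
    """Return True if decoding log_path reproduces the blessed check file."""
    with open(log_path, "rb") as log:
        result = subprocess.run(decoder_cmd, stdin=log,
                                capture_output=True, check=True)
    with open(check_path, "rb") as check:
        expected = check.read()
    if result.stdout != expected:
        sys.stderr.write("regression failure: %s\n" % log_path)
        return False
    return True

if __name__ == "__main__":
    ok = run_regression("test/daemon/example-device.log",
                        "test/daemon/example-device.log.chk")
    sys.exit(0 if ok else 1)
```

The blessed check files are what turn a pile of captured logs into an executable specification: any change to the daemon's report processing that alters the output gets caught the next time the suite runs.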

We actively collect device logs and metadata from users through a form on our website, which we use to update a device-capability database and our collection of test logs. Almost every time a user fills out one of these, the number of devices for which we can guarantee good performance in the future goes up. Currently it's 87 devices from 39 vendors.

We also used to routinely audit our code with splint. Not many people do this, because splint is very finicky, a pain in the ass to use, and requires you to litter your code with cryptic annotations. We eventually stopped because splint gave us randomly variable warnings across different Linux distributions and has since been obsoleted by newer static analyzers, but I believe accepting that discipline was the main reason the Coverity scan went so well. After hacking through the underbrush of false positives, I generally found that splint headed off about two potentially serious bugs per release cycle, averaging out to about one every 17 weeks.

We have a policy of not using C where a scripting language will do. Python is what we mostly use, but the particular language is not the point here (though I do like it a lot and use it in preference to other scripting languages). The point is to get away from the fertile source of bugs that is manual memory management in C. The core daemon is written in C because it has to be; a significant part of our customer base is embedded and single-board-computer developers who need to run lean and mean. But our test tools and some of our test clients are Python, and we're gradually working to retire as much of the C as possible outside the daemon in favor of scripting languages.

We have copious documentation, not just of the interfaces to the code and the wire protocol but also of the internals and of our project practices. We have a Hacker's Guide to the project philosophy, design, and code internals. We have Notes on Writing a GPSD Driver by someone who did it. Because everything is documented, the project doesn't forget things even if the individual members do.

No account of good practice can leave out the human element. In the best open-source tradition, GPSD combines the benefits of a small, highly capable core group (three developers: Chris Kuethe, Gary Miller, and myself) with about a half dozen other semi-regular contributors and a halo of casual contributors numbering in the hundreds. GPSD teaches by example about the kinds of specialization that produce good code. Here is what the core group looks like...

Chris Kuethe is our GPS domain expert. He knows the devices, the mathematics of geodesy, and where all the bodies are buried in this application area to a nearly insane level of detail. I am the systems architect — I neither match Chris's depth of domain knowledge nor want to, but it's been my role to give the GPSD codebase a strong modular architecture, design and implement our test suites and tools, design and implement our wire protocols, and push autoconfiguration as far as it could go. Gary Miller is more of a generalist who owns some particularly tricky areas of the core code and device drivers, and is extremely good at detecting bad smells in other code; he backstops Chris and myself admirably.

If this sounds like a description of a classic "surgical team" organization straight out of Fred Brooks, that's because it is. Open source changes a lot of things, and the outer circle of contributors brings huge value to the GPSD project — but some things about software development never change, and the power of teams that include a domain expert, a master architect, and a bogon detector is one of them. GPSD reinforces a lesson that is old but never stale: if you want the kind of good code that improves the whole software ecology around it, that kind of human constellation is a great place to start.

Finally, we drive out a lot of potentially bad code by eliminating configuration options. The gpsd daemon is designed to autobaud and recognize GPS or AIS reporting packets on any serial or USB device that it's handed, no questions asked. And normally, at least on Linux systems, those devices are handed to it by udev when a hotplug event fires. Though arranging this took a lot of work, there are many fewer combinations of code paths in gpsd to test (and to accumulate bugs) than there would be if the daemon had the usual semi-infinite array of knobs, switches, and config files. Because client applications don't have to give users any access to those nonexistent knobs and switches, thousands of lines of application code have never had to be written either; the simplifying effects of autoconfiguration ripple through dozens of application-development groups and all the way up the software stack to the end-user.
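To make the handoff concrete, here is a rough sketch of the kind of hotplug helper udev can run when a new device node appears: it pokes the running daemon over a local control socket and lets autoconfiguration do the rest. The socket path and the single-character add command are illustrative assumptions, not the exact helper and rules we ship:

```python
# Sketch of a hotplug helper: when udev reports a new serial/USB device,
# tell the running gpsd to adopt it; the daemon autobauds and identifies
# the reporting protocol on its own. The control-socket path and the
# "+device" command convention here are assumptions for illustration only.
import socket
import sys

CONTROL_SOCKET = "/var/run/gpsd.sock"  # assumed location of the control socket

def add_device(devpath):
    """Ask a running gpsd to start managing devpath."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(CONTROL_SOCKET)
        sock.sendall(("+" + devpath + "\n").encode("ascii"))

if __name__ == "__main__":
    # A udev rule would typically invoke a helper like this with the new
    # device node, e.g. /dev/ttyUSB0, when the hotplug event fires.
    add_device(sys.argv[1])
```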

The RFP for these essays asked software authors to explain what they'd do with a $500 prize. That's easy; we'd use it to buy test hardware. Because GPSes are wacky, idiosyncratic devices with poorly documented interfaces, testing on real hardware is vital to fully learn their quirks.