If you're looking for things to hack on, first see the TODO file in the source distribution.
If the GPSD project ever needs a slogan, it will be "Hide the ugliness!" GPS technology works, but is baroque, ugly and poorly documented. Our job is to know the grotty details so nobody else has to.
Our paradigm user, the one we have in mind when making design choices, is running navigational or wardriving software on a Linux-based mobile device (including both laptops and Android mobile devices). Some of our developers are actively interested in supporting GPS-with-SBC (Single-Board-Computer) hardware such as is used in balloon telemetry, marine navigation, and aviation.
These two use cases have similar issues in areas like client interfaces and power/duty-cycle management. The one place where they differ substantially is that in the SBC and Android cases we generally know in advance what devices will be connected and when. Thus, by designing for the less predictable laptop environment, we cover both. But it is not by accident that the source code can be built with support for only a single GPS type compiled in.
We have an important secondary interest in supporting network time service. Using GPSD to monitor a GPS with pulse-per-second capability is a perfectly serviceable and unbeatably cheap way to provide Stratum I time, and we expect this use case to increase in importance.
While we will support survey-grade GPSes when and if we have that hardware for testing, our focus will probably remain on inexpensive and readily-available consumer-grade GPS hardware, especially GPS mice.
The gpsd daemon and most gpsd tools are for Unix-like systems, since this is where they are developed and tested. Remote access over sockets via the C library and the simple command-line tools gpspipe and gpxlogger are supported on Windows platforms, although they currently have some minor limitations compared to the Unix versions: gpspipe and gpxlogger cannot be run as daemons on Windows, and gpspipe cannot write to the serial port on Windows.

The primary aim of the GPSD project is to support a simple time-and-location service for users and their geographically-aware applications.
A GPS is a device for delivering fourteen numbers: x, y, z, t, vx, vy, vz, and error estimates for each of these seven coordinates. The gpsd daemon's job is to deliver these numbers to user applications with minimum fuss. This is a "TPV" — time-position-velocity report. A GPS is a TPV oracle.
'Minimum fuss' means that the only action the user should have to take to enable location service is to plug in a GPS. The gpsd daemon, and its associated hotplug scripts or local equivalent, is responsible for automatically configuring itself. That includes autobauding, handshaking with the device, determining the correct mode or protocol to use, and issuing any device-specific initializations required.
Features (such as GPS type or serial-parameter switches) that would require the user to perform administrative actions to enable location service will be rejected. GPSes that cannot be autoconfigured will not be supported. 99% of the GPS hardware on the market in 2015 is autoconfigurable, and the design direction of GPS chipsets is such that this percentage will rise rather than fall; we deliberately choose simplicity of interface and zero administration over 100% coverage.
Here is a concrete example of how this principle applies. At least one very low-end GPS chipset (the San Jose Navigation GM-38) does not deliver correct checksums on the packets it ships to a host unless it has a fix. Typically, GPSes do not have a fix when they are plugged in, which is exactly when gpsd must recognize and autoconfigure the device. Thus, supporting this chipset would require that we either (a) disable packet integrity checking in the autoconfiguration code, making detection of other, better-behaved devices unreliable, or (b) add an invocation switch to disable packet integrity checking for that chipset alone. We refuse to do either, and do not support this chipset.
Another principal goal of the GPSD software is that it be able to demonstrate its own correctness, give technical users good tools for measuring GPS accuracy and diagnosing GPS idiosyncrasies, and provide a test framework for gpsd-using applications.
Accordingly, we support the gpsfake tool that simulates a GPS using recorded or synthetic log data. We support gpsprof, which collects accuracy and latency statistics on GPSes and the GPS+gpsd combination. And we include a comprehensive regression-test suite with the package. These tools are not accidents; they are essential to ensure that the basic GPS-monitoring code is not merely correct but demonstrably correct.
We support gpsmon, a low-level packet monitor and diagnostic tool. gpsmon is capable of tuning some device-specific control settings such as the SiRF static-navigation mode. A future direction of the project is to support diagnostic monitoring and tuning for our entire range of chipsets.
Another secondary goal of the project is to provide open-source tools for diagnostic monitoring and accuracy profiling not just of individual GPSes but of the GPS/GNSS network itself. The protocols (such as IS-GPS-200 for the satellite downlink and RTCM104 for differential-GPS corrections) are notoriously poorly documented, and open-source tools for interpreting them have in the past been hard to find and only sporadically maintained.
We aim to remedy this. Our design goal is to provide lossless translators between these protocols and readable, documented text-stream formats.
We currently provide a tool for decoding RTCM104 reports on satellite health, almanacs, and pseudorange information from differential-GPS radios and reference stations. A future direction of the project is to support an RTCM104 encoder.
The project implementation languages are C and Python. The core libgpsd libraries (and the daemon, which is a thin wrapper around them) are written in C; the test and profiling tools are written in Python, with a limited amount of glue in POSIX-conformant sh. The project avoids bash-isms.
Code in other languages will, in general, be accepted only if it supplies a language binding for the libgps or libgpsd libraries that we don't already have. This restriction is an attempt to keep our long-term maintenance problem as tractable as possible.
We require C for anything that may have to run on an embedded system. Thus, the daemon and libgpsd libraries need to stay pure C. Anything that links directly to the core libraries should also be in C, because Python's alien-type facilities are still just a little too complex and painful to be a net win for our situation.
We prefer Python anywhere we aren't required to use C by technical constraints — in particular, for test/profiling/logging tools, hotplug agents, and miscellaneous scripts. Again, this is a long-term maintainability issue; there are whole classes of potential C bugs that simply don't exist in Python, and Python programs have a drastically lower line count for equivalent capability.
Shell scripts are acceptable for test and build code that only has to run in our development and test environments, as opposed to target or production environments. Note that shell scripts should not assume bash is available but rather stick to POSIX sh; among other benefits, this helps portability to BSD systems. Generally code that will run in the Ubuntu/Debian dash can be considered safe.
Here are two related rules:
Any complexity that can be moved out of the gpsd daemon to external test or framework code doesn't belong in the daemon.
Any complexity that can be moved out of C and into a higher-level language (Python, in particular) doesn't belong in C.
Both rules have the same purpose: to move complexity and resource costs from the places in the codebase where we can least afford it to the places where it is most manageable and inflicts the least long-term maintenance burden.
Because Python is not used at runtime by the daemon, Python version-skew problems (such as the 2-to-3 transition) essentially never affect or reveal bugs in the C code.
A significant part of the reason is that in GPSD-world the notion of "target Python" is not actually meaningful for anything but a handful of test and profiling utilities. On the very rare occasions that those have had bugs (fewer than a half-dozen in the entire project history), they have generally been due to glitches in Python's OS bindings, most commonly in the socket and pty libraries.
Therefore, which Python version code such as the regression-test framework runs under is generally unimportant, so long as it runs at all. To minimize problems due to the ongoing Python 2 to 3 transition, follow the "polyglot" guidelines in Practical Python porting for systems programmers.
GPSD is written to a high quality standard, and has a defect rate that is remarkably low for a project of this size and complexity. Our first Coverity scan, in March 2007, flagged only 4 potential problems in 22,397 LOC — and two of those were false positives. This is three orders of magnitude cleaner than typical commercial software, and about half the defect density of the Linux kernel itself.
This did not happen by accident. We put a lot of effort into test tools and regression tests so we can avoid committing bad code. For committers, using those tests isn't just a good idea, it's the law — which is to say that if you make a habit of not using them when you should, your commit access will be yanked.
Before shipping a patch or committing to the repository, you should go through the following checklist:
Not breaking the regression tests is especially important. We rely on these to catch damaging side-effects of seemingly innocent but ill-thought-out changes, and to nail problems before they become user-visible.
If you are contributing a driver for a new GPS, please also do the following things:
There's a whole section on adding new drivers later in this document.
We like getting patches made using git format-patch --signoff from a repository clone, as this means we don't have to compose a change comment and attribution. The --signoff option of git format-patch adds a Signed-off-by: line to the end of your commit message. It certifies that you have the rights to submit the patch under GPSD's BSD license (see the next section and the Developer Certificate of Origin).
Failing that, prefer diff -u format, but diff -c is acceptable. Do not send patches in the default (-e or ed) mode, as they are too brittle.
When you send a patch, we expect you to do at least the first three of the same verification steps we require from our project committers. Doing all of them is better, and makes it far more likely your patch will be accepted.
The GPSD libraries are under the BSD license. Please do not send contributions with GPL attached!
The reason for this policy is to avoid making people nervous about linking the GPSD libraries to applications that may be under other licenses (such as MIT, BSD, AFL, etc.).
If you send a patch that adds a command-line option to the daemon, it will almost certainly be refused. Ditto for any patch that requires gpsd to parse a dotfile.
One of the major objectives of this project is for gpsd not to require administration — under Linux, at least. It autobauds, it does protocol discovery, and it's activated by the hotplug system. Arranging these things involved quite a lot of work, and we're not willing to lose the zero-configuration property that work gained us.
Instead of adding a command-line option to support whatever feature you had in mind, try to figure out a way that the feature can autoconfigure itself by doing runtime checks. If you're not clever enough to manage that, consider whether your feature control might be implemented with an extension to the gpsd protocol or the control-socket command set.
Here are three specific reasons command-line switches are evil:
(1) Command-line switches are often a lazy programmer's way out of writing correct adaptive logic. This is why we keep rejecting requests for a baud-rate switch and a GPS type switch — the right thing is to make the packet-sniffer work better, and if we relented in our opposition the pressure to get that right would disappear. Suddenly we'd be back to end-users having to fiddle with settings the software ought to figure out for itself, which is unacceptable.
(2) Command-line switches without corresponding protocol commands pin the daemon's behavior for its entire lifespan. Why should the user have to fix a policy at startup time and never get to change his/her mind afterwards? Stupid design...
(3) The command-line switches used for a normal gpsd startup can only be changed by modifying the hotplug script. Requiring end-users to modify hotplug scripts (or anything else in admin space) is a crash landing.
Don't create static variables in library or driver files; statics make the code non-reentrant and thread-unsafe. In practice, this means you shouldn't declare a static in any file that doesn't have a main() function of its own, and silly little test mains cordoned off by a preprocessor conditional don't count.
Instead, use the 'driver' union in gps_device_t and the gps_context_t storage area that's passed in common in a lot of the calls. These are intended as places to stash stuff that needs to be shared within a session, but would be thread-local if the code were running in a thread.
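For illustration, here is a minimal, self-contained sketch of the difference; the inner struct members and the fooGPS driver name are hypothetical stand-ins, not the real gps_device_t layout.

    /* Illustrative stand-ins only: the point is where per-session state
     * lives, not the exact field names (which are hypothetical here). */
    #include <stdio.h>

    struct gps_device_t {
        union {
            struct {
                int subtype_probed;     /* per-session driver state */
                unsigned char mode;
            } fooGPS;                   /* hypothetical driver */
        } driver;
    };

    /* Wrong: a file-scope "static int subtype_probed;" would make the
     * driver non-reentrant and shared across all sessions. */

    void foogps_handle_subtype(struct gps_device_t *session, unsigned char msg)
    {
        /* Right: the state rides along with the session it belongs to. */
        if (!session->driver.fooGPS.subtype_probed) {
            session->driver.fooGPS.subtype_probed = 1;
            session->driver.fooGPS.mode = msg;
        }
        printf("mode=%u\n", (unsigned)session->driver.fooGPS.mode);
    }

    int main(void)
    {
        struct gps_device_t session = {0};
        foogps_handle_subtype(&session, 4);
        return 0;
    }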
The best way to avoid having dynamic-memory allocation problems is not to use malloc/free at all. The gpsd daemon doesn't (though the client-side code does). Thus, even the longest-running instance can't have memory leaks. The only cost for this turned out to be embedding a PATH_MAX-sized buffer in the gpsd.h structure. Don't undo this by using malloc/free in a driver or anywhere else. Please note that this restriction includes indirect callers of malloc like strdup.
It's tempting to extract parts of packets by using a loop of the form "for (i = 0; i < len; i += sizeof(long))". Don't do that; not all integer types have the same length across architectures. A long may be 4 bytes on a 32-bit machine and 8 bytes on a 64-bit one. If you mean to skip 4 bytes in a packet, then say so (or use sizeof(int32_t)).
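gpsd's bits.h collects helpers along these lines; here is a self-contained sketch of the idea (the helper name and sample packet are mine, not taken from gpsd's sources).

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assemble a 4-byte little-endian field byte-by-byte; no dependence on
     * the host's word size or endianness. */
    uint32_t get_le_u32(const unsigned char *buf, size_t off)
    {
        return (uint32_t)buf[off]
             | ((uint32_t)buf[off + 1] << 8)
             | ((uint32_t)buf[off + 2] << 16)
             | ((uint32_t)buf[off + 3] << 24);
    }

    int main(void)
    {
        const unsigned char packet[] = {0x78, 0x56, 0x34, 0x12,
                                        0x2a, 0x00, 0x00, 0x00};
        size_t i;

        /* Step by sizeof(int32_t) (always 4), never sizeof(long), when you
         * mean "advance 4 bytes". */
        for (i = 0; i + sizeof(int32_t) <= sizeof(packet); i += sizeof(int32_t))
            printf("0x%08x\n", (unsigned)get_le_u32(packet, i));
        return 0;
    }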
Here's what you have to do when adding a field to a JSON report, or a new JSON report to the object set.
Make the daemon emit the new field(s) (see the sketch after this list).
Document it in gpsd_json.xml.
Decide whether it warrants a minor or major protocol bump. Update the appropriate macros in gpsd.h-tail. In the comment above them, add a brief description of the change.
Update the JSON parse templates in libgps_json.c. If you don't do this, clients (possibly including gpsmon) will throw an error when they trip over the new field.
There may be a unit test for parsing of the JSON object in test_json.c. If there is, it needs to be modified. If not, add one.
The hard part: the data wants to go somewhere on the client side, but doing this may involve a compatibility break in the user-visible core structures. You need to make a call on whether to extend them now or to accept that the new data will be invisible until the next structure upgrade; if you choose the latter, document that as an item in TODO.
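For the first step (making the daemon emit the new field), here is a hedged, self-contained sketch. The field name "clockbias", the helper, and the buffer handling are hypothetical illustrations, not gpsd's actual report-generator code.

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    /* Append one new attribute to a JSON report, bounds-checked.  Only emit
     * it when the value is valid (gpsd initializes float members to NaN). */
    void append_clockbias(char *reply, size_t replylen, double clockbias)
    {
        size_t used = strlen(reply);
        if (isfinite(clockbias) && used < replylen)
            (void)snprintf(reply + used, replylen - used,
                           ",\"clockbias\":%.9f", clockbias);
    }

    int main(void)
    {
        char reply[256] = "{\"class\":\"TPV\",\"mode\":3";

        append_clockbias(reply, sizeof(reply), 0.000123456);
        (void)snprintf(reply + strlen(reply), sizeof(reply) - strlen(reply), "}");
        puts(reply);
        return 0;
    }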
The C code is written with a large subset of C99. You are encouraged to use bool, designated initializers, inline and restrict. Do not assume that type specifiers will default to int.
You are allowed, but not required, to use C99 // comments.
Though C99 allows it, do not intermix declarations with executable statements within a block; that's too hard to read. Do, however, declare variables at the front of the smallest enclosing block.
The code does not use C99 variadic macros, flexible arrays, type-generic math, or type complex. Think carefully before introducing these and don't do it without discussion on the development list first.
The one non-C99 feature we allow is anonymous unions.
Do not use GCC extensions, as the code should compile with any sane C99 compiler (specifically including clang). Do not use glibc-specific features, as gpsd is portable to multiple operating systems that do not use glibc, including BSD's libc and uClibc. Instead, rely on what POSIX guarantees.
We are pretty laissez-faire about indent style and low-level C formatting. Be aware that if your patch seems disharmonious with what is around it, your code may be reformatted, so try to blend in.
You may rely on a full POSIX (in particular, POSIX.1-2001) and Single Unix Standard binding to the operating system. Someday it is possible we may have a Windows port, but we refuse to cripple or overcomplicate the codebase in advance.
Do not litter the code with undocumented magic numbers! This is especially important for buffer lengths, because unsynchronized changes to buffer lengths and the code that uses them can result in overruns and all manner of nastiness. If a magic number occurs once, a comment near the point of occurrence should explain why it has the value it does. If it occurs multiple times, make it the value of a macro that is defined in one place so that all instances can be changed sanely.
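A minimal sketch of the rule; the macro name and its value here are hypothetical examples, not constants from gpsd.h.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example value: longest sentence we accept, plus room for
     * the terminating NUL.  Explained once, used everywhere. */
    #define SENTENCE_MAX 83

    int main(void)
    {
        char inbuf[SENTENCE_MAX];
        char scratch[SENTENCE_MAX];     /* sized in lockstep with inbuf */

        (void)strncpy(inbuf, "$GPGGA,123519,4807.038,N,*47", sizeof(inbuf) - 1);
        inbuf[sizeof(inbuf) - 1] = '\0';
        (void)memcpy(scratch, inbuf, sizeof(scratch));
        puts(scratch);
        return 0;
    }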
gpsd - I'm speaking of the daemon itself, not the clients and test tools - runs in a lot of contexts where it provides a life-critical service. We can't allow it to faint every time it gets a case of the vapors. Instead, the right thing is to log a problem and soldier on. If the fault condition is in the logging machinery itself, the right thing is to just soldier on.
Accordingly, there are very few asserts in the core code. Whenever possible we should avoid adding more.
Here's the policy for the daemon and its service libraries. This doesn't apply to client code and tools, which can be more relaxed because the cost of having them go tits-up is lower and recovery is easier.
Use asserts sparingly.
One valid use is to pass hints to static analyzers. This sort of assert should be guarded with, e.g., #ifdef __COVERITY__ so we get the benefit of asserting the invariant without the risk of the assertion failing in production (see the sketch after this list).
Another use is to document should-never-happen invariants of the code. Write this sort only if you are confident it will never fire in a production release; it's there to catch when changes in a *development* revision break an invariant.
Outside the above two cases, do not assert when you can log an internal error and recover.
Never use assert() to check resource exhaustion conditions or other dynamic errors.
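A compact sketch of this policy, assuming a hypothetical log_error() helper in place of the daemon's real logging calls:

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Stand-in for whatever error logging the daemon uses. */
    void log_error(const char *msg)
    {
        (void)fprintf(stderr, "gpsd: ERROR: %s\n", msg);
    }

    int parse_field(const char *buf, size_t len)
    {
        /* Case 1: a hint for static analyzers only, compiled out elsewhere. */
    #ifdef __COVERITY__
        assert(buf != NULL);
    #endif
        /* Case 2 would be a should-never-happen invariant checked only while
         * a development revision is being shaken out. */

        /* Everything else: log an internal error and recover, don't abort. */
        if (buf == NULL || len == 0) {
            log_error("parse_field: empty input, skipping");
            return -1;
        }
        return (int)(unsigned char)buf[0];
    }

    int main(void)
    {
        printf("%d\n", parse_field("x", 1));
        printf("%d\n", parse_field(NULL, 0));
        return 0;
    }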
Style point: avoid using git hashes in change comments. They'll become at best a pain in the ass and at worst meaningless if the project ever has to change version control systems again. Better to refer by committer and date.
scons debug=yes will build with the right flags to enable symbolic debugging.
There is a script called logextract in the devtools directory of the source distribution that you can use to strip clean NMEA out of the log files produced by gpsd. This can be useful if someone ships you a log that they allege caused gpsd to misbehave.
gpsfake enables you to repeatedly feed a packet sequence to a gpsd instance running as non-root. Watching such a session with gdb(1) should smoke out any repeatable bug pretty quickly.
Almost all the C programs have a -D option that enables logging of progress messages to standard error. In gpsd itself, this ups the syslogging level if it is running in background; see the LOG_* defines in gpsd.h to get an idea of what the log levels do. Most of the test clients accept this switch to enable progress messages from the libgps code; you can use it, for example, to watch what the client-side parser for the wire protocol is actually doing.
There is a timing policy flag in the WATCH command that will cause it to emit timing information at the end of every reporting cycle from the sensor.
The profiling code relies on the GPS repeatedly spinning through a three-phase cycle: the receiver computes a fix, transmits it as a burst of sentences, and then goes quiet until the next fix. You can watch this cycle in gpsmon. At higher speeds, or with (slightly) more compact binary protocols, quiet time increases. As long as we avoid profiling at 4800bps, then, we can identify the start of a reporting cycle (SOR) when data becomes available after a quiet time of 250msec or greater.
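As an illustration of that rule, here is a minimal, self-contained sketch (not the daemon's actual code) that flags SOR when data arrives after a gap of 250 milliseconds or more; the arrival times are synthetic.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define QUIET_TIME_SEC 0.250

    /* Returns true when this arrival follows a quiet gap long enough to be
     * treated as the start of a new reporting cycle. */
    bool is_start_of_cycle(double now, double *last_arrival)
    {
        bool sor = (now - *last_arrival) >= QUIET_TIME_SEC;
        *last_arrival = now;
        return sor;
    }

    int main(void)
    {
        double last = 0.0;
        double arrivals[] = {0.10, 0.15, 0.20, 1.10, 1.16};  /* seconds */
        size_t i;

        for (i = 0; i < sizeof(arrivals) / sizeof(arrivals[0]); i++)
            printf("t=%.2f  SOR=%d\n", arrivals[i],
                   (int)is_start_of_cycle(arrivals[i], &last));
        return 0;
    }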
Here is a diagram of one second in the life of a typical GPS. The relative horizontal lengths of the boxes have roughly the proportions of actual timing behavior on an NMEA or SiRF-binary GPS at 19200bps, except that on this scale the typical gpsd-to-client transmission time would be nearly too small to see.
While transmission is going on, gpsd is reading the sentence burst and interpreting it. At some point in the cycle ('RTIME' in the above diagram) gpsd will recognize that it has enough data to ship a report to clients. This might be just after the last packet in the cycle has arrived and been analyzed, but could be earlier if the device emits packets late in the cycle that gpsd doesn't use.
Here are the extra timing attributes. They measure components of the latency between the GPS' time measurement and when the sentence data became available to the client. For these latency timings to be meaningful, the GPS has to ship timestamps with sub-second precision. SiRF-II and Evermore chipsets ship time with 0.01 resolution; iTalk and Navcom chipsets ship times with 0.001 resolution.
sor: SOR as a float quantity in seconds of Unix time.

chars: Total characters transmitted between SOC and RTIME. Note: this could be fewer than the number of characters in the entire cycle; the packet-sniffer might assemble later packets which are not analyzed by the driver.

sats: Number of satellites used in the cycle's fix.

rtime: RTIME as a float quantity in seconds of Unix time, when the daemon created the TPV for transmission.

Note: You cannot assume that the entire SOC to RTIME interval is processing time! SOC is the GPS's report of its fix time, while RTIME is taken from your system clock. Some of the delta may be drift between your ntp-corrected system clock and the GPS's atomic-clock time.
The distribution includes a Python script, gpsprof, that uses the timing support to collect profiling information from a running GPS instance. The gpsprof script creates latency plots using gnuplot(1). It can also report the raw data.
The gpsd code is well-tested on IA32 and x86-64 (amd64) chips, and also on ARM, PPC, and MIPS. Thus, it's known to work on mainstream chips of either 32 or 64 bits and either big-endian or little-endian representation with IEEE 754 floating point.
Handling of NMEA devices should not be sensitive to the machine's internal numeric representations. However, because the binary-protocol drivers have to mine bytes out of the incoming packets and mung them into fixed-width integer quantities, there could potentially be issues on weird machines. The regression tests should spot these.
If you are porting to a true 16-bit machine, or something else with an unusual set of data type widths, take a look at bits.h. We've tried to collect all the architecture dependencies here.
There are two useful ways to think about the GPSD architecture. One is in terms of the layering of the software, the other is in terms of the normal flow of information through it.
The gpsd code breaks naturally into four pieces: the drivers, the packet sniffer, the core library, and the multiplexer. We'll describe these from the bottom up.
The drivers are essentially user-space device drivers for each kind of chipset we support. The key entry points are methods to parse a data packet into time-position-velocity or status information, change its mode or baud rate, probe for device subtype, etc. See Driver Architecture for more details about them.
The packet sniffer is responsible for mining data packets out of serial input streams. It's basically a state machine that's watching for anything that looks like a properly checksummed packet. Because devices can hotplug or change modes, the type of packet that will come up the wire from a serial or USB port isn't necessarily fixed forever by the first one recognized.
The core library manages a session with a GPS device. The key entry points are (a) starting a session by opening the device and reading data from it, hunting through baud rates and parity/stopbit combinations until the packet sniffer achieves synchronization lock with a known packet type, (b) polling the device for a packet, and (c) closing the device and wrapping up the session.
A key feature of the core library is that it's responsible for switching each GPS connection to using the correct device driver depending on the packet type that the sniffer returns. This is not configured in advance and may change over time, notably if the device switches between different reporting protocols (most chipsets support NMEA and one or more vendor binary protocols, and devices like AIS receivers may report packets in two different protocols on the same wire).
Finally, the multiplexer is the part of the daemon that handles client sessions and device assignment. It is responsible for passing TPV reports up to clients, accepting client commands, and responding to hotplug notifications. It is essentially all contained in the gpsd.c source file.
The first three components (other than the multiplexer) are linked together in a library called libgpsd and can be used separately from the multiplexer. Our other tools that talk to GPSes directly, such as gpsmon and gpsctl, do it by calling into the core library and driver layer directly.
Under some circumstances, the packet sniffer by itself is separately useful. gpscat uses it without the rest of the lower layer in order to detect and report packet boundaries in raw data. So does gpsfake, in order to chunk log files so they can be fed to a test instance of the daemon packet-by-packet with something approximating realistic timing.
Essentially, gpsd spins in a loop polling for input from several sources, including client sockets, attached GPS devices, and the special control socket.
The daemon only connects to a GPS when clients are connected to it. Otherwise all GPS devices are closed and the daemon is quiescent.
All writes to client sockets go through throttled_write(). This code addresses two cases. First, the client has dropped the connection. Second, the client is connected but not picking up data, and our buffers are backing up. If we let this continue, the write buffers will fill and the effect will be denial-of-service to clients that are better behaved.
Our strategy is brutally simple and takes advantage of the fact that GPS data has a short shelf life. If the client doesn't pick it up within a few minutes, it's probably not useful to that client. So if data is backing up to a client, drop that client. That's why we set the client socket to non-blocking.
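A hedged sketch of that policy follows; detach_client() and the function name are placeholders for the daemon's real cleanup path, not the actual throttled_write() implementation.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Placeholder for the daemon's real client-cleanup path. */
    void detach_client(int fd)
    {
        (void)close(fd);
        (void)fprintf(stderr, "client on fd %d dropped\n", fd);
    }

    /* Sketch of the policy: the client socket was set non-blocking earlier,
     * so a short or failed write means the client is gone or not keeping up.
     * GPS data has a short shelf life, so drop the client rather than buffer. */
    ssize_t throttled_write_sketch(int fd, const char *buf, size_t len)
    {
        ssize_t status = write(fd, buf, len);

        if (status == (ssize_t)len)
            return status;                          /* the normal case */
        if (status < 0)
            (void)fprintf(stderr, "write to fd %d failed: %s\n",
                          fd, strerror(errno));
        detach_client(fd);
        return status;
    }

    int main(void)
    {
        int fds[2];

        if (pipe(fds) != 0)
            return 1;
        (void)throttled_write_sketch(fds[1], "{\"class\":\"TPV\"}\r\n", 17);
        (void)close(fds[0]);
        (void)close(fds[1]);
        return 0;
    }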
For similar reasons, we don't try to recover from short writes to the GPS, e.g. of DGPS corrections. They're packetized, so the device will ignore a fragment, and there will generally be another correction coming along shortly. Fixing this would require one of two strategies:
Buffer any data not shipped by a short write for retransmission. This would require us to use malloc and would just be begging for memory leaks.
Block till select indicates the hardware or lower layer is ready for write. This could introduce arbitrary delays for time-sensitive data.
So far, as of early 2009, we've only seen short writes on Bluetooth devices under Linux. It is not clear whether this is a problem with the Linux Bluetooth driver (it could be failing to coalesce and buffer adjacent writes properly) or with the underlying hardware (Bluetooth devices tend to be cheaply made and dodgy in other respects as well).
GPS input updates an internal data structure which has slots in it for all the data you can get from a GPS. Client commands mine that structure and ship reports up the socket to the client. DGPS data is passed through, raw, to the GPS.
The trickiest part of the code is the handling of input sources in gpsd.c itself. It has to tolerate clients connecting and disconnecting at random times, and the GPS being unplugged and replugged, without leaking file descriptors; it must also arrange for the GPS to be open when and only when clients are active.
The special control socket is primarily there to be used by hotplug facilities like Linux udev. It is intended to be written to by scripts activated when a relevant device (basically, a USB device with one of a particular set of vendor IDs) is connected to or disconnected from the system. On receipt of these messages, gpsd may add a device to its pool, or remove one and (if possible) shift clients to a different one.
The reason these scripts have to look for vendor IDs is that USB has no GPS class. Thus, GPSes present the ID of whatever serial-to-USB converter chip they happen to be using. Fortunately there are fewer types of these in use than there are GPS chipsets; in fact, just two of them account for 80% of the USB GPS market and don't seem to be used by other consumer-grade devices.
Part of the job gpsd does is to minimize the amount of time attached GPSes are in a fully powered-up state. So there is a distinction between initializing the gpsd internal channel block structure for managing a GPS device (which we do when the hotplug system tells us it's available) and activating the device (when a client wants data from it), which actually involves opening it and preparing to read data from it. This is why gpsd_init() and gpsd_activate() are separate library entry points.
There is also a distinction between deactivating a device (which we do when no users are listening to it) and finally releasing the gpsd channel block structure for managing the device (which typically happens either when gpsd terminates or the hotplug system tells gpsd that the device has been disconnected). This is why gpsd_deactivate() and gpsd_wrap() are separate entry points.
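To illustrate the split, here is a self-contained sketch using stand-in names; the real entry points are gpsd_init(), gpsd_activate(), gpsd_deactivate(), and gpsd_wrap(), whose exact signatures are deliberately not reproduced here.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative stand-ins only, not the real gpsd entry points.  The
     * point is the split between allocating the channel block (init/wrap)
     * and opening/closing the device (activate/deactivate). */
    struct device_session { bool allocated; bool open; };

    void session_init(struct device_session *s)       { s->allocated = true; }
    int  session_activate(struct device_session *s)   { s->open = true; return 0; }
    void session_deactivate(struct device_session *s) { s->open = false; }
    void session_wrap(struct device_session *s)       { s->allocated = false; }

    int main(void)
    {
        struct device_session s = {false, false};

        session_init(&s);        /* hotplug says the device exists: cheap, no power drawn */
        (void)session_activate(&s); /* first client wants data: open and start reading */
        printf("open=%d\n", (int)s.open);
        session_deactivate(&s);  /* clients gone: close the device, keep the channel block */
        session_wrap(&s);        /* unplugged or daemon exit: release the channel block */
        return 0;
    }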
gpsd has to configure some kinds of GPS devices when it recognizes them; this is what the event_identify and event_configure hooks are for. gpsd tries to clean up after itself, restoring settings that were changed by the configurator method; this is done by gpsd_deactivate(), which fires the deactivate event so the driver can revert settings.
One of the design goals for gpsd is to be as near zero-configuration as possible. Under most circumstances, it doesn't require either the GPS type or the serial-line parameters to connect to it to be specified. Presently, here's roughly how the autoconfig works. (I say "roughly" because at any given time this sequence may leave out recently-added binary packet types.)

gpsd grabs packets until the sniffer sees either a well-formed and checksum-verified NMEA packet, a well-formed and checksum-verified packet of one of the binary protocols, or one of the two special trigger strings EARTHA or ASTRAL, or it fills a long buffer with garbage (in which case it steps to the next baud-rate/parity/stop-bit combination). The outcome is that we know exactly what we're looking at, without any driver-type or baud rate options.
(The above sequence of steps may be out of date. If so, it will probably be because we have added more recognized packet types and drivers.)
To estimate errors (which we must do if the GPS isn't nice and reports them in meters with a documented confidence interval), we need to multiply an estimate of User Equivalent Range Error (UERE) by the appropriate dilution factor.
The UERE estimate is usually computed as the square root of the sum of the squares of individual error estimates from a physical model. The following is a representative physical error model for satellite range measurements:
From R. B. Langley's 1997 "The GPS error budget", GPS World, Vol. 8, No. 3, pp. 51-56:
Atmospheric error — ionosphere | 7.0m
Atmospheric error — troposphere | 0.7m
Clock and ephemeris error | 3.6m
Receiver noise | 1.5m
Multipath effect | 1.2m
From Hofmann-Wellenhof et al. (1997), "GPS: Theory and Practice", 4th Ed., Springer:
Code range noise (C/A) | 0.3m
Code range noise (P-code) | 0.03m
Phase range | 0.005m
We're assuming these are 2-sigma error ranges. This needs to be checked in the sources. If they're 1-sigma the resulting UEREs need to be doubled.
Carl Carter of SiRF says: "Ionospheric error is typically corrected for at least in large part, by receivers applying the Klobuchar model using data supplied in the navigation message (subframe 4, page 18, Ionospheric and UTC data). As a result, its effect is closer to that of the troposphere, amounting to the residual between real error and corrections."
"Multipath effect is dramatically variable, ranging from near 0 in good conditions (for example, our roof-mounted antenna with few if any multipath sources within any reasonable range) to hundreds of meters in tough conditions like urban canyons. Picking a number to use for that is, at any instant, a guess."
"Using Hoffman-Wellenhoff is fine, but you can't use all 3 values. You need to use one at a time, depending on what you are using for range measurements. For example, our receiver only uses the C/A code, never the P code, so the 0.03 value does not apply. But once we lock onto the carrier phase, we gradually apply that as a smoothing on our C/A code, so we gradually shift from pure C/A code to nearly pure carrier phase. Rather than applying both C/A and carrier phase, you need to determine how long we have been using the carrier smoothing and use a blend of the two."
On Carl's advice we would apply tropospheric error twice, and use the largest Wellenhof figure:
UERE = sqrt(0.7^2 + 0.7^2 + 3.6^2 + 1.5^2 + 1.2^2 + 0.3^2) = 4.1
DGPS corrects for atmospheric distortion, ephemeris error, and satellite/receiver clock error. Thus:
UERE = sqrt(1.5^2 + 1.2^2 + 0.3^2) = 1.8
which we round up to 2 (95% confidence).
Due to multipath uncertainty, Carl says 4.1 is too low and recommends a non-DGPS UERE estimate of 8 (95% confidence). That's what we use.
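Putting the pieces together, here is a minimal sketch of the resulting error model; the function and constant names are illustrative, but the 8-meter and 2-meter UERE figures and the DOP scaling come from the discussion above.

    #include <stdio.h>

    #define UERE_NO_DGPS 8.0   /* meters, 95% confidence */
    #define UERE_DGPS    2.0   /* meters, 95% confidence */

    /* Estimated horizontal error: dilution of precision times UERE. */
    double horizontal_error(double hdop, int dgps_active)
    {
        return hdop * (dgps_active ? UERE_DGPS : UERE_NO_DGPS);
    }

    int main(void)
    {
        printf("HDOP 1.2, no DGPS: +/- %.1f m\n", horizontal_error(1.2, 0));
        printf("HDOP 1.2, DGPS:    +/- %.1f m\n", horizontal_error(1.2, 1));
        return 0;
    }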
The project is presently kept in a git repository. Up until mid-August 2004 (r256) it was kept in CVS, which was mechanically upconverted to Subversion. On 12 March 2010 the Subversion repository was converted to git. The external releases from the Subversion era still exist as tags.
GPSD is designed to be relatively easily extensible to support new sensor types and sentences. Still, there's a lot of detail work involved. For your patch to be accepted, you need to do enough of the steps that the maintainers won't have to spend large amounts of time cleaning up after you. Here are the things we'll be looking for.
Your first and most obvious step will be driver support. You'll need to parse the sentences coming in from your sensor and unpack them into a substructure that lives inside struct gps_data_t in gps.h.
In the most general case this might require you to write a new driver. We won't cover that case here; more usually what you'll be doing is adding support for a few new NMEA sentence types.
Usually you'll be able to model your sentence parsing on a handler for one of the existing sentence types in driver_nmea0183.c. The part of this that requires the most time and care is actually designing the structures to unpack the sentence data into.
Do not casually modify existing structures. Doing this causes an ABI break and annoys our application developers. If you must modify an existing structure, put your new members at the end of the structure rather than at the beginning or in the middle.
Be aware that bits in the data-validity mask are a scarce resource and claim as few of them as possible, no more than one per sentence type. If you are adding support for a sensor class with a particularly complex repertoire of sentences, please claim only one bit for the sensor type, then have your own variant mask or sentence-type field in the substructure for that sentence. (See as an example the way AIVDM sentences from Marine AIS sensors are handled.)
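For example (a purely hypothetical sensor class; WEATHER_SET, weather_t, and the bit position are invented for illustration, not names from gps.h):

    #include <stdio.h>

    #define WEATHER_SET (1u << 20)          /* the one mask bit this sensor claims */

    struct weather_t {
        enum { WX_NONE, WX_WIND, WX_TEMP } subtype;  /* which sentence arrived */
        double wind_speed;                  /* valid when subtype == WX_WIND */
        double temperature;                 /* valid when subtype == WX_TEMP */
    };

    unsigned handle_wind_sentence(struct weather_t *wx, double knots)
    {
        wx->subtype = WX_WIND;
        wx->wind_speed = knots;
        return WEATHER_SET;                 /* still only one mask bit consumed */
    }

    int main(void)
    {
        struct weather_t wx = {WX_NONE, 0.0, 0.0};
        unsigned mask = handle_wind_sentence(&wx, 12.5);

        printf("mask=%#x subtype=%d speed=%.1f\n",
               mask, (int)wx.subtype, wx.wind_speed);
        return 0;
    }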
Your next step is designing and implementing the code that dumps the data from your sensor as JSON objects. In general each new sentence type will need to become a distinct JSON object - but there might be AIS-like exceptions if your sensor has particularly complex behavior.
This part is not usually complicated. The biggest design issue is choosing good names for your object classes and field names; once you've done that, writing the dump code is pretty easy.
Be aware that the JSON dialect available for report objects is restricted in a couple of ways. You cannot use the JSON null value, and arrays must be homogeneous - that is, every element of an array must have the same attribute set. These restrictions enable the JSON content to fit in C structures.
Support for subobjects and subarrays is available but coding for these on the client side is subtle and tricky. Avoid using these if possible; they'll make more work for you on the client side.
You must supply at least one packet log from the device containing typical data, to be added to our regression-test suite.
This is absolutely required. We maintain GPSD's quality by being very rigorous about regression testing. A device for which we can't test the code's correctness is a device we won't try to support.
See the FAQ material on annotating a test log for how to do this properly.
It's not enough that the daemon emits JSON objects corresponding to your sentences. Clients need to be able to handle these, and that means our client libraries need to know how to unpack the JSON into client-side versions of the data structures updated at driver level inside the daemon.
This is actually the trickiest part of adding a new sentence. For bindings other than the core C one the binding maintainers usually handle it, but you must write the C support yourself.
This will require that you learn how to tell GPSD's internal JSON parser how to unpack your objects. This will come down to composing C initializers that control the parse; look in libgps_json.c for examples of how to do this.
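To give the flavor of those initializers, here is a self-contained stand-in; it is not gpsd's real microjson declarations, and the attribute and field names are invented, so check json.h and libgps_json.c for the actual types and macros.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct fix { double clockbias; };                /* hypothetical target field */

    /* Shape of a parse-template entry: keyed by attribute name, pointing at
     * where the decoded value should land, with a default for when absent. */
    struct attr_template {
        const char *attribute;
        double     *addr;
        double      dflt;
    };

    int main(void)
    {
        struct fix fix;
        const struct attr_template tpv_template[] = {
            {"clockbias", &fix.clockbias, NAN},
            {NULL, NULL, 0.0},                       /* table terminator */
        };
        const struct attr_template *t;

        /* A real parser walks the template for each attribute it sees; here
         * we just apply the defaults and then fake one decoded value. */
        for (t = tpv_template; t->attribute != NULL; t++)
            *t->addr = t->dflt;
        *tpv_template[0].addr = strtod("0.000123", NULL);
        printf("clockbias=%.6f\n", fix.clockbias);
        return 0;
    }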
Once you have your JSON emitted by the daemon and parsed by libgps, you must describe it on the gpsd_json(8) page. This is required. The job isn't finished until the documentation is done.
Audit your code with cppcheck and scan-build (there are productions in the SConstruct file). Patches that are not just clean but properly annotated for static checking will give the maintainers warm fuzzy feelings and go right to the front of the queue.
Adding sentence support to the Python binding is usually trivial; if you've stuck to a simple enough design of your JSON object(s) you may not have to do any work at all. If you have any Python chops, looking into this yourself will make the maintainers happy.
Enhancing the test clients so they can display data from your new sentence type is a good thing, but not required.
Because of limitations in various GPS protocols (e.g., they were designed by fools who weren't looking past the ends of their noses) this code unavoidably includes some assumptions that will turn around and bite on various future dates.
The three specific problems are:
Because of the first problem, the receiver's notion of the year may reset to the year of the last zero week if it is cold-booted on a date after a rollover. This can have side effects:
The new 13-bit week number is only provided by the new "CNAV" data, which in turn is (or will be) available only in newly added GPS signals. Based on the carrier frequencies used, only the newest of the new signals (L1C) will be available to common civilian receivers, even with compatible hardware and firmware. This signal is unavailable from satellites earlier than Block III, which are currently (July 2016) not expected to begin to launch earlier than September 2016. Given that it takes years to launch a full constellation of satellites, it's highly unlikely that CNAV data with "operational" status will be available to common civilian receivers for some years yet.
For these reasons, GPSD needs the host computer's system clock to be accurate to within one second.
When debugging time and date issues, you may find an interactive GPS Date Calendar useful.
The hotplug interface works pretty nicely for telling gpsd which device to look at, at least on Fedora and Ubuntu Linux machines. But it's Linux-specific. OpenBSD (at least) features a hotplug daemon with similar capabilities. We ought to do the right thing there as well.
Hotplug is nice, but on Linux it appears to be a moving target. For help debugging a hotplug problem, see Udev Hotplug Troubleshooting.
Between versions 2.16 and 2.20, hotplugging was handled in the most obvious way, by allowing the F command to declare new GPS devices for gpsd to look at. Because gpsd ran as root, this had problems:
The conclusion was inescapable. Switching among and probing devices that gpsd already knows about can be an unprivileged operation, but editing gpsd's device list must be privileged. Hotplug scripts should be able to do it, but ordinary clients should not.
Adding an authentication mechanism was considered and rejected (can you say "can of big wriggly worms"?). Instead, there is a separate control channel for the daemon, only locally accessible, only recognizing "add device" and "remove device" commands.
The channel is a Unix-domain socket owned by root, so it has file-system protection bits. An intruder would need root permissions to get at it, in which case you'd have much bigger problems than a spoofed GPS.
More generally, gpsd certainly needs to treat command input as untrusted, and for safety's sake it should treat GPS data as untrusted too (in particular, this means never assuming that either source won't try to overflow a buffer).
Daemon versions after 2.21 drop privileges after startup, setting UID to "nobody" and GID to whichever group owns the GPS device specified at startup time — or, if it doesn't exist, the system's lowest-numbered TTY device named in PROTO_TTY. It may be necessary to change PROTO_TTY in gpsd.c for non-Linux systems.
This section explains the conventions drivers for new devices should follow.
Internally, gpsd supports multiple GPS types. All are represented by driver method tables; the main loop knows nothing about the driver methods except when to call them. At any given time one driver is active; by default it's the NMEA one.
To add a new device, populate another driver structure and add it to the null-terminated array in drivers.c.
Unless your driver is a nearly trivial variant on an existing one, it should live in its own C source file named after the driver type. Add it to the libgpsd_sources name list in the SConstruct file.
The easiest way to write a driver is probably to copy the driver_proto.c file in the source distribution, change names appropriately, and write the guts of the analyzer and writer functions. Look in gpsutils.c before you do; driver helper functions live there. Also read some existing drivers for clues.
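As a shape-of-the-thing sketch, here is a toy method table; the field set and names are invented stand-ins, and driver_proto.c shows the real gps_type_t layout.

    #include <stddef.h>
    #include <stdio.h>

    struct device;                                  /* opaque session handle */

    /* Illustrative stand-in for a driver method table; copy driver_proto.c
     * for the real field set and naming. */
    struct driver_sketch {
        const char *type_name;                      /* shown in logs and to clients */
        const char *trigger;                        /* NMEA string that selects it */
        int       (*parse_packet)(struct device *); /* unpack one packet */
    };

    int foogps_parse(struct device *session)
    {
        (void)session;
        return 0;
    }

    const struct driver_sketch foogps_driver = {
        "FooGPS binary",                            /* hypothetical chipset */
        "$PFOO",
        foogps_parse,
    };

    /* Null-terminated, like the real table in drivers.c. */
    const struct driver_sketch *driver_table[] = { &foogps_driver, NULL };

    int main(void)
    {
        const struct driver_sketch **dp;

        for (dp = driver_table; *dp != NULL; dp++)
            printf("driver: %s (trigger %s)\n", (*dp)->type_name, (*dp)->trigger);
        return 0;
    }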
You can read an implementer's Notes On Writing A GPSD Driver.
There's a second kind of driver architecture for gpsmon, the real-time packet monitor and diagnostic tool. It works from monitor-object definitions that include a pointer to the device driver for the GPS type you want to monitor. See monitor_proto.c for a prototype and technical details.
It is not necessary to add a driver just because your NMEA GPS wants some funky initialization string. Simply ship the string in the initializer for the default NMEA driver. Because vendor control strings live in vendor-specific namespaces (PSRF for SiRF, PGRM for Garmin, etc.) your initializing control string will almost certainly be ignored by anything not specifically watching for it.
Some mode-changing commands have a time field that initializes the GPS clock. If the designers were smart, they included a control bit that allows the GPS to retain its clock value (and previous fix, if any) and lets you leave those fields empty (sometimes this is called "hot start").
If the GPS-Week/TOW fields are required, as on the Evermore chip, don't just zero them. GPSes do eventually converge on the correct time when they've tracked the code from enough satellites, but the time required for convergence is related to how far off the initial value is. Most modern receivers can cold start in 45 seconds given good reception; under suboptimal conditions this can take upwards of ten minutes. So make a point of getting the time of week right.
Drivers are invoked in one of three ways: (1) when the NMEA driver notices a trigger string associated with another driver, (2) when the packet state machine in packet.c recognizes a special packet type, or (3) when a probe function returns true during device open.
Each driver may have a trigger string that the NMEA interpreter watches for. When that string is recognized at the start of a line, the interpreter switches to its driver.
A good thing to send from the NMEA configure-event code is probe strings. These are strings which should elicit an identifying response from the GPS that you can use as a trigger string for a native-mode driver, or a response which has an identifiable binary packet type.
Don't worry about probe strings messing up GPSes they aren't meant for. In general, all GPSes have rather rigidly defined packet formats with checksums. Thus, for this probe to look legal in a different binary command set, not only would the prefix and any suffix characters have to match, but the checksum algorithm would have to be identical.
Incoming characters from the GPS device are gathered into packets by an elaborate state machine in packet.c. The purpose of this state machine is so gpsd can autobaud and recognize GPS types automatically. The other way for a driver to be invoked is for the state machine to recognize a special packet type associated with the driver. It will look through the list of drivers compiled in to find the (first) one that handles that packet type.
If you have to add a new packet type to packet.c, add tests for the type to the test_packet.c code.
Probe-detect methods are intended for drivers that don't use the packet getter because they read from a device with special kernel support. See the Garmin binary driver for an example.
Your driver should put new data from each incoming packet or sentence in the 'gpsdata' member of the GPS (fixes go in the 'newdata' member), and return a validity flag mask telling what members were updated (all float members are initially set to not-a-number as well). There is driver-independent code that will be responsible for merging that new data into the existing fix. To assist this, the CYCLE_START_SET flag is special. Set this when the driver returns the first timestamped message containing fix data in an update cycle. (This excludes satellite-picture messages and messages about GPS status that don't contain fix data.)
Your packet parser must return field-validity mask bits (using the _SET macros in gps.h), suitable to be put in session->gpsdata.valid. The watcher-mode logic relies on these as its way of knowing what to publish. Also, you must ensure that gpsdata.fix.mode is set properly to indicate fix validity after each message; the framework code relies on this. Finally, you must set gpsdata.status to indicate when DGPS fixes are available, whether through RTCM or SBAS (eg WAAS).
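A minimal sketch of that contract follows; the *_SET values and MODE_3D here are local stand-ins for the real definitions in gps.h, and the message handler is hypothetical.

    #include <stdio.h>

    #define TIME_SET        (1u << 1)      /* stand-ins for the gps.h macros */
    #define LATLON_SET      (1u << 2)
    #define MODE_SET        (1u << 3)
    #define CYCLE_START_SET (1u << 4)
    #define MODE_3D         3

    struct fix_sketch { double lat, lon; int mode; };

    unsigned parse_position_msg(struct fix_sketch *fix, double lat, double lon)
    {
        unsigned mask = 0;

        fix->lat = lat;
        fix->lon = lon;
        fix->mode = MODE_3D;               /* framework code relies on fix.mode */
        mask |= TIME_SET | LATLON_SET | MODE_SET;
        /* First timestamped fix-bearing message of the update cycle: */
        mask |= CYCLE_START_SET;
        return mask;                       /* merged into session->gpsdata.valid */
    }

    int main(void)
    {
        struct fix_sketch fix;

        printf("mask=%#x\n", parse_position_msg(&fix, 40.0, -75.0));
        return 0;
    }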
Your packet parser is also responsible for setting the tag field in the gps_data_t structure. The packet getter will set the sentence-length for you; it will be raw byte length, including both payload and header/trailer bytes.
Note, also, that all the timestamps your driver puts in the session structure should be UTC (with leap-second corrections), not just Unix seconds since the epoch. The report-generator function for D does not apply a timezone offset.
gpsd drivers are expected to report position error estimates with a 95% confidence interval. A few devices (Garmins and Zodiacs) actually report error estimates. For the rest we have to compute them using an error model.
Here's a table that explains how to convert from various confidence interval units you might see in vendor documentation.
sqr(alpha) | Probability | Notation
1.00 | 39.4% | 1-sigma or standard ellipse
1.18 | 50.0% | Circular Error Probable (CEP)
1.414 | 63.2% | Distance RMS (DRMS)
2.00 | 86.5% | 2-sigma ellipse
2.45 | 95.0% | 95% confidence level
2.818 | 98.2% | 2DRMS
3.00 | 98.9% | 3-sigma ellipse
There are constants in gpsd.h for these factors.
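For example, a sketch of rescaling a vendor figure quoted as CEP (50%) to gpsd's 95% convention using the factors in the table; the constant and function names here are illustrative, not the ones in gpsd.h.

    #include <stdio.h>

    #define FACTOR_CEP 1.18    /* 50% Circular Error Probable */
    #define FACTOR_95  2.45    /* 95% confidence level */

    /* Rescale by the ratio of the two confidence factors. */
    double cep_to_95(double cep_meters)
    {
        return cep_meters * (FACTOR_95 / FACTOR_CEP);
    }

    int main(void)
    {
        /* A receiver quoting "3 m CEP" is claiming roughly this at 95%: */
        printf("%.1f m (95%% confidence)\n", cep_to_95(3.0));
        return 0;
    }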
Any time you add support for a new GPS type, you should also send us a representative log for your GPS. This will help ensure that support for your device is never broken in any gpsd release, because we will run the full regression before we ship.
The correct format for a capture file is described in the FAQ entry on reporting bugs.
See the header comment of the gpsfake.py module for more about the logfile format.
An ideal log file for regression testing would include an initial portion during which the GPS has no fix, a portion during which it has a fix but is stationary, and a portion during which it is moving.
The new protocol based on JSON (JavaScript Object Notation) shipped in 2.90.
A major virtue of JSON is its extensibility. There are lots of other things a sensor wedded to a GPS might report that don't fit the position-velocity-time model of the oldstyle O report. Depth of water. Temperature of water. Compass heading. Roll. Pitch. Yaw. We've already had requests to handle some of these for NMEA-emitting devices like magnetic compasses (which report heading via a proprietary TNTHTM sentence) and fish finders (which report water depth and temperature via NMEA DPT and MTW sentences). JSON gives a natural way to add ad-hoc fields, and we expect to exploit that in the future.
Chris Kuethe has floated the following list requests for discussion:
?almanac -> poll the almanac from the receiver
?ephemeris -> poll the ephemeris from the receiver
?assist -> load assistance data (time, position, etc) into the receiver
?raw -> get a full dump of the last measurement cycle... at least clock, doppler, pseudorange and carrier phase
?readonly -> get/set read-only mode; no screwing up bluetooth devices
?listen -> set the daemon's bind address; added privacy for laptop users
?port -> set the daemon's control port; used by the regression tests, at least
?debug -> set/modify the daemon's debug level, including after launch
Things we've considered doing and rejected.
See the discussion of the buffering problem, above. The "Buffer all, report then clear on start-of-cycle" policy would introduce an unpleasant amount of latency. gpsd actually uses the "Buffer all, report on every packet, clear at start-of-cycle" policy.
Tempting — it would allow us to do gpsmon-like things with the daemon running — but a bad idea. It would make denial-of-service attacks on applications using the GPS far too easy. For example, suppose the control string were a baud-rate change?
When using gpsd as a time reference, one of the things we'd like to do is make the amount of lag in the message path from the GPS to the client as small and with as little jitter as possible, so we can correct for it with a constant offset.
A possibility we considered is to set the FIFO threshold on the serial device UART to 1 using TIOCGSERIAL/TIOCSSERIAL. This would, in effect, disable transmission buffering, increasing lag but decreasing jitter.
But it's almost certainly not worth the work. Rob Janssen reckons that at 4800bps the UART buffering can cause at most about 15msec of jitter. This is, observably, swamped by other less controllable sources of variation.
However, it turns out this is only an issue for EverMore chips. SiRF GPSes can get the offset from the PPS or subframe data; NMEA GPSes don't need it; and the other binary protocols supply it. Looks like it's not worth doing.
gpsd relies on the GPS to periodically send TPV reports to it. A few GPSes have the capability to change their cycle time so they can ship reports more often (gpsd 'c' command). These all send in some vendor-binary format; no NMEA GPS I've ever seen allows you to set a cycle time of less than a second, if only because at 4800bps, a full TPV report takes just under one second in NMEA.
But most GPSes send TPV reports once a second. At 50km/h (31mi/h) that's 13.8 meters change in position between updates, about the same as the uncertainty of position under typical conditions.
There is, however, a way to sample some GPSes at higher frequency. SiRF chips, and some others, allow you to shut down periodic notifications and poll them for TPV. At 57600bps we could poll a NMEA GPS 16 times a second, and a SiRF one maybe 18 times a second.
Alas, Chris Kuethe reports: "At least on the SiRF 2 and 3 receivers I have, you get one fix per second. I cooked up a test harness to disable as many periodic messages as possible and then poll as quickly as possible, and the receiver would not kick out more than one fix per second. Foo!"
So subsecond polling would be a blind alley for all SiRF devices and all NMEA devices. That's well over 90% of cases and renders it not worth doing.
First, defining some terms. There are three tiers of code in our tarballs. Each has a different risk profile.
Test clients and test tools: xgps, cgps, gpsfake, gpsprof, gpsmon, etc. These are at low risk of breakage and are easy to eyeball-check for correctness -- when they go wrong they tend to do so in obvious ways. Point errors in a tool don't compromise the other tools or the daemon.
Drivers for the individual GPS device types. Errors in a driver can be subtle and hard to detect, but they generally break support for one class of device without affecting others. Driver maintainers can test their drivers with high confidence.
Core code is shared among all device types; most notably, it includes the packet-getter state machine, the channel-management logic, and the error-modeling code. Historically these are the three most bug-prone areas in the code.
We also need to notice that there are two different kinds of devices with very different risk profiles:
A) Static-testable: These are devices like NMEA and SiRF chipsets that can be effectively simulated with a test-load file using gpsfake. We can verify these with high confidence using a mechanical regression test.
B) The problem children: Currently this includes Garmin USB and the PPS support (both serial and USB via Macx-1). In the future it must include any other devices that aren't static-testable. When the correctness of the drivers, or of the core code handling them, is in doubt, they have to be live-tested by someone with access to the actual device.
The goal of our release procedure is simple: prevent functional regressions. No device that worked in release N should break in release N+1. Of course we also want to prevent shipping broken core code.
For static-testable devices this is fairly easy to ensure. Now that we've fixed the problems with ill-conditioned floating-point, the regression-test suite does a pretty good job of exercising those drivers and the related core code and producing repeatable results. Accordingly, I'm fairly sure we will never again ship a release with serious breakage on NMEA or SiRF devices.
The problem children are another matter. Right now our big exposure here is Garmins, but we need to have good procedure in case we get our TnT support unbroken and for other ill-behaved devices we might encounter in the future.
Here are the new release-readiness states:
State 0 (red): There are known blocker bugs. Blocker bugs include functional regressions in core or driver code.
State 1 (blue): There are no known blocker bugs. 'scons testregress' passes, but problem-children (USB Garmin and PPS) have not been live-tested.
State 2 (yellow): There are no known blocker bugs. 'scons testregress' passes. Problem children have been live-tested. From this state, we drop back to state 1 if anyone commits a logic change to core code or the driver for a problem child. In state 2, devs with release authority (presently myself, Chris, and Gary) may ship a release candidate at any time.
State 3 (green): We've been in state 2 for 7 days. In state 3, a dev with release authority can call a freeze for release.
State 4: (freeze): No new features that could destabilize existing code. Release drops us to state 3.
When you do something that changes our release state -- in particular, when you commit a patch that touches core or a problem-child driver at state 2 -- you must notify the dev list.
Anyone notifying the list of a blocker bug drops us back to state 0.
When you announce a state change on the dev list, do it like this:
Red light: total breakage in Garmin USB, partial breakage in Garmin serial
Blue light: no known blockers, cosmetic problems in xgps
Yellow light: Garmins tested successfully 20 Dec 2007
Green light: I'm expecting to call freeze in about 10 days
Freeze: Scheduled release date 1 Feb 2008
This is a reminder for release engineers preparing to ship a public gpsd version. Release requires the following steps in the following order:
Check support requests, too, and Debian's GPSD buglist.
About 48 hours before release, announce that it's coming so people will have a day or so to get their urgent fixes in.
See the instructions near validation-list in the scons recipe.
It will need to be modified in SConstruct. Make sure the 'dev' suffix is gone.
The version number on the top line needs to match what's in SConstruct.
Look for compiler, scan-build, and Coverity warnings and fix them.
If it doesn't pass regressions, it isn't ready to ship. Run scons releasecheck and watch for errors on a representative selection of porterboxes.
Warning! Whenever the default leap second changes in gpsd.h some gpsd regression tests will break (times one second off) and the testloads will have to be rebuilt.
Point gpsmon at a GR601-W or other PPS-capable GPS and verify that PPS events are visible.
This is the revision the release will be built from.
scons release will tag the release, make the tarball, upload it to the hosting site, and refresh the website.
Bump the release number in SConstruct, adding a '~dev' suffix so tarball instances pulled from the repo will be clearly distinguishable from the next public release.
Go to the tracker and close all resolved bugs.
Announce the release on the announce list, and the resumption of regular commits on the dev list.