Posts Tagged ‘geocoding’

geocode all the things

Friday, March 22nd, 2013

Goldberg, Wilson, and Knoblock (2007) note that geocoding match rates are much higher in urban areas than in rural ones. The authors describe two routes for alleviating this problem: geocoding to a less precise level, or including additional detail from other sources. However, both routes result in a “cartographically confounded” dataset, where the degree of accuracy is a function of location. Matching this idea (that urban areas, and areas previously geocoded with supplementary information, are more accurate than previously un-geocoded rural areas) with the idea that geocoding advances alongside technology and its use, we could say that eventually we will be able to geocode everything on Earth with good accuracy. I think of it like digital exploration: there will come a time when everything has been geocoded! Nothing left to geocode! (“Oh, you’re in geography? But the world’s been mapped already.”)

More interesting to think about, and what AMac has already touched on, are the cultural differences in wayfinding and address structures. How can we geocode “the yellow building past the big tree”? How can we geocode description-laden indigenous landscapes with layers of history? And in geocoding historical landscapes, how do we quantify the different levels of error involved when we cannot even quantify positional accuracy? These nuanced definitions of the very entities being geocoded pose a whole different array of problems to be addressed in the future.

-sidewalkballet

Putting Geography on the Map

Wednesday, March 20th, 2013

Roongpiboonsopit and Karimi’s 2010 comparison study of five free online geocoders is an example of an important process in weeding out poor geocoders and working toward one that performs accurately and reliably. Considering that geocoding is at the heart of many GIS applications, especially those involving the Geoweb, spatial accuracy is key. The authors used empirical analysis to determine the “best” geocoders based on positional accuracy and a number of other metrics. They concluded that Google, MapPoint (Microsoft), and Yahoo! are all quite effective, while Geocoder.us and MapQuest are less so.
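The core of such a comparison is measuring how far each geocoder’s returned point falls from a ground-truth location. A minimal sketch of that metric, using the haversine great-circle distance (the geocoder names and coordinates below are invented for illustration, not taken from the study):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000  # mean Earth radius in metres
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical test address: ground truth vs. two geocoders' output
truth = (45.5048, -73.5772)
results = {
    "geocoder_a": (45.5049, -73.5770),
    "geocoder_b": (45.5100, -73.5900),
}

# Positional error per geocoder; the smallest error "wins"
errors = {name: haversine_m(*truth, *pt) for name, pt in results.items()}
best = min(errors, key=errors.get)
```

Repeated over a large sample of addresses, summary statistics of these distances (mean, median, percentage within some threshold) are the kind of accuracy metric used to rank the services.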

In thinking about geocoding and the development aspects of the geocoding process, I realized that geocoding development is much like development in other topics we’ve covered, such as LBS and AR. As all of these progress and become more accurate and lifelike, they are approaching a level of artificial intelligence that is simultaneously creepy and cool. For instance, if a geocoder like Google’s uses all of the information it already has on us, there will not be any need for formal geographic indicators like street addresses, coordinates, or even official place names. If the reference database were to include our own vernacular and preferences along with the official names and spatial reference algorithms, then simply typing or saying “home” would pinpoint our location on a map quickly and easily. This isn’t even about spatial accuracy anymore, but rather “mental” accuracy. Perhaps I’m getting ahead of myself, but I see the possibilities with geocoding not just in terms of developing a usable application for plotting points on a map, but also in terms of expanding how we think of space (ontologies) and how we conceptualize our environment (cognition). Integrating new tools into the pre-existing algorithms has changed, and will continue to change, how we live our lives.
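The “home” idea amounts to putting a personal vernacular layer in front of the formal reference database: user aliases are resolved first, then handed to the usual lookup. A minimal sketch, with all names and coordinates invented for illustration:

```python
# Formal reference database: canonical address strings -> coordinates
# (a stand-in for a real geocoder's reference layer)
FORMAL_DB = {
    "805 sherbrooke st w, montreal": (45.5048, -73.5772),
}

# Personal vernacular layer: the user's own names for places
USER_ALIASES = {
    "home": "805 sherbrooke st w, montreal",
}

def geocode(query, aliases=USER_ALIASES, db=FORMAL_DB):
    key = query.strip().lower()
    key = aliases.get(key, key)   # vernacular lookup first
    return db.get(key)            # then the formal reference database
```

Typing “home” then resolves to the same coordinates as the full formal address, which is the “mental accuracy” idea: the system matches the user’s conception of place, not just the official gazetteer.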

– JMonterey

Geocoding Errors

Tuesday, March 19th, 2013

Goldberg et al.’s article “From Text to Geographic Coordinates: The Current State of Geocoding” provides an in-depth view of recent geocoding, its process, and the errors involved. That said, I wish more had been said about the errors that arise when database managers “correct” system errors in coding. In essence, when an error is “corrected” by a database manager, future geocoding tries to reconcile the changes, which often leads to more error as the system tries to place other data into the “corrected” group or arrange the data to make sense next to the “correction”.

I have experienced this problem first-hand when trying to geocode new points within a previous geocoding database. What happened is that a previous manager “corrected” different geocoding errors by manually entering data points as a new category, which conflicted with several other existing categories in the database. As a result, when I entered my points they were all coded as the new category and located in the wrong areas, since the manual fixes superseded the software’s operational coding for placement whenever the points did not match a known location. If I had not gone back to cross-reference the points, I would never have found the geocoding discrepancies and “corrected” the points myself (although this may cause future errors that are not yet apparent).
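The failure mode here is that a manual override table takes precedence over the geocoder’s own matching, so later records are silently pulled into the ad-hoc category. A minimal sketch of that precedence (the category names and streets are invented for illustration):

```python
# Reference database the geocoding software matches against
REFERENCE = {"elm st": "zone_1", "oak ave": "zone_2"}

# Manual "corrections" left behind by a previous database manager
MANUAL_OVERRIDES = {}

def assign_category(street, overrides=MANUAL_OVERRIDES, ref=REFERENCE):
    if street in overrides:      # manual fix wins unconditionally
        return overrides[street]
    return ref.get(street, "unmatched")

# A past "correction" reroutes a street into a new ad-hoc category...
MANUAL_OVERRIDES["elm st"] = "zone_new"

# ...so every point entered later on that street inherits the override,
# even though the reference database still says zone_1.
```

Because the override is checked before the reference match, nothing in the normal workflow flags the discrepancy; it only surfaces when someone cross-references the output against the original reference layer, as described above.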

In the article, the mention of E911 helping with address accuracy is an important step toward reducing error, but I believe it may become redundant, since GPS-equipped sensors are becoming standard on most utility entry points at every address. For example, Hydro-Quebec is installing digital meters with GPS, pinging capability, and wireless communications. These devices are then geocoded to a database and could therefore provide accurate, accessible location referencing that is self-correcting, reducing error for every address. As such, is the proliferation of intelligent technology reducing error, or adding to the complexity of geocoding?