GIScience, Geovisualization, and Shifts in How We View Data

The article by Sarah Elwood touches on how the evolution of spatial data has spurred new questions (or continues to fuel existing ones) about how we can handle these datasets in ways that let us analyze them and make meaning from them. As new geovisualization technologies emerge, and as long as people continue to freely post geospatial information that can be collected, we face a double-edged sword. Information about people’s livelihoods at the micro level has never been so accessible, yet the challenge these new geovisualization technologies pose is what Elwood calls a conundrum of “unprecedented volumes of data and unprecedented levels of heterogeneity” (Elwood, 2009). Applying GIScience theory and research, such as assessing the ontology of the data, mathematical algorithms, and visual modeling techniques, has contributed to work on data integration and heterogeneous qualitative data, issues that extend beyond new geovisualization technology.

Still, even with bigger, better, newer algorithms that can automate data integration, we must recognize that the categorizations in a particular dataset are context dependent. Labeling something can therefore carry a lot of weight. What people define as a “bad neighborhood” can have a multitude of meanings (bad as in high crime, noise, or a demographic you do not mix well with?), and those meanings can have significant social and political implications. If datasets are to be combined, then finding a proper categorization scheme for the combined data must also be thrown into the mix of data integration challenges. Perhaps this is where metadata can really shine, if it can record how a dataset was derived and how its categories were defined. I don’t know about you, but my appreciation for “data about data” has definitely grown since GEOG 201.
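To make the metadata point concrete, here is a minimal sketch (all field names and values are hypothetical, not from Elwood's article) of the kind of record that could travel alongside a dataset so that a loaded label like “bad neighborhood” carries its context with it:

```python
import json

# Hypothetical metadata record: documents how the dataset was derived
# and how its categories were defined, i.e. "data about data".
metadata = {
    "dataset": "neighborhood_quality_2009",          # made-up dataset name
    "derived_from": "volunteered geospatial posts",  # how the data was collected
    "category_scheme": {
        "bad": "composite of reported crime and noise complaints",
        "good": "few or no reports in either category",
    },
    "caveat": "categories reflect contributors' judgments, not an objective measure",
}

# Serializing the record with the data keeps the context attached
# when datasets from different sources are later combined.
print(json.dumps(metadata, indent=2))
```

The point is not the specific fields but that the categorization scheme itself is written down, so anyone integrating this dataset with another can see whether the two definitions of “bad” are even comparable.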
