Thoughts on “Spatial Data Quality”

The authors did a good job summarizing the concepts related to spatial data quality in terms of the definitions and the types and sources of error. Although I do not completely agree with the opening statement that "geospatial data are a model of reality", I do agree that all geospatial data are "imprecise, inaccurate, out of date, and incomplete" to different degrees. The question for researchers is to what degree such imprecision, inaccuracy, outdatedness, and incompleteness should be accepted or rejected, and how we should assess data quality.

The authors presented the concepts of internal and external quality, where internal quality refers to the similarity between the data produced and the perfect data that should have been produced, and external quality refers to "fitness for use" or "fitness for purpose". I would argue that external quality should be the metric to look at. However, as the authors stated, there are very few evaluation methods for external quality. I think this is because external quality is not absolute but relative: it seems that a case-by-case assessment is needed, depending on what the "use" is. I'm curious to know if there is a generalized way of doing this. Moreover, with geospatial data coming from sources such as VGI, crowdsourcing, and sensors, the uncertainties are intensified, even as these sources provide more opportunities "for use". I think developing ways to assess external quality is of vital importance.
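To make the case-by-case idea a bit more concrete, here is a minimal sketch (in Python) of how a fitness-for-use check might encode one application's requirements. The field names and thresholds are my own assumptions, not anything from the chapter; the point is only that the same dataset can be fit for one use and unfit for another, which is exactly why a single generalized external-quality metric is hard to define.

```python
# Hypothetical sketch of a case-by-case "fitness for use" check.
# Field names (positional_accuracy_m, completeness, last_updated) and the
# example thresholds are assumptions for illustration only.
from dataclasses import dataclass
from datetime import date


@dataclass
class UseRequirements:
    """Thresholds a particular application demands of a dataset."""
    max_positional_error_m: float  # worst acceptable positional error (meters)
    min_completeness: float        # fraction of real-world features captured
    max_age_days: int              # how out of date the data may be


def fit_for_use(metadata: dict, req: UseRequirements, today: date) -> bool:
    """External quality is judged against a specific use, not in the abstract."""
    fresh_enough = (today - metadata["last_updated"]).days <= req.max_age_days
    return (
        metadata["positional_accuracy_m"] <= req.max_positional_error_m
        and metadata["completeness"] >= req.min_completeness
        and fresh_enough
    )


# The same (made-up) crowdsourced road dataset, judged against two uses.
crowdsourced_roads = {
    "positional_accuracy_m": 8.0,
    "completeness": 0.92,
    "last_updated": date(2023, 6, 1),
}
regional_routing = UseRequirements(max_positional_error_m=15.0,
                                   min_completeness=0.85,
                                   max_age_days=3650)
cadastral_survey = UseRequirements(max_positional_error_m=0.5,
                                   min_completeness=0.99,
                                   max_age_days=365)

print(fit_for_use(crowdsourced_roads, regional_routing, date(2024, 1, 1)))  # True
print(fit_for_use(crowdsourced_roads, cadastral_survey, date(2024, 1, 1)))  # False
```

Even this toy version shows why a generalized assessment is elusive: the thresholds themselves come from the intended use, so any "general" method would really be a method for eliciting and encoding per-use requirements.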
