Atkinson and Tate: links between scale and uncertainty

In this article, the authors discuss the problems associated with re-scaling data and possible tools for addressing them. Re-scaling is required in order to compare data sets that are collected at different scales. I find the article extremely dense and challenging, since it is very heavy on statistical theory, and the examples provided for context are themselves quite hard to understand. The article did highlight several topics that also matter in the study of uncertainty, namely the modifiable areal unit problem (MAUP) and spatial autocorrelation.

It is important to understand heterogeneity at scales finer than the scale of the sampling. I wonder, however (and the authors may have answered this question in language that I could not understand), how one incorporates heterogeneity at larger scales when scaling up. While I came to understand the MAUP as a product of aggregating small-scale data to a larger scale and masking heterogeneity in the process, I suppose that it could equally be described as a process of disaggregating large-scale data to a smaller scale, except that heterogeneity must then be interpolated when going from a large to a small scale.

Furthermore, though interpolation, a crucial tool of re-scaling, was not prominent in my own review of the literature, it is relevant to the topic of uncertainty because it involves creating data where no actual measurements were taken, so the uncertainty is essentially absolute. I’m actually not sure whether interpolation can be approached from a position of error, vagueness, or ambiguity. I suppose that error would be applicable, because an interpolated value could be cross-referenced against samples from the field, as sketched below.
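To make that last point concrete, here is a minimal sketch of my own (not taken from the article, and using made-up sample data and a simple inverse distance weighting scheme rather than whatever method the authors prefer) of how interpolated values can be cross-referenced against field samples: leave each measured point out, interpolate it from the remaining points, and compare the estimate with what was actually measured there.

```python
# A rough illustration of checking interpolation error against field samples.
# Assumes inverse distance weighting (IDW) and synthetic data; this is not the
# authors' method, only a way of seeing how "error" could apply to interpolation.
import numpy as np

def idw_interpolate(xy_known, z_known, xy_target, power=2.0):
    """Inverse-distance-weighted estimate at a single target location."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0):                 # target coincides with a sample point
        return z_known[np.argmin(d)]
    w = 1.0 / d**power                 # nearer samples get more weight
    return np.sum(w * z_known) / np.sum(w)

def leave_one_out_rmse(xy, z, power=2.0):
    """Interpolate each field sample from the others; report the RMSE."""
    errors = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        z_hat = idw_interpolate(xy[mask], z[mask], xy[i], power)
        errors.append(z_hat - z[i])
    return np.sqrt(np.mean(np.square(errors)))

# Hypothetical field samples: (x, y) locations and measured values.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(30, 2))
z = np.sin(xy[:, 0] / 20) + 0.1 * rng.normal(size=30)
print(f"Leave-one-out RMSE: {leave_one_out_rmse(xy, z):.3f}")
```

The leave-one-out error only tells us how well the interpolator reproduces values at locations where we happen to have measurements; between the samples, where nothing was ever measured, the uncertainty remains unquantified, which is the point I was trying to make above.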

  • Yojo
