Scale – more complicated than we thought

I realised after reading the third page of the article that there had not actually been an attempt to define ‘scale’. No matter, I’m guessing from the reading that it is the name of a class of objects from the ontological perspective (did I get that right?). What I do disagree with (with my own limited experience in remote sensing) is the statement that “pixel size is commonly used as an approximation to the sampling unit size” (page 3). People doing remote sensing are very aware of the limitations of the ‘resolution’ of their data, and know that a pixel will often contain the sample plus data about whatever surrounds it. When trying to extract the signature of an object smaller than a pixel, it is unlikely that an analyst would conclude that all the data in the pixel containing it represents that object alone.
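To make the mixed-pixel point concrete, here is a minimal sketch of the linear mixing model that underlies much sub-pixel analysis. The band values and the two endmembers (a sub-pixel target and its background) are invented purely for illustration; a real unmixing would also constrain the fractions to be non-negative and sum to one.

```python
import numpy as np

# Hypothetical endmember spectra over 4 bands; values are made up
# for illustration only.
target = np.array([0.45, 0.50, 0.30, 0.60])      # the object of interest
background = np.array([0.10, 0.15, 0.40, 0.20])  # the surrounding cover

# If the target occupies only 30% of the pixel, the recorded spectrum
# is a mixture, not the target's signature.
fraction = 0.3
pixel = fraction * target + (1 - fraction) * background

# Naive linear unmixing: stack the endmembers as columns and solve
# for the fractions by least squares.
E = np.column_stack([target, background])
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(fractions)  # ~[0.3, 0.7]: the pixel signal is mostly background
```

This is exactly why an analyst would not treat the whole pixel value as the target’s signature: the recorded number is dominated by whatever else the pixel contains.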


The section on multivariate relationships and other problems with changing components of scale was interesting, but a little worrying. This is one of the reasons why geographers need to exist. The article says that “prior to a field study, one should check that n provides enough power for detecting the hypothesized pattern, given the anticipated size of the [spatial lag] effect”. But what do we do when n is severely limited to begin with? It is certainly a good idea to maximise n, but more often than not, data collection is constrained by budget, time, and the number of available subjects. Trying to account for all the other guidelines in the considerations section raises similar practical problems.
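For a sense of what the article’s advice looks like in practice, here is a sketch of a conventional (aspatial) power check using statsmodels; the effect size and the sample cap are made-up numbers. Note that spatial autocorrelation shrinks the effective n below the nominal n, so for spatially lagged data this calculation is an optimistic upper bound.

```python
from statsmodels.stats.power import TTestIndPower

# A simple, aspatial stand-in for the article's advice: given an
# anticipated effect size, how many samples per group are needed
# for 80% power at alpha = 0.05?
analysis = TTestIndPower()
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_needed))  # ~64 per group for a medium effect

# Conversely: if budget caps us at n = 20 per group, what power
# do we actually have?
achieved = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(round(achieved, 2))  # ~0.34, well short of convention
```

The second call is the uncomfortable one: when n is fixed by budget rather than by design, the check tells you what you cannot detect, and no amount of guideline-following changes that.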
What really resonated with me was the conclusion that “there is not one ‘problem of scale’, but many”. What does this mean for members of the general public wishing to do geographic analysis in contexts such as PPGIS with VGI? What we need is more explicit documentation, in methods sections, of all the components of scale. Without this, it will be difficult to comment on the accuracy of studies.
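As a sketch of what such explicit documentation might look like, here is a hypothetical methods-section metadata record naming the components of scale. The field names and values are mine, not a standard, and draw on the usual grain/extent/support vocabulary.

```python
# A hypothetical, minimal metadata record making the components of
# scale explicit; every value below is an invented example.
scale_metadata = {
    "phenomenon": "urban tree canopy",
    "grain": "30 m pixel (Landsat 8 OLI)",       # resolution of the data
    "sampling_unit": "90 m x 90 m field plot",   # what was actually measured
    "support": "mean canopy fraction per plot",  # how values are aggregated
    "extent": "city boundary, ~400 km^2",        # study area
    "n": 48,                                     # number of sampling units
}
for key, value in scale_metadata.items():
    print(f"{key}: {value}")
```

Even a record this small would let a reader of a PPGIS or VGI study judge whether the grain and support match the question being asked.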


-Peck
