Posts Tagged ‘Scale’

Scale: YouTube videos – National Council for Geographic Education

Saturday, February 16th, 2013

Here is a video link explaining scale from YouTube:

Hope you all enjoy the awkward scale guy!


Tipping the scale toward “science”

Thursday, February 14th, 2013

Marceau sums up issues pertaining to variability in scale, including scale dependence, scale domains and scale thresholds. At the crux of the article is an illustration of “a shift in paradigm where entities, patterns and processes are considered as intrinsically linked to the particular scale at which they can be distinguished and defined” (Marceau 1999). The need in any science to be wary of the scale at which the given work is conducted or phenomenon observed is absolutely (and relatively) critical. Different phenomena occur at different scales, and significant inaccuracies can creep into the data if this is not accounted for.

I have no qualms with most of Marceau’s article. However, I would like to address another small assertion the author makes in her conclusion: the shift in paradigm once more toward a “science of scale.” After our discussion a few weeks ago about rethinking GIS as a science, in addition to a tool, this struck me as particularly interesting. In its broadest sense, science is a body of rationally explained and testable knowledge. Understanding scale as a scientific field in this regard is difficult. I have no problem comprehending and accepting scale as a basic property of science, but can scale really be separated out as its own entity?

That said, accounting for all of the work involved in understanding thresholds and dependence, and the role that a varying scale can play in the world, is not trivial. I simply feel that whereas there are laws of physics, for instance, there is, as far as I know, no singular body of accepted knowledge surrounding scale, beyond the understanding that scale is a property of a phenomenon that must be noted and maintained as much as possible.

– JMonterey

How to handle scale?

Tuesday, February 12th, 2013

Any discussion in the initial stages of a GIS project includes an episode where people argue about the exact scale at which to carry out the analysis. The paper by Danielle J. Marceau gives a great overview of the various ways in which space and scale are conceived and how scale affects the results of analysis. However, many things in nature do repeat themselves very regularly with scale. An entire field of mathematics, fractal geometry, deals with objects that are self-similar at different scales. A small set of formulas can define such an object precisely, and those formulas are all that is needed to reproduce it at any scale.

So, is it accurate to say that many things in geography appear entirely different at different scales? Or do they change gradually with scale? If so, perhaps we can view these things as continuous functions of scale. Then it is possible that we will come up with equations that explain this gradual change. All we would require would be one equation to describe the process at a particular scale, and another to describe how the process changes with scale, and we would be able to reconstruct how the object or phenomenon looks at any required scale.
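The idea of a quantity governed by one rule at a base scale plus a rule for how it changes across scales can be sketched with Richardson's classic coastline power law. The constants below (including the 1.26 dimension, roughly that reported for the coast of Britain) are illustrative only, not drawn from any of the papers discussed:

```python
import math

def measured_length(ruler, c=1.0, d=1.26):
    """Richardson-style power law: the measured coastline length
    grows as the ruler shrinks, L(s) = c * s**(1 - d), where d is
    the fractal dimension (d = 1.26 is a made-up example value)."""
    return c * ruler ** (1 - d)

def estimate_dimension(rulers, lengths):
    """Recover d from the slope of log L vs log s via ordinary
    least squares: slope = 1 - d."""
    xs = [math.log(s) for s in rulers]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return 1 - slope

rulers = [100, 50, 25, 12.5, 6.25]
lengths = [measured_length(s) for s in rulers]
print(round(estimate_dimension(rulers, lengths), 2))  # 1.26
```

One equation gives the measurement at a particular ruler size; the exponent is the second equation, describing how the measurement changes with scale, exactly as suggested above.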

– Dipto Sarkar

Soft Boundaries, Scale and Geolibraries

Thursday, March 15th, 2012

The article by Goodchild et al. (1998) mainly dealt with finding a way to determine to what degree a footprint conceived by a user matches one that exists in the geolibrary. The difficulty is how to include ill-defined areas in the gazetteer, since their boundaries are not precise yet they hold significance in people’s lives. The authors sum it up nicely by declaring that “effective digital libraries will need to decouple these 2 issues of official recognition and ability to search, by making it possible for users to construct queries for both ill-defined and well defined regions, and for librarians to build catalog entries for data sets about ill-defined regions” (207). I agree with ClimateNYC. This was the exact problem for researchers building landscape ontologies and displaying features that have “gradual” boundaries, such as towns, beaches, forests and mountains. Field representations seem a viable option. However, if a neighborhood, for example, has a range of “soft” boundaries, I would argue in favor of the gazetteer adopting one of the more inclusive ones (so that a point considered only 30% part of Area A will also be included in the query), thus giving users the opportunity to filter through the data themselves.
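The "inclusive boundary plus user-side filtering" idea can be sketched as a toy query. The records, membership degrees, and thresholds below are entirely hypothetical, not taken from the Goodchild et al. paper:

```python
# Hypothetical gazetteer entries: each record carries a fuzzy
# membership degree (0..1) in the ill-defined region "Area A".
records = [
    {"id": "p1", "membership": 0.95},
    {"id": "p2", "membership": 0.60},
    {"id": "p3", "membership": 0.30},
    {"id": "p4", "membership": 0.05},
]

def query(records, threshold):
    """Return every record whose membership meets the threshold.
    An inclusive (low) threshold errs on the side of recall and
    lets the user narrow the result set afterwards."""
    return [r["id"] for r in records if r["membership"] >= threshold]

inclusive = query(records, 0.30)  # gazetteer's generous boundary
strict = query(records, 0.80)     # user's own stricter filter
print(inclusive)  # ['p1', 'p2', 'p3']
print(strict)     # ['p1']
```

The gazetteer commits only to the inclusive catalog entry; the stricter cut is a presentation-time choice left to the user, as argued above.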
The hierarchical nature of space is also an interesting topic raised by the authors. Should a search for Quebec also return datasets about Montreal? In addition to listing all well- and ill-defined places, it might also be favorable to separate the datasets into relevant scales. For instance, a user querying Quebec (or even Eastern Canada) is most likely looking for datasets at smaller (cartographic) scales than someone querying Montreal. Indeed, a search for Eastern Canada in the ADL brought me directly to Fredericton when I was expecting the whole area between Quebec and Newfoundland. Returning data at the wrong scale is simply unhelpful.


Scale, Uncertainty, and Spatial Data Libraries

Wednesday, March 14th, 2012

In the paper published by Goodchild et al. in 1998, the authors presented a definition of spatial data libraries and demonstrated how users access information by specifying multidimensional keys. Footprints were studied in detail, and the authors also demonstrated how to model fuzzy regions in spatial data libraries. The corresponding implementations were discussed, as well as the visualization. Finally, the goodness of fit was delineated.

I find that fuzzy modeling is directly related to previous topics in our class: scale and uncertainty analysis. Most of the geospatial information in spatial data libraries is modeled with probability, which entails uncertainty. But the magnitude of that uncertainty is largely (though not completely) determined by scale, including the query scale, the segmentation scale, the data analysis scale, the visualization scale and others. Therefore, fuzzy modeling may change with respect to different scales and uncertainties.

For example, if we request spatial information about “south China” in the CHGIS digital GIS library at Harvard University, the uncertainty in the footprint “south China” will cause unexpected results. Since there is no standard interpretation of “south China”, the places that different users choose to represent it may differ from each other to a large extent. Moreover, since the scale of “south China” is not clearly specified, one may choose a city, a province, or even several provinces to represent it. We can thus see that both scale and uncertainty play pivotal roles in spatial data library queries, and both should be taken into consideration in the design of spatial data libraries.
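As a toy illustration of footprint goodness of fit (the measure actually used by the paper may differ), the match between a user's footprint and a catalog footprint can be scored with a Jaccard-style overlap ratio over bounding boxes. The coordinates below are made up:

```python
def intersection_area(a, b):
    """a, b: axis-aligned boxes as (xmin, ymin, xmax, ymax)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def box_area(a):
    return (a[2] - a[0]) * (a[3] - a[1])

def jaccard(a, b):
    """Overlap area divided by union area: 1.0 is a perfect
    match, 0.0 means the footprints are disjoint."""
    inter = intersection_area(a, b)
    return inter / (box_area(a) + box_area(b) - inter)

user_footprint = (0, 0, 2, 2)     # hypothetical user sketch
catalog_footprint = (1, 1, 3, 3)  # hypothetical library entry
print(round(jaccard(user_footprint, catalog_footprint), 3))  # 0.143
```

For a fuzzy region like "south China", a ranked list of such scores over several candidate footprints would expose, rather than hide, the ambiguity the post describes.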


Thinking About Scale

Thursday, March 1st, 2012

I agree with cyberinfrastructure and henry miller in their thinking about how scale is presented in the paper by Dungan et al. The authors primarily provide examples from ecology, although they do discuss and provide context from other fields. I too think we must pay careful attention to what field we are working in when we think about the term scale.

My first introduction to the concept came from a political ecology class I took, where scale could be used outside of just its connotations in physical space and time. Scale, in this context, could be used to think about government, human communities, academic disciplines and more. Of course, political ecologists might often be more concerned with power relationships and how these relationships flow across different scales than we are in this course.

But, since we are looking at this in the context of GIS, I thought one interesting blog post that makes one of the same points as the authors of this article might be worth sharing (the pictures do it for me). Scale, just in a physical sense, matters enormously when investigating landscapes or thinking about maps. As a human geographer, I find the author’s points about the sample size of scale also hold a lot of implications for deciding the appropriate scale at which to study human subjects or their communities. As cyberinfrastructure notes, we should be mindful of how scale might adjust our methodologies or observations by paying attention to scale itself. But I would argue that we also need to think about what discipline we are working in (and its definition or varying usages of scale) when we consider scale shifts and how they might affect our research.


Scales in spatial statistical analysis: other definitions, other fields

Thursday, March 1st, 2012

Dungan et al. (2002) are detailed and clear in presenting scales in the field of ecology. Observation scales, scales of ecological phenomena, and scales used in spatial statistical analysis are thoroughly explained, along with their limitations. The three categories to which spatial scale can apply are the studied phenomenon, “the spatial units or sampling units used to acquire information about the phenomenon, and the analysis of the data” (627). When addressing the definitions of phenomena, observations and analysis, we should note that “some of these definitions overlap one another or are ambiguous” (629). In particular, how would we go about determining explicit definitions? Given one of the examples in the article, what would be an explicit definition of grain? The article could have mentioned ways to reach a consensus on the aforementioned definitions. However, the authors do raise awareness of issues regarding the role of scale in spatial statistical analysis that have been ignored by the literature, and note that “resolution involves more than observation grain alone” (630). They further state that ecologists wrongly use scale terminology when applying “large scale” to large phenomena and “small scale” to small phenomena, observations or analysis. Dungan et al.’s solution is to replace the word ‘scale’ with ‘extent’. Will such changes affect ecologists’ “arbitrary decisions” in their selection of sampling and analysis units? (638)

While the authors do indeed provide a balanced view of scale in spatial statistical analysis by delineating its advantages and limitations, I am curious about scale’s effect on fields beyond ecology. Dungan et al. mention that “many ecological attributes can be expected to average linearly…” (631). Although the linear outcome may work for ecologists, what about fields whose attributes result in non-linear outcomes? How will their data be analyzed? What will be the impact of the modifiable areal unit problem (MAUP)? Outside the field of ecology, work on complex networks is moving toward escaping the limitations of scale, where the generative models created aim to produce scale-free networks.
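The contrast between attributes that average linearly and those that do not can be shown with a toy aggregation; the values are arbitrary, and squaring stands in for any non-linear attribute:

```python
values = [1.0, 2.0, 3.0, 4.0]       # toy fine-grained measurements
groups = [values[:2], values[2:]]   # two equal-sized aggregation units

# Linear attribute: the mean survives aggregation unchanged.
overall_mean = sum(values) / len(values)
mean_of_means = sum(sum(g) / len(g) for g in groups) / len(groups)
assert mean_of_means == overall_mean  # 2.5 either way

# Non-linear attribute: squaring then averaging is not the same
# as averaging then squaring, so the coarse units misstate it.
mean_of_squares = sum(v * v for v in values) / len(values)
square_of_mean = overall_mean ** 2
print(mean_of_squares, square_of_mean)  # 7.5 6.25
```

For non-linear attributes, the value computed from aggregated units depends on the units chosen, which is precisely where the MAUP worry in the post bites.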

-henry miller

Clarify “Scale” in Different Research Domain

Thursday, March 1st, 2012

In the paper by Dungan et al. (2002), the definition of the term “scale” is examined across spatial research domains. They explore “scale” in relation to the phenomenon being studied, the spatial or sampling unit, and data analysis. Within different research domains they find different synonyms for “scale”, including extent, grain, resolution, lag, support and cartographic ratio. Case studies are provided to illustrate the different definitions of “scale” across research topics. The Modifiable Areal Unit Problem (MAUP) is identified, and the authors present several suggestions to avoid it.

Most of the examples in this paper come from ecology, so the diversity of “scale” is not fully explored. The authors mention “scale” in remote sensing and refer to it as a synonym of “resolution”. But “resolution” in remote sensing involves spatial, spectral and temporal resolution. In image data analysis, the word “scale” is more often used as a statistical scale, related to the analysis unit rather than the observational or sampling unit. For geospatial database design and implementation, the words “scale” and “large-scale” have significantly different meanings: large-scale data do not only mean huge volume, but also heterogeneity (e.g., different spectral and spatiotemporal resolutions) and complexity (e.g., data with different formats, noise rates, and distributed storage). Therefore, I agree with the authors of this paper that “scale” should be specified with respect to the context in which it is used.

Different scales give us different approaches to studying our targets. By changing the scale, we actually change our methodology and observation methods. Therefore, more attention should be given to “scale” itself, not just its definition.


Dungan et al. and Scale

Sunday, February 26th, 2012

Dungan et al. (2002) explicitly define various terms related to scale. They offer a statistical approach to demonstrate that changes in the size of sampling or analysis units can affect the detection of a phenomenon.

The authors’ emphasis that the issue of scale is one of choosing the correct unit size is an important one. As geographers, we may take this distinction for granted: we may know through procedures like georeferencing that an acceptable Root Mean Square Error depends on the map’s scale, rather than simply aiming for the lowest RMSE. It can be difficult to convey to people from other walks of life that the best answer is not the most precise one, but that it depends. From a statistical standpoint, this is often the case when training classifiers for remote sensing. A finely tuned sample spectral signature can result in overfitting, or in a ‘pure’ sample that is non-representative of the heterogeneity of units of the same kind. Such overfitting may be statistically accurate, but it produces results that are nonsensical in reality.
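The scale-dependent RMSE point can be sketched with one common rule of thumb: an acceptable georeferencing error corresponds to a fixed plotted distance at map scale. The 0.5 mm figure below is one convention among several (accuracy standards differ), not a universal threshold:

```python
def ground_tolerance_m(scale_denominator, plot_tolerance_mm=0.5):
    """Convert a fixed plotted tolerance (here 0.5 mm on the map,
    an assumed rule-of-thumb value) into a ground distance in
    metres for a map at 1:scale_denominator."""
    return plot_tolerance_mm / 1000.0 * scale_denominator

print(ground_tolerance_m(24_000))  # 12.0  -> 12 m is fine at 1:24,000
print(ground_tolerance_m(1_000))   # 0.5   -> the same 12 m RMSE would
                                   #          be unacceptable at 1:1,000
```

The same numeric RMSE is "good" or "bad" only relative to the map's scale, which is the "it depends" answer the post describes.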

The issue of the MAUP is discussed and geostatistical methods are considered. It is assumed that a full count or census of the extent is the control method and captures all significant patterns. If this is the case, I wonder whether increased computational speed, more data, and the general realm of data mining (or technological advances from geospatial cyberinfrastructure, such as parallel computing) can avoid the MAUP. Such an exploratory method need only consider a large extent to find all the patterns within it (arbitrarily large extents are easy to choose). Does having all the data and being able to compute over it all negate the need to consider appropriate sample unit sizes?
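A toy illustration of the scale effect underlying the MAUP: aggregation alone can erase a pattern completely, which suggests that raw computing power over a large extent does not by itself settle the unit-size question. The values below are contrived to make the effect stark:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def aggregate(xs, block):
    """Average consecutive blocks of size `block` -- a crude
    stand-in for zoning fine-grained cells into larger units."""
    return [sum(xs[i:i + block]) / block for i in range(0, len(xs), block)]

cells = [1, 9, 2, 8, 3, 7, 4, 6]        # fine-grained cell values
print(variance(cells))                   # 7.5
print(variance(aggregate(cells, 2)))     # 0.0 -- the pattern vanishes
```

Every pair averages to 5, so the coarser zoning reports zero variability even though the census of cells shows plenty; a statistic computed at one zoning is not guaranteed to hold at another, no matter how much data is processed.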