Archive for February, 2013

Spatial data mining and spatial analysis

Friday, February 15th, 2013

I am late to post and I think everyone else has already posted lots of excellent ideas about these topics! I found the spatial data mining article very interesting. I think that statistical modeling and machine learning are two disciplines that share a lot in common and in some cases may even be redundant versions of one another. When I read papers by computer scientists applying machine learning to data, it seems that the goal (in this case mostly through unsupervised data mining) is to improve predictive ability, often measured by the area under an ROC curve, for example. The goal of models in statistics is often to estimate (causal) effects, which requires a different conceptual framework for model building and selection in order to avoid, for example, controlling for a variable that lies on the causal pathway.
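
As a concrete illustration of that predictive yardstick, here is a minimal sketch (my own toy numbers, not from any of the readings) of computing the area under the ROC curve for a set of predicted probabilities against known outcomes:

```python
# A tiny, hypothetical example of the AUC measure of predictive ability.
# The labels and scores below are invented purely for illustration.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]                  # known outcomes
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]    # a classifier's predicted probabilities
print(roc_auc_score(y_true, y_score))        # 1.0 = perfect ranking, 0.5 = chance
```
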
Additionally, many of the issues in spatial data mining and spatial statistics mirror one another. Correlation and dependence in space and time create problems for the traditional parameter estimators in statistics and for the traditional classification, prediction and clustering algorithms in machine learning. It's not enough just to consider spatial dependence; it's also important to consider the nuances of spatial data that may make the goals different, such as the point the authors make below figure 3.2, where they argue that spatial accuracy should not be measured in a binary (correct/incorrect) sense but should account for how close, spatially, the classification was. I would really like to understand more thoroughly how statistics and machine learning algorithms align and differ. It's clear this is a highly interdisciplinary field: we need people trained in GIS, computer science, and statistics!
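
To make that distance-aware accuracy idea concrete, here is a hedged sketch (not the authors' exact metric, and with invented coordinates) contrasting a binary score with an average-distance-to-nearest-prediction score:

```python
# Toy illustration: score predicted sites by how far they fall from the
# true sites, rather than by exact-match correctness. All values are made up.
import numpy as np
from scipy.spatial.distance import cdist

actual = np.array([[2.0, 3.0], [8.0, 8.0]])                  # true nest locations
predicted = np.array([[2.5, 3.5], [7.0, 9.0], [0.0, 0.0]])   # cells flagged by a classifier

# Binary view: no prediction coincides exactly with a true site, so 0/3 "correct".
# Distance-aware view: average distance from each true site to its nearest prediction.
d = cdist(actual, predicted)
print(d.min(axis=1).mean())   # lower is better; nearby misses are rewarded
```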

-Kathryn

Yes, mining for spatial gold!

Friday, February 15th, 2013

I appreciate the title of JMonterey’s blog! Spatial data mining, as described in the article by Shekhar et al., seems exactly like extracting precious resources from underground, out of a seemingly ‘homogeneous’ mass of data. The article gives great examples of the type of ‘gold’ that we can get from data mining processes and of its applications (e.g., crime, safety, floods). The article points out to me, again, that the techniques and methods of data mining depend on the application and on the type of information the researcher is looking for. Recall that the data mining example in the article starts from a question about bird-nesting habits. This implies choosing the right set of data for our question and establishing where we are going to find the information we are looking for.

I’m left with the question of time… The ‘gold’ or outliers are spatial objects with non-spatial characteristics that differ from their neighbors’ characteristics. But what is a neighbor? I’m wondering where the notion of time fits in the data mining models, because two spatial neighbors could have a very distant relationship if we consider time, processes, change and interactions (recall the notion of absolute space/relative space in Marceau’s article on the problem of scale).
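
One common way to operationalize ‘neighbor’ (my assumption, not something the article prescribes) is to take each object's k nearest points in space and flag objects whose non-spatial attribute departs sharply from the neighborhood average. A rough sketch with made-up data:

```python
# Hypothetical spatial-outlier check: compare each point's attribute to the
# mean of its 5 nearest spatial neighbors. Data and the planted outlier are invented.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
xy = rng.uniform(0, 10, size=(100, 2))            # locations
attr = xy[:, 0] + rng.normal(0, 0.2, 100)         # a spatially smooth non-spatial attribute
attr[7] += 5                                      # plant one spatial outlier

_, nn = cKDTree(xy).query(xy, k=6)                # each point plus its 5 nearest neighbors
neigh_mean = attr[nn[:, 1:]].mean(axis=1)         # drop column 0 (the point itself)
print(np.argmax(np.abs(attr - neigh_mean)))       # should print 7, the planted outlier
```

Note that this defines neighbors purely in space; nothing in a sketch like this accounts for the time, processes and interactions raised above.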

S_Ram

Scale

Friday, February 15th, 2013

I found myself more interested in what you guys had to say about scale than in the texts! Or at least more inspired…
One point that is really interesting is that we have fancy techniques for choosing the right scale at which to study a problem, but in the end the scale problem persists. Here again I think this is a good scientific problem, because the researcher has to question and analyze the tools he is using when defining his questions and his methods. This is why I disagree with Point McPolygon's point at the end of his blog that we should develop the technology further rather than attempt to define the right scale of study. I think that revealing scale issues is part of the science, and that it is important to study them instead of merely relying on the technology, which is itself a product of human development and thus of our own understanding of the scale issue…
Wyatt asks if we can be accountable for issues of scale. Questioning the appropriate scale to use, and the appropriate way to transfer information across scales, may reveal a problem that we cannot completely solve, but being transparent about the process is a step towards accountability and towards not lying with the map.
The example of adapting to climate change brought up by Victor Manuel is really interesting from my point of view. Different strategies are implemented at different scales of governance (country, region, municipality, community), using different types of information or different ‘granularities’ of information. On top of that, dynamics occur between the scales of governance, and thus between the corresponding sets of information. Defining the appropriate scale of information would depend on the scale of governance that you’re studying. Most important in this case, though, is probably the problem of transferring information from one scale to another. This is a very difficult task because of the issues related to the MAUP mentioned in the texts, but also because of the meanings that the different scales of governance give to geospatial areas, entities, concepts and processes.

S_Ram

Lost in the Data

Friday, February 15th, 2013

Guo and Mennis outline the emerging field of spatial data mining for the introduction to a special journal issue.  Work in this field has been prompted by the ever-increasing availability of finer and finer grained data, from an exponentially increasing number of sensors ranging from satellites to cell phones to surveillance cameras.  There has been a great interest in tapping into and making sense of these streams of “big data”, but in order to do so we must develop new ways of exploring, processing and analyzing them.  This is essentially what was alluded to in relation to geostatistics last week: our data and technology have surged way ahead of our available methodological toolbox.

One of the biggest issues with these new data sources is that they are largely unstructured: for example, a data point may just be a string of text such as a tweet, with some locational metadata.  In order to analyze a large number of unstructured data points, it is necessary to impose a structure via classification.  This is no easy task!  Although programs such as qualitative coding software packages can group phrases by theme, most existing classification algorithms necessitate a training process in which the user manually tweaks the parameters of the classifier on a subset of the data.  The development of foolproof unsupervised classifiers that can not only sort unstructured data effectively, but also do so in a way that the output is of use to researchers, is a major challenge in this domain.
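
As a rough sketch of what “imposing structure” on unstructured text might look like, here is an unsupervised example on invented tweets; the vectorizer, the choice of two clusters, and the texts themselves are all assumptions made purely for illustration:

```python
# Hypothetical example: turn a handful of raw tweets into numeric features
# and let an unsupervised algorithm group them, with no manual training labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "heavy flooding on the road by the river",
    "river flooding is getting worse downtown",
    "great coffee at the new cafe",
    "the cafe serves great espresso",
]
X = TfidfVectorizer().fit_transform(tweets)                     # unstructured text -> features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. the two flood tweets in one cluster, the two cafe tweets in the other
```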

A key idea related to the advent of big data is the long-standing trade-off between resolution and extent in spatial scale.  Though big data presents us with both extent and resolution of unprecedented magnitude, there still remain the limits imposed by humans’ own cognitive abilities.  Computer programs developed to make sense of big data must classify and generalize the raw input in a way that allows geographers to effectively navigate this sea of data, rather than simply leaving us lost.

-FischbobGeo

Mining for spatial ingenuity

Friday, February 15th, 2013

The article “Spatial Data Mining”, by Shashi Shekhar, explains what data mining is and how it has made great strides in areas such as location prediction, spatial outlier detection, co-location mining, and clustering.  Data mining is the process of finding meaningful patterns or information in a large data set that would otherwise have remained imperceptible. This can be done in many ways, using statistical tools, modeling, or a combination of both. The modeling usually builds a model from a training set of data and then applies it to a separate testing set. One of the classic challenges of data mining is to take spatial autocorrelation and spatial heterogeneity into account during this process.
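
As the sketch below shows (my own toy data and model choice, not the article's), the train/test workflow itself is straightforward; the catch is that a plain random split ignores spatial autocorrelation, since nearby and therefore similar observations end up in both sets:

```python
# Hypothetical train/test example on made-up coordinates and labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 2))            # fake x, y coordinates
y = (X[:, 0] + X[:, 1] > 100).astype(int)         # a spatially structured label

# Random split: neighbouring (highly correlated) points land in both sets,
# which tends to make the test score optimistic for spatial data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.score(X_test, y_test))
```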

The main unsolved problems of spatial data mining lie at the intersection of geospatial data and information technology. For us as GIScientists, this relates directly to many of the other subjects that we study. As is often the case in more modern applications of this science, we are limited more by methodology than by the technology itself. The advantage of an approach such as this is that patterns may emerge that nobody had previously considered, as opposed to running statistical tests on hypothesized meanings of datasets. This technology opens a whole new world of possible advances in knowledge, both relating to GIScience and otherwise.

Pointy McPolygon

The article every undergraduate geographer needs to read

Friday, February 15th, 2013

As a geographer, I find Danielle Marceau’s article “The scale issue in social and natural sciences” easily digestible. Familiar concepts such as the modifiable areal unit problem (MAUP) are presented in a very clear manner. The article focuses mainly on the effects of scale and aggregation on spatial inference, and on linking spatial patterns and processes that occur across different scales.

Predicting and controlling for the MAUP can be very difficult, as the author points out. New technologies, with their advanced data acquisition and analysis, may be able to help us solve this problem; however, even though these technologies exist, conducting such a study would be nearly impossible. So many processes are connected across varying scales that when you make statistical inferences about specific phenomena, those inferences surely cannot account for all of them. We may use GIS to create maps at multiple scales, run statistical tests and analyze which scales are appropriate; however, even in the creation of these ‘test scales’ there is inherent bias, in that we assume we already know the limits of the scales at which these processes operate.
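
A toy demonstration of the scale effect behind this worry (my own fabricated data, not from the article): the same points aggregated to coarser and coarser ‘test scales’ yield noticeably different correlations between two attributes.

```python
# Hypothetical MAUP-style illustration: aggregate point data to grids of
# increasing cell size and watch the correlation between two attributes change.
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 64, size=(2, 2000))         # point locations
a = x + rng.normal(0, 20, 2000)                   # two attributes, weakly related
b = x + rng.normal(0, 20, 2000)

print("points:", round(np.corrcoef(a, b)[0, 1], 2))
for cell in (4, 8, 16):                           # grid cell sizes ("test scales")
    idx = (x // cell).astype(int) * 1000 + (y // cell).astype(int)
    keys = np.unique(idx)
    a_mean = np.array([a[idx == k].mean() for k in keys])
    b_mean = np.array([b[idx == k].mean() for k in keys])
    print("cell size", cell, round(np.corrcoef(a_mean, b_mean)[0, 1], 2))
```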

Though technology has advanced, I believe this comes down to a philosophical debate about science and about space: can we attempt to identify every exchange between processes across scales, or do we simply attempt to understand phenomena at what seems to be the most intuitive and apparent scale? We may be able to use technology to improve the accuracy of our models, but only to a certain point. At that point, perhaps efforts would be better spent improving the processing capacity of the technology itself, rather than attempting to pin down an appropriate scale for phenomena that, in the end, we cannot even know is correct.

Point McPolygon

Spatial Data Mining and Geographic Knowledge Discovery

Thursday, February 14th, 2013

Unlike some other fields in GIScience, advances in spatial data mining and geographic knowledge discovery are not only needed but time sensitive. The rate at which data is collected and produced is accelerating with little end in sight. This is due not only to the number of observations, but to the number of times an observation is made. Montreal’s public bus system, for instance, was in the dark ages until only a year or two ago. Now data is constantly collected from bus-mounted GPS units [Amyot]. At this rate, GISystems could drown in the surge of oncoming data. That is not to say that excess data is a bad thing. In a world in which one must choose between too much and too little data, too much, I think, wins out. That doesn’t mean an excess of input is not a double-edged sword.

Algorithms, data structures, and hardware limitations constrain the future of the science and must be improved upon. On the note of a double-edged sword, however, my guess is that as these factors are improved, the incoming stream of data will only increase as well. What worries me are statements like the one made by Guo and Mennis: “more recent research efforts have sought to develop approaches to find approximate solutions for SAR so that it can process very large data sets.” I understand that projects often have deadlines, and that researchers may have other places to be or feel obliged not to hog all the computing power. At the same time, the benefits of computing exact solutions on complete data sets likely outweigh the computing costs. Then again, the largest data set I have ever created on my own was an Excel spreadsheet no bigger than 1 megabyte.
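
For a sense of why approximations get proposed at all, here is a rough, hypothetical sketch (random weights, arbitrary sizes) of the cost that exact SAR estimation keeps paying: the log-determinant of an n-by-n matrix, which grows roughly as n cubed and must be re-evaluated for every candidate value of the autoregressive parameter.

```python
# Hypothetical timing of the log-determinant term log|I - rho*W| in a SAR
# log-likelihood, for dense random weight matrices of increasing size.
import time
import numpy as np

rho = 0.5
for n in (200, 400, 800):
    W = np.random.rand(n, n)
    W /= W.sum(axis=1, keepdims=True)             # row-standardized spatial weights
    t0 = time.perf_counter()
    sign, logdet = np.linalg.slogdet(np.eye(n) - rho * W)
    print(n, "observations:", round(time.perf_counter() - t0, 4), "seconds")
```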

AMac

Spatial Scale Problems and Geostatistical Solutions

Thursday, February 14th, 2013

Atkinson and Tate make a good point. I only wish I could find it. Their extensive use of mathematics is daunting, but a necessary evil for understanding what goes on under the hood of ArcGIS. With no personal experience in the matter, a quick Google search suggested that variograms are closely tied to kriging (the fitted variogram model is what the kriging interpolator uses), and that both require significant input from the user. Correct me if I’m wrong.

GIScience has managed to produce a slew of tools that produce right answers; that is to say, there is only one possible answer. The more complex processes, like the interpolation methods outlined by Atkinson and Tate, reveal that sometimes there can only be a best answer. At that point it becomes the responsibility of the user to justify their reasoning for choosing, say, 10 lags instead of 5, and the analysis becomes case-specific.
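
To make the “10 lags instead of 5” choice concrete, here is a minimal empirical-semivariogram sketch on fabricated data (pure NumPy, no kriging library, and the half-max-distance cutoff is just a common rule of thumb I am assuming): the number of lag bins the user picks changes the curve, and hence the model eventually handed to kriging.

```python
# Hypothetical empirical semivariogram with a user-chosen number of lag bins.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(150, 2))                     # sample locations
z = np.sin(pts[:, 0] / 20) + rng.normal(0, 0.1, 150)         # a smooth field plus noise

d = pdist(pts)                                               # pairwise distances
g = 0.5 * pdist(z[:, None], metric="sqeuclidean")            # semivariance of each pair

max_lag = d.max() / 2                                        # only use the shorter distances
for n_lags in (5, 10):
    bins = np.linspace(0, max_lag, n_lags + 1)
    which = np.digitize(d, bins)
    semivar = [g[which == k].mean() for k in range(1, n_lags + 1)]
    print(n_lags, "lags:", np.round(semivar, 3))
```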

What makes me curious is, is there a right answer? Is it possible to create a set of parameters, possibly for an arbitrary set of scales, that would optimize the up-scaling and kriging process in all fields of use?
The article was written in 2000, so there has been more than a decade for someone to answer the question and implement it in GIScience. As of 2013, there is no right answer, but there is a significant amount of mathematics to back that up.

In an ideal world, if the research field dedicated to data mining and geographic knowledge discovery is successful, there may eventually be no need for interpolation, as it is replaced by an overwhelming wave of high-resolution, universal data sets.

AMac

Scale

Thursday, February 14th, 2013

There is no doubt that scale plays a large role in the way in which data is interpreted. The article by Atkinson and Tate provides a good overview of scales of measurement, scales of spatial variation, and the issues inherent in spatial data. However, if we draw from Kathryn’s seminar about spatial statistics and recognize that large-scale spatial processes affect smaller-scale processes and their patterns, I’m still unclear how rescaling data (top-down or bottom-up) or applying geostatistical techniques can quantify the effects of large-scale processes on smaller-scale ones.

An interesting suggestion that the authors cite from Milne (1991) is that, to understand heterogeneity, one should conduct analysis across a wide range of measurement scales and extract the parameters that remain consistent across changes in scale. Though this is an expensive and time-consuming task, if it could be done it seems like a great way to identify those features, and to focus on the scale-sensitive parameters in order to determine the appropriate scale for analysis. Also, can the parameters that are robust to change tell the researcher something about the study at hand? With the intensification of geospatial data and larger datasets, developing the tools needed to better integrate multi-scale datasets for a more comprehensive evaluation is a mountainous task.  A tough GIScience topic with no easy answer, but it is crucial that we recognize that the scale of measurement we choose, and the changes to data variability once we rescale data, can greatly affect the final results of the analysis.

-tranv

Spatial Data Mining

Thursday, February 14th, 2013

Spatial data mining relies on the geographic attributes of the data to uncover spatial relationships within the dataset, leading to knowledge discovery.  There is no doubt that, when implemented successfully, it has contributed to developing spatial theories and to geographic research. It’s a science!

The authors contend that the increasing contribution of volunteered geographic information, GPS and LBS technology provides new research directions for the field. Though there is an abundance of spatial data, and borrowing from Beth’s theme of critical GIS, we have to be mindful that those who can and do contribute are not representative of the entire population. Bias is inherent in data obtained from these sources because certain groups of people are known to contribute more, certain locales are headlined more often, and there is a large group of individuals who are disenfranchised and completely sidelined because they do not have access to such technologies. The authors concern themselves with developing the right questions and methodologies to solicit the answers, but is the data appropriate to answer the questions being asked?

Though there are several techniques for beginning the spatial data mining process, how does the user decide which technique is appropriate for their analysis? Given that the end user may not be well versed in spatial data mining or in the nuances of the different techniques, different rule sets and classifications will generate different results. Since spatial data mining is a multidisciplinary field, who should ultimately be responsible for teaching the theories and methodologies?

-tranv

Mining for spatial gold

Thursday, February 14th, 2013

Shekhar et al. describe spatial data mining, the process of finding notable patterns in spatial data, and they outline models for doing so, as well as approaches to detecting spatial outliers, mining spatial co-location rules, and locating spatial clusters. The article is mostly informative, and the topic is so central to spatial analysis that it is difficult to separate spatial mining from the rest of GIS.

I find the notion of clustering particularly interesting, since it is perhaps the most visually oriented aspect of spatial mining, yet it is largely open to interpretation and/or dependent on the variability of clustering models. For instance, when we see a distribution of points on a map, we subconsciously begin to see clusters, even if the data is “random.” This type of cognitive clustering is difficult, or even impossible, to model, and it might vary from person to person. The authors list four categories of clustering algorithms (hierarchical, partitional, density-based, and grid-based), distinguished by the order and method of dividing the data. However, the authors fail to note the applications for the various algorithms. If we naively take these to be interchangeable, then the results could differ tremendously, as the sketch below suggests. Moreover, if there are indeed patterns, then there is most likely a driving force behind those patterns. That force, not the clusters themselves, is the most important discovery in spatial mining, and so the modeling must be more stringent in its pursuit of accuracy.
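
Here is a small, hedged illustration of that interchangeability problem, using my own toy points and arbitrary parameter choices: a partitional algorithm (k-means) and a density-based one (DBSCAN) group the very same coordinates quite differently.

```python
# Hypothetical comparison: the same points, two clustering families, different results.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(3)
pts = np.vstack([
    rng.normal([0, 0], 0.3, size=(50, 2)),        # a tight cluster
    rng.normal([5, 5], 0.3, size=(50, 2)),        # another tight cluster
    rng.uniform(-2, 7, size=(20, 2)),             # scattered background points
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pts)
db = DBSCAN(eps=0.5, min_samples=5).fit_predict(pts)   # sparse points get label -1 ("noise")
print("k-means:", np.unique(km, return_counts=True))
print("DBSCAN: ", np.unique(db, return_counts=True))
```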

– JMonterey

Tipping the scale toward “science”

Thursday, February 14th, 2013

Marceau sums up issues pertaining to variability in scale, including scale dependence, scale domains and scale thresholds. At the crux of the article is an illustration of “a shift in paradigm where entities, patterns and processes are considered as intrinsically linked to the particular scale at which they can be distinguished and defined” (Marceau 1999). The need in any science to be wary of the scale at which the given work is conducted or phenomenon observed is absolutely (and relatively) critical. Different phenomena occur at different scales, and significant inaccuracies arise if this is not accounted for.

I have no qualms with most of Marceau’s article. However, I would like to address another little assertion the author makes in her conclusion: the shift in paradigm once more toward a “science of scale.” After our discussion a few weeks ago regarding rethinking GIS as a science, in addition to a tool, this struck me as particularly interesting. In its broadest sense, science is a body of rationally explained and testable knowledge. Understanding scale as a scientific field in this regard is difficult. I have no problem with comprehending and accepting scale as a basic property of science, but separating out scale as its own entity?

That said, the work involved in understanding thresholds and dependence, and the role that varying scale plays in the world, is not trivial. I simply feel that whereas there are laws of physics, for instance, there is, as far as I know, no singular body of accepted knowledge surrounding scale, with the exception that scale is a property of a phenomenon that must be noted and maintained as much as possible.

– JMonterey

Wednesday, February 13th, 2013

In working on my final project, I picked up a copy of “How to Lie With Maps” by Mark Monmonier at the library. I haven’t gotten too far into the book, but its central idea, that maps are always more complex than they look on the surface, provides a useful starting point for the discussion of scale. The article by Atkinson and Tate provides an overview of some of the problems that scale brings up in our work, and proposes some ways that we may work with or around them. The question I would like to pose (since it seems that, on the technical and data-collection side, no large changes will solve the issue of variable scale any time soon) is how we may be accountable in our GIS work, specifically at the representational level, to problems of scaling.

To someone untrained in GIS, or unaccustomed to critical reading, a map is just a map, an abstraction of reality. For this type of viewer (and not only of maps, but I use this example because it is the simplest), how can we be transparent about what the image lacks or what data the image obscures? It is easy to lie with maps, and it is easy to choose an aggregation that is advantageous to those invested in the project, but it is not so easy to make this clear to the uninformed viewer. So I ask, as I always do: Is being accountable to issues of scale in GIS possible? Is it desirable (and if so, when)?

Wyatt

Scaling Issues

Wednesday, February 13th, 2013

Scale is an important issue in most academic investigations, particularly within the framework of investigating natural phenomena. The biggest problem when dealing with these phenomena, especially environmental problems, is that they occur at various levels of scale. In addition, a single phenomenon might have a particular effect at a local scale, but a completely different effect at a regional or global scale. As a result, issues arise as to the best way to conduct an investigation: What scale should I use? Is the phenomenon I am studying multi-scalar? How do I aggregate my results?

Marceau does a great job of identifying some of the key concepts behind scale, as well as the issues that have arisen over the course of its evolution, most specifically within the natural and social sciences. One of the most interesting concepts identified throughout the paper is the modifiable areal unit problem (MAUP), which encompasses both the scale problem and the aggregation problem. Marceau concludes that the effects of the MAUP are starting to become better understood, and that this process is in turn contributing to the emergence of “scale as a science”.

The issues of the MAUP bring to mind a case study I recently reviewed, an investigation of the effects of climate change on the Scandinavian country of Norway. An analysis of various effects (on the economy, biodiversity, health, etc.) was performed at multiple scales (national, regional, local). In their conclusion, the authors noted that at the national scale the country appeared well positioned to adapt to climate change. However, as the analysis moved down to the regional and local scales, localized threats were discovered. This investigation serves to highlight the main issues of the MAUP, and how further development is needed within the “science of scale” in order to more effectively manage data across multiple scales.

-Victor Manuel

Spatial data mining: a discovery or a re-classification of knowledge

Tuesday, February 12th, 2013

Guo and Mennis discuss how the availability of data has increased, making it difficult to extract the useful data; however, I believe this is not just a present-day problem. To clarify, although data in many fields may once have been hard to access, some fields have had an overabundance of data for decades. For example, the earth-related sciences have had a variety of data sets readily available since the 1960s, from maps and cross-sections to aerial photos and digital models. As such, the earth sciences and other spatial fields of study have been data-rich for decades, with vast, high-resolution spatial data sets. This amount of data made finding data a problem even before the use of digital databases and indexing.  In light of these issues, the authors may not have considered earth-science data sets when writing, or may never have really looked at the amount available for the earth sciences, but this is just speculation.

I do have to agree, though, that the creation of data mining techniques has made data easier to use and more accessible to non-experts and experts alike in many fields, even though high-quality spatial data have existed for years. On the topic of fields with pre-existing data: are we then truly discovering new information, or are we just re-classifying data and changing its format to be more accessible, given the problem of data retrieval today? The suggestion of a framework for how data should be manipulated, stored and retrieved would solve many issues with pairing old and new data, and with retrieving the data one is seeking.

C_N_Cycles

How to handle scale?

Tuesday, February 12th, 2013

Any discussion in the initial stages of a GIS project has an episode where people argue about the exact scale at which to carry out the analysis. The paper by Danielle J. Marceau gives a great overview of the various ways in which space and scale are conceived and of how scale affects the results of analysis. However, many things in nature do repeat themselves very regularly with scale. An entire field of mathematics, fractal geometry, deals with objects that are self-similar at different scales: a small set of formulas can define such an object very precisely, and those formulas are all that is needed to reproduce it at any scale.

So, is it accurate to say that many things in geography appear entirely different at different scales? Or do they change gradually with scale? If so, we can probably view these things as continuous functions of scale, and it is possible that we will come up with equations that explain this gradual change.  All we would then require is an equation to describe the process at a particular scale, and another equation to describe how the process changes with scale, and we would be able to reconstruct how the object or phenomenon looks at any required scale.
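
As a hedged illustration of the self-similarity idea (my own toy construction, not from the readings): generate a Sierpinski triangle with the “chaos game” and estimate its box-counting dimension, a single number that summarizes the pattern across scales.

```python
# Hypothetical example: a self-similar object described across scales by one
# parameter, its fractal (box-counting) dimension, expected near log3/log2 ~ 1.585.
import numpy as np

rng = np.random.default_rng(4)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.1, 0.1])
pts = []
for _ in range(50000):
    p = (p + verts[rng.integers(3)]) / 2          # jump halfway toward a random vertex
    pts.append(p)
pts = np.array(pts)

sizes = [1 / 8, 1 / 16, 1 / 32, 1 / 64]           # box sizes at successive scales
counts = [len(set(map(tuple, np.floor(pts / s).astype(int)))) for s in sizes]

# Slope of log(count) against log(1/size) approximates the fractal dimension.
dim = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)[0]
print(round(dim, 3))                              # roughly 1.58 across this range of scales
```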

– Dipto Sarkar

Whither Spatial Statistics?

Friday, February 8th, 2013

After a good half-century of quantitative developments in geography and geographic developments in statistics, Nelson takes stock of the field of spatial statistics by asking eminent spatial statisticians (statistical geographers?) for their take on how the field has developed, on current and future challenges, and on the seminal works that should be on any quantitative geographer’s bookshelf.  She synthesizes the researchers’ responses to get at the broader trends characterizing spatial statistics.

A key shift discussed by Nelson is the state of advances in methodology vis-à-vis data availability and size. As spatial statistics grew in tandem with the Quantitative Revolution in the mid-20th century, geostatistical methods were in many ways ahead of the available data and technology: computers and automated data management technologies were still nascent, limiting the quantity of data that could be analyzed to what could be managed by hand or by using punchcard systems.  Meanwhile, data collection and organization was onerous and typically manual.  Today, we have the opposite problem: we have TONS of processing power to perform complex calculations, and programming languages to implement new methods easily, so much so that our technology is now ahead of most conventional methods of spatial analysis.  Of particular importance is the new problem of how to work with big data, which may provide more comprehensive samples (even data for the entire population!), in a finer temporal resolution and a richer detail than ever before.

Rising up to the challenges presented by big data and stagnant methods will be paramount for the continued relevance of spatial statistics into the future.  However, today’s cohort of geography students may be falling behind the curve in their technical ability to respond to these challenges.  Mathematics and computer science, absolutely crucial to working in advanced spatial statistics, are receiving less and less of a focus in our Geography departments (though this is starting to change with computer science, specifically in relation to GIS curricula).  Indeed, the very core of quantitative methodology in geography has been shaken by the Cultural Turn.  While qualitative approaches are doubtlessly important to a rich understanding of geographical processes, there is a risk that geographers will lose their quantitative toolkit in a policy context where, increasingly, ‘numbers talk’.  Spatial statistics itself may also suffer if geographers are unable to bring their nuanced views of spatial considerations to the table.

When thinking about these issues, I come back to Nelson’s Figure 1, the Haggett view of progress in geography and helical time. It is still an open question whether we are on the cusp of a second Quantitative Revolution in geography, or whether spatial statistics and geographic thought will continue to drift away from each other, with potentially dire consequences.

-FischbobGeo

Why should we think about ontology?

Friday, February 8th, 2013

“It is ironic that ontology is proposed as a mechanism for resolving common semantic frameworks, but a complete understanding and a shared meaning for ontology itself are yet to be achieved”
–Agarwal

Agarwal, a computer scientist, takes on the challenging task of applying the concept of ontology to Geographic Information Science.  After introducing the concept, the author discusses some applications of ontologies in GIScience, stressing that ontologies underpin much of the ‘scientific’ potential of the field.  She then outlines and discusses the considerations and challenges associated with developing and implementing a common ontology for GIScience.

It is clear from the article that the concept of ontology is a complex one, grounded in classical philosophical thought.  It is also apparent that ontologies are of the utmost importance to applications such as artificial intelligence, but why exactly should GIScientists drop what they’re doing and consider ontology?  Agarwal gives some reasons.  First, ontologies require us to come to a consensus on terms in the discipline that may currently be fuzzily or ambiguously defined.  Additionally, ontologies make the underlying assumptions and relationships of a GIS data and/or analysis model more explicit and transparent.  These advantages lend themselves well to the overall goal of data interoperability, the achievement of which will make everyone’s lives easier in the long run.

More fundamentally, Agarwal asserts that “the lack of an adequate underlying theoretical paradigm means that GIScience fails to qualify as a complete scientific discipline.”  Fighting words!  Is the legitimacy of GIScience, or even geography in general, rendered null and void by its lack of a unifying ontology?  Agarwal contends that the conceptual fuzziness and ambiguities of many terms and processes in GIScience and geography, as well as the difficulties associated with the concepts of scale and spatial cognition, not only make ontologies very difficult to develop, but should be considered problematic to the entire geographic discipline.  Problems of scale and other disciplinary ambiguities are typically handled by geographers arbitrarily based on the researcher’s individual research question: do we need a more strictly defined framework?  I would argue that it is not possible to get rid of geography’s messiness; that it is the complex subjectivities of geography that are part of what distinguishes it as a discipline and it is not really possible to separate these subjectivities from GIScience.  If anything, the challenge of applying ontology to GIScience should be a humble reminder that making sweeping and universal conclusions is problematic, but it shouldn’t stop us in our tracks.

-FischbobGeo

No, Anselin and Getis (1992) are not out-of-date!

Friday, February 8th, 2013

I disagree with the statement that Anselin and Getis’s article is completely irrelevant today. I think that the point they make about being careful with the toolbox is even more crucial today. Yes, tremendous steps have been made in terms of tools and methods in spatial stats, but the questions behind the computerized processes still need to be posed carefully. We cannot let ourselves be led by the GIS tools. The authors give great examples of wrong maps being made because of misinterpretation of data. We can do statistics blindly with any data, and we will have results at the end no matter what. However, there is no guarantee that the results will mean anything. The analyses have to be well supported to give meaning and interpretation to the results. I will end by quoting the authors: “the technology should be led by theoretical and methodological developments in the field itself.”

S_Ram

The mountain doesn’t just get in the way

Friday, February 8th, 2013

In a largely philosophical discussion of ontology and perceptions of existence, Smith and Mark drive at some of the underlying and fundamental assumptions of cognition and geography. With the framing question “Do mountains exist?” (also the article’s title), the authors tear apart understandings of existence (boundedness, independence, universal acceptance) and conclude that how we approach that simple question lies at the base of how we perceive, and therefore visualize, our environment.

This article is a fairly fascinating discussion that lends a psychology, as well as a philosophy, to GIS, a field that is largely empirical and filled with concepts we take for granted. For instance, the authors write, “Maps…rarely if ever show the boundaries of mountains at all…[capturing] an important feature of mountains…namely that they are objects whose boundaries are marked by gradedness of vagueness” (Smith et al. 2002). For something to exist, does it have to be independent, bounded, and universally accepted as such? We know that there is a mountain in a given place, but can we easily demarcate its boundaries? If not, can we truly say that the mountain exists or that it is a feature of the surrounding landscape?

The truth is that in an empirical analysis, i.e., for policy makers, these notions matter immensely, but from a geographic and informal perspective, we can understand the mountain as an object in a larger system. Thus, the mountain can exist, but its exact location does not matter and perhaps should not be of primary concern in a visualization of the landscape.

– JMonterey