Curtin 2007: Network Analysis in GIS

November 11th, 2017

Network analysis is very useful for showing relationships between objects/agents/people and does not require some of the more formal geographic foundations. The result is the formation and growth of informal and natural linkages into complex systems that can model how things are connected to one another. It essentially provides an alternative to a geographic datum for locating points in space, by referencing their relationships to other points. A good example is social media networks: the connections that individuals make online form a global network of information about people and their relationships with each other.

An interesting topic highlighted in this article is the contrast between topological and non-topological data models. This distinction is interesting for me as a geography student since it seems ridiculous to exclude topology when thinking about networks. The paper makes a similar statement, explaining that these models were effectively useless since they are simply points and lines with no substantial information available for analysis. I would have appreciated a bit more explanation of non-topological data models, such as an example of how one might be used and why it might be advantageous over a topological model in some applications.
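To make the distinction concrete for myself, here is a minimal sketch (the street segments and coordinates are made up, not taken from the article) contrasting a non-topological “spaghetti” representation, where each feature is just a coordinate list, with a topological one, where shared nodes and explicit edges make connectivity directly queryable:

```python
# Hypothetical example: two intersecting streets stored two different ways.

# Non-topological ("spaghetti") model: each feature is just a list of
# coordinates; the shared intersection is not recorded anywhere.
spaghetti = {
    "Main St": [(0, 0), (1, 0), (2, 0)],
    "Oak Ave": [(1, -1), (1, 0), (1, 1)],
}

# Topological model: shared nodes plus edges that reference them,
# so connectivity is explicit and can be analysed.
nodes = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (1, -1), "E": (1, 1)}
edges = [("A", "B"), ("B", "C"), ("D", "B"), ("B", "E")]

def connected_to(node, edge_list):
    """Return the nodes directly reachable from `node`."""
    out = set()
    for u, v in edge_list:
        if u == node:
            out.add(v)
        elif v == node:
            out.add(u)
    return out

print(connected_to("B", edges))  # e.g. {'A', 'C', 'D', 'E'}
```

Only the second representation lets you ask "what is connected to B?" without re-deriving the intersections from raw geometry, which is presumably why purely non-topological models are described as offering so little for network analysis.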

The article makes one particularly large claim: Network GIS is the only sub-discipline to have redefined the spatial reference system on which locations are specified. I'm not going to agree or disagree with this statement, but I think the paper could have done a better job of supporting this argument and contrasting it against other potential sub-disciplines.

Thoughts on Goodchild (2012)

November 11th, 2017

Goodchild does a thorough job assessing the benefits and hindrances of his three methods for quality assurance of VGI. His first two, the crowd-sourcing approach and the social approach, he evaluates in comparison to Wikipedia contributions. Goodchild failed to specify a few important details of the social approach. Ideally Wikipedia contributions are made by users who have specific knowledge of a subject. User profiles on Wikipedia list a user’s contributions/edits, as well as an optional description of the user’s background and interests (and accolades if they are a frequent or well-regarded contributor). An OSM user profile could similarly denote their [physical] area of expertise, and also register regions where the user has made the most contributions/edits, giving them more “credibility” for other related contributions.

An important aspect that Goodchild failed to mention regarding the crowd-sourcing approach is the barrier to editing OSM features. While Linus’ Law can certainly apply for geographic data, someone who sees an error in OSM would need to be a registered and knowledgeable user to fix the error. In Wikipedia, an “Edit” button is constantly visible and one need not register to make an edit. Legitimate Wikipedia contributions must also be accompanied by a citation of an outside source, an important facet that geographic information often lacks.

The geographic approach to VGI quality assurance requires a set of “rules.” Goodchild is concerned with the ability of these rules to distinguish between a real and imagined landscape, giving an example based on the characteristics of physical features such as coastlines, river systems, and settlement location. Satellite imagery has provided the basis of much of OSM’s physical geographic features. Quality assurance is more often concerned with the name and location of man-made features. A set of rules for man-made features could be more easily determined through a large-scale analysis of similarly tagged features and their relationship to their surroundings. I.e. a restaurant located in a park away from a street might be flagged as “suspicious” since its surroundings do not match the surroundings of other “restaurant” features.
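As a rough illustration of the kind of rule I have in mind (the features, tags, and distance threshold below are invented, not from Goodchild and Li), a “suspicious” restaurant could simply be one with no street feature within some radius of it:

```python
import math

# Hypothetical OSM-style features: (tag, x, y) in arbitrary map units.
features = [
    ("restaurant", 10.0, 12.0),
    ("street",     10.5, 12.2),
    ("restaurant", 55.0, 40.0),   # isolated, in the middle of a park
    ("park",       54.0, 41.0),
]

def distance(a, b):
    return math.hypot(a[1] - b[1], a[2] - b[2])

def flag_suspicious(features, tag="restaurant", expected_context="street",
                    radius=5.0):
    """Flag features of `tag` that have no `expected_context` feature nearby."""
    flagged = []
    for f in features:
        if f[0] != tag:
            continue
        has_context = any(
            g[0] == expected_context and distance(f, g) <= radius
            for g in features if g is not f
        )
        if not has_context:
            flagged.append(f)
    return flagged

print(flag_suspicious(features))  # [('restaurant', 55.0, 40.0)]
```

A real system would learn the expected context from a large sample of similarly tagged features rather than hard-coding it, but the basic flagging logic would look something like this.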

Volunteered Geographic Information and Crowdsourcing Disaster Relief: A Case Study of the Haitian Earthquake, Zook et al. (2010)

November 11th, 2017

This article by Zook et al. (2010) talks about VGI specifically in the context of the 2010 earthquake in Haiti, but more broadly discusses many of the issues presented in Goodchild and Li (2012) regarding the accuracy and validity of VGI. I think Zook et al. (2010) do a good job of considering many aspects of VGI, including issues in data licensing and compatibility, as well as the exclusive nature of VGI, which in many cases is restricted to people with the technical skills to participate, and the fact that “there will always be people and communities that are left off the map” (29). While reading that line I wondered: even though VGI is not necessarily accurate, and even though some people will be completely excluded from the VGI for a myriad of reasons (no access to internet or mobile platforms, illiteracy, distance from centres of help, etc.), is it not worth trying? There is a level of error and inaccuracy in any projected geographic information, but that does not stop us from using GISystems.

Moreover, while reading this I thought back to the Johnson, Ricker and Harrison (2017) article I shared with the class, where many of the same issues in accuracy, licensing and intention are presented. I wondered if, despite these unresolved issues, UAVs do not present an opportunity to collect objective, real-time data in instances of disaster mitigation and relief? Because UAVs were used in recent instances of disaster relief, I wonder how the discussion has shifted to include some of the particular issues that arise from their use.

Network analysis in GIS (Curtin, 2007)

November 10th, 2017

I found it very interesting how Curtin (2007) points out that network analysis is the only subfield of GISciences that has redefined a spatial reference system. Linear referencing, or using the network itself as a reference, is so intuitive that I had never thought of it as an alternative method of spatial referencing. I realize that standardized spatial referencing is something that I take for granted and alternative methods may be an interesting direction for future research.

This statement can be readily debated, but in my mind, network analysis is a field within GISciences that perhaps has the most tangible impact on our daily lives, and it can be applied to the most diverse types of phenomena. The author highlights routing as one of the most fundamental operations in network analysis, and I couldn’t imagine our society functioning without it. Routing is particularly relevant in urban areas where efficient movement from point A to point B across complex road systems is essential for the transportation of people and goods.

Shortest path routing may be the most basic implementation, but I am curious to understand how other factors can be incorporated into routing algorithms to enhance efficiency. The author indicates that “many parameters can be set in order to define more complex versions of shortest path problems”. In urban areas, for example, how are factors such as traffic, road speed limits, and road condition integrated to provide better routing options?
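One simple possibility (my own sketch, not something Curtin specifies) is to fold those factors into the cost assigned to each edge before any shortest-path algorithm is run, for instance by converting length and speed limit into a free-flow travel time and inflating it for congestion or poor surfaces:

```python
def edge_cost(length_m, speed_limit_kmh, traffic_factor=1.0,
              condition_penalty=0.0):
    """
    Hypothetical cost function: free-flow travel time in seconds,
    inflated by current traffic and road condition.
      traffic_factor: 1.0 = free flow, 2.0 = twice as slow, etc.
      condition_penalty: extra seconds added for poor surfaces, closures, etc.
    """
    free_flow_s = length_m / (speed_limit_kmh * 1000 / 3600)
    return free_flow_s * traffic_factor + condition_penalty

# A congested arterial versus a longer but free-flowing alternative:
arterial = edge_cost(length_m=800, speed_limit_kmh=50, traffic_factor=2.5)
detour   = edge_cost(length_m=1500, speed_limit_kmh=60, traffic_factor=1.0)
print(arterial, detour)  # the shortest-path algorithm would now prefer the detour
```

With edge costs defined this way, the routing algorithm itself stays the same; only the definition of “shortest” changes.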

In reading this article, I was reminded of a previous article that we read on spatial social networks (Radil et al., 2009). Both of these articles highlight the interesting role of space in network analysis. Networks are fundamentally spatial due to their graphical basis, but they can also be used to represent explicitly spatial geographic networks.

Curtin (2007)

November 10th, 2017

As with many of the topics covered in class, though I have used network analysis, I never read much background on the subject, because I mostly used it as a tool in various GISystems applications. For instance, I had never thought about the origin of the shapefile, or about its advantages and disadvantages beyond the fact that I use shapefiles for some things and not for others. Once again, this shows the shortcomings of using GIS strictly as a tool, and some of the important background and concepts that are lost when it is used in this way.

One thing that particularly stood out in this article by Curtin (2007) was the discussion of the Travelling Salesman Problem (TSP): how its solutions are heuristic, and how the abstraction away from “true” solutions is not properly or completely understood. To me, this links back to what I feel I am getting out of this course, which is a deeper understanding of the background, importance, and shortcomings of various GIScience concepts that is truly lacking in other GIS courses I have taken. As Curtin (2007) mentions, network analysis is now mostly encountered in route mapping like MapQuest (once upon a time) and Google Maps, without most people having any background knowledge of how those routes are computed or the algorithms used. This is something that the author touches on briefly but doesn’t explore fully, and something I feel is very important in the broadening use of GIScience in everyday life.
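For a sense of what “heuristic” means here, a classic shortcut is the nearest-neighbour rule sketched below (a toy example of my own, not from Curtin): it produces a tour quickly, but with no guarantee of finding the optimal one.

```python
import math

def nearest_neighbour_tour(points, start=0):
    """
    A simple heuristic for the Travelling Salesman Problem: always visit the
    closest unvisited point next. Fast, but it offers no guarantee of finding
    the true optimum, which is exactly why TSP solutions are called heuristic.
    """
    unvisited = set(range(len(points)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

stops = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]   # made-up delivery stops
print(nearest_neighbour_tour(stops))               # e.g. [0, 4, 2, 1, 3]
```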

On Dunn & Newton (1992) and early 1990s Network Analysis

November 9th, 2017

Dunn & Newton’s article “Optimal Routes in GIS and Emergency Planning Applications” (1992) heavily discusses the mathematics behind the Dijkstra algorithm and its spin-off, the “out-of-kilter” algorithm, as well as the latter’s use in early-’90s GISoftware on early-’90s computers.

The “out-of-kilter” algorithm diverts flow along multiple paths to increase the flow from one node to another, as with an increase in traffic during an emergency evacuation. I would have liked some more information from this article on the possible uses of network analysis for everyday people, but I concede this could have been difficult, as personal GISystem use did not really exist then like it does today. The network analysis that Dunn & Newton discuss uses set points on available road networks for its running example, but they could have considered a world in which network analysis relies on (unscrambled, post-2000s) GPS and constant refreshing. They briefly mention that some emergency vehicles have on-board navigation systems, which implies that they had the capability to discuss GPS and network analysis further, but did the inaccuracy of GPS at the time affect the emergency vehicles? Also, without these systems, a user would have to start and end at set points and be limited to analyzing within a specific area that 1) their computer could hold and 2) their data was collected on, and on-the-fly adjustments (commonplace now) could not occur without extensive coordination.
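For readers who, like me, wanted a clearer picture of the shortest-path computation underlying all of this, here is a bare-bones sketch of Dijkstra’s algorithm on an invented road network (the intersections and travel times in minutes are made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from `source` to every node in a weighted graph.
    `graph` maps node -> list of (neighbour, edge_weight) pairs."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        for neighbour, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

# Made-up road network: travel times in minutes between intersections.
roads = {
    "depot":    [("A", 4), ("B", 2)],
    "A":        [("hospital", 5)],
    "B":        [("A", 1), ("hospital", 8)],
    "hospital": [],
}
print(dijkstra(roads, "depot"))  # {'depot': 0, 'B': 2, 'A': 3, 'hospital': 8}
```

The out-of-kilter algorithm extends this kind of reasoning from a single best path to flows spread across several paths, which is what makes it attractive for evacuation scenarios.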

I am looking forward to learning more about current uses & future advancements, especially now that GISoftware isn’t just reserved for highly specialized people as it was in 1992, and that computers are faster (and that cloud computing, (more) accurate GPS, and mobile devices exist)!

On Goodchild & Li (2012) and Validation in VGI

November 9th, 2017

I thought that this article, “Assuring the quality of volunteered geographic information,” was super interesting. The overview of the evolution of “uncertainty” in GIScience was interesting, and a welcome segue into the three approaches to quality assurance (crowd-sourcing, social, and geographic).

Exploring the social approach further, it stipulates that there will always be a hierarchy, even within a seemingly broad/open structure. Goodchild & Li discuss briefly that there is often a small number of users who input information and an even smaller number of people who verify that information, in addition to the large number of constant users.

For future additions to OSM or other crowd-sourced sites, it would be super interesting to show who’s actually editing/adding, and make that info easily available and present on the screen. Currently in OSM, one can see usernames of the most recent editors of an area; with some more digging, one can find all the users that have edited in an area; and with even more digging, one can look at these editors’ bios or frequently mapped places and try to piece together info about them that way. I guess it would be more a question of privacy (especially in areas where open data isn’t really encouraged, or where there aren’t a lot of editors other than bots, or both), but hopefully this sort of post-positivist change comes. I recently learned that most of OSM’s most active users and validators (worldwide) are white North American males between the ages of 18 and 40, which unfortunately is not unbelievable, and raises further questions about what information is being mapped and what’s being left out. Some info isn’t mapped because the mappers are not interested in it (for example, what a 25-year-old guy would want to see on a map may not even overlap with what a 65-year-old woman would want to see on a map; this gets even more tangled when also considering gender, geographic, or ethnic/“race” dimensions). Showing this information, or at least making it less difficult to find or access without lots of time and adequate sleuthing skills, might compel lay users to be more interested in where exactly their information is coming from.

Thoughts on assuring the quality of VGI (Goodchild and Li, 2012)

November 9th, 2017

I think that the most important thing to note from Goodchild and Li’s article on assuring the quality of VGI is that their proposed approaches are only applicable to VGI that is “objective and replicable.” This is to say that they are discussing VGI which attempts to capture the truth of a particular geographic phenomenon (such as contributions to OpenStreetMap), rather than VGI which references an individual’s particular experience in geographic space (such as a volunteered review of a tourist location). I don’t intend for this post to devolve into a discussion on the nature of scientific “truth” and “fact”, but it is definitely interesting to think about the extent to which any type of VGI (and any type of geographic fact, I suppose) can truly be objective. All volunteered information is subject to the bias of its contributor.

I would have liked for this article to also address the challenges in defining “accuracy” for VGI that is purely subjective, rather than fact-based. When we are talking about things like a restaurant review on Yelp or a woman reporting the location of an incidence of sexual assault, what does “accuracy” mean? A restaurant review might be inaccurate in the sense that it could be fabricated by a reviewer who never actually went there, but this is nearly impossible to identify. Perhaps it is the intent of the contributor that is the most important in examples like this (i.e., does the reviewer have malicious intent against the particular restaurant?), but underlying intent is still incredibly opaque. Perhaps this is a topic for further class discussion…

Ester et al 1997 – Spatial data mining

November 5th, 2017

The broad goal of knowledge discovery in databases (KDD) is, fittingly, to construct knowledge from large spatial database systems (SDBS). This goal is achieved via spatial data mining methods (algorithms) which are used to automate KDD tasks (e.g. detection of classes, dependencies, anomalies). Without a fuller understanding of the field at present, it is hard to judge how comprehensive an approach is outlined in Ester et al.’s (1997) paper.

The authors underline the distinguishing characteristics of spatial databases; namely, the assumption that an object’s attributes may be influenced by the attributes of its neighbours (Tobler). These assumptions motivate the development of techniques and algorithms which automate the identification and extraction of spatial relationships. For instance, a simple classification task can be executed by algorithms that group objects based on the value of their attributes. The authors present a spatial extension of this approach, by incorporating not only an object’s attributes, but also those of its neighbours, allowing for greater insight into spatially classified sets of objects within a SDBS.
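A rough sketch of what that spatial extension might look like in practice (the coordinates, attribute values, and radius below are invented for illustration, and this is far simpler than the neighbourhood-graph machinery in the paper): each object’s feature vector is augmented with an aggregate of its neighbours’ attributes before a classifier, such as a decision tree, is applied.

```python
from statistics import mean

# Hypothetical objects: (id, x, y, own_attribute)
objects = [
    (1, 0.0, 0.0, 10.0),
    (2, 1.0, 0.0, 12.0),
    (3, 0.0, 1.0, 11.0),
    (4, 9.0, 9.0, 50.0),
    (5, 9.5, 9.0, 48.0),
]

def neighbours(obj, objs, radius=2.0):
    """All other objects within `radius` of `obj` (a crude neighbourhood graph)."""
    return [o for o in objs
            if o is not obj
            and (o[1] - obj[1]) ** 2 + (o[2] - obj[2]) ** 2 <= radius ** 2]

# Spatially extended feature vector: the object's own attribute plus the mean
# attribute of its neighbours, which a classifier could then split on.
extended = []
for obj in objects:
    nbrs = neighbours(obj, objects)
    nbr_mean = mean(o[3] for o in nbrs) if nbrs else obj[3]
    extended.append((obj[0], obj[3], nbr_mean))

print(extended)
```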

Contrasting with last week’s topic, the approach to knowledge extraction here emphasises automation. The goal is to construct basic rules that can efficiently manipulate and evaluate large datasets to detect meaningful, previously unknown information. Certainly, these techniques have been invaluable for pre-processing, transforming, mining and analysing large databases. In light of recent advances, it would be interesting to revisit these techniques to assess whether new spatial data mining methods are more effective for guessing or learning patterns that may be interpreted as meaningful, and to consider the theoretical limits of these approaches (if they exist).
-slumley

Spatial Data Mining: A Database Approach, Ester et al. (1997)

November 5th, 2017

Ester et al. (1997) propose basic operations used for knowledge discovery in databases (KDD) for spatial database systems. They do so with an emphasis on the utility of neighbourhood graphs and neighbourhood indices for KDD. When the programming language began to bleed into the article it was clear that maybe some of the finer points would be lost on me. I was reminded of the discussion of whether or not it’s critical that every concept in GIScience is accessible to every GIS user. I’m convinced that in order for GIS users to practice critical reflexivity in their use of queries within a database, they ultimately need to understand the fundamentals of the operations they utilize. After making it through the article, I can say that Ester et al. could explain these principles to a broader audience reasonably well. I’ll have to echo the sentiments of previous posts that it would have been interesting to see more discussion of this, but perhaps it’s beyond the scope of this article.

Maybe it’s because we’re now into our 9th week of GIScience discourse, but I felt that the authors did a particularly good job of situating spatial data mining–which, despite its name, might appear more closely related to the field of computer science at a glance–within the realm of GIScience. Tobler’s Law even makes an appearance on page 4! It’s an interesting thought that GIScientists might have more to contribute to computation beyond the handling of explicitly spatial data. For instance, Ester et al. point to spatial concept hierarchies that can be applied to both spatial and non-spatial attributes. You can imagine how spatial association rules conceived by spatial scientists might then lend themselves to the handling of non-spatial data as well.
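To ground the idea of a spatial association rule for myself, here is a toy sketch (all features and the distance threshold are invented, and it sidesteps the neighbourhood indices Ester et al. describe) of computing the confidence of a rule like “a school is close to a park”:

```python
import math

# Hypothetical tagged features: (tag, x, y)
features = [
    ("school", 0, 0), ("park", 0.4, 0.3),
    ("school", 5, 5), ("park", 5.2, 5.1),
    ("school", 9, 0),            # no park nearby
    ("park", 3, 8),
]

def close_to(a, b, d=1.0):
    """Neighbourhood predicate: are two features within distance d?"""
    return math.hypot(a[1] - b[1], a[2] - b[2]) <= d

def rule_confidence(features, antecedent="school", consequent="park"):
    """Confidence of the spatial association rule
    'antecedent is close_to some consequent'."""
    ants = [f for f in features if f[0] == antecedent]
    hits = sum(
        any(g[0] == consequent and close_to(f, g) for g in features)
        for f in ants
    )
    return hits / len(ants) if ants else 0.0

print(rule_confidence(features))  # 2 of 3 schools -> confidence ~0.67
```

Nothing in the rule machinery itself cares that `close_to` is geometric; swap in any other predicate between objects and the same support/confidence logic applies to non-spatial data.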

On Ester et al (1997)’s Spatial Data Mining in Databases

November 5th, 2017

In their article “Spatial Data Mining: A Database Approach” (1997), Ester et al. outlined the possibility of knowledge discovery in databases (KDD) using spatial databases, utilizing four algorithms (spatial association, clustering, trends, and classification). Unfortunately, the algorithms are not entirely connected to how one mines spatial information from databases, and the algorithms introduced don’t seem incredibly groundbreaking 20 years later. This paper seemed very dated, particularly because I feel like most of these algorithms are now tools in ESRI’s ArcGIS and the frameworks behind GeoDa, and because the processing issues that seemed to plague the researchers in the late 1990s are not issues (on the same scale) today.

Also, I found it strange that the paper adopted an incredibly positivist approach and did not mention anything about how these tools could be applied in real life. The authors acknowledged this as a point of further research in the conclusion, but weighted it less heavily than the importance of speeding up processing times in ’90s computing. In their introduction, the authors discuss their rationale for using nodes, edges, and quantified relationships drawn from Central Place Theory (CPT). However, they do not mention that CPT, and theorizing the world as nodes and edges more generally, is an incredibly detached idea that 1) cannot describe all places, 2) does not recognize that human behaviour is inherently messy and not easily predictable by mathematical models, and 3) only identifies trends and cannot be used to actually explain things, just to identify deviations from the mathematical model. Therefore, not everything can be captured by a relationship that a researcher specifies in order to mine data using an inherently flawed model, and there will be inaccuracies. It will be interesting to learn if/how spatial data miners have adapted to this and (hopefully) humanized these processes since 1997.

Database approach to spatial data mining (Ester et al.)

November 5th, 2017

Spatial data mining consists of the use of database information, manipulated through algorithms, to process spatial information as effectively as possible. It is able to use available information to infer other pieces of information through dependency between variables. Thus, it can relate to aspects of spatial privacy by using personal information voluntarily provided to determine additional information about people (or about areas, in the case of this paper) that might otherwise not be divulged.

To be upfront, spatial data mining is a topic that I was rather intimidated to look into. Since I have only a basic understanding of computer science, I was confused by the majority of the more technical information presented in the paper. However, I thought the paper did a good job of conveying how the concepts are used and why they are applied; I understood the logic behind the algorithms and how information is mined. Overall, I believe the paper caters to a wide audience thanks to its combination of technical and conceptual information.

The article explicitly covers the basics of spatial data mining: the basic operations and concepts used in the area of study. This raises the question, “what are the complex and advanced methods of spatial data mining?” Since this paper was written in 1997, the field has probably made considerable advances since then, and I wonder what new methods might be on the horizon. For the purpose of this article, however, the basics were very well introduced, allowing a range of readers to learn about the field of spatial data mining through knowledge discovery in databases.

Shekhar et al – Spatial Data Mining

November 5th, 2017

This paper presented the primary tools with which to perform data mining on a set of data. The tangible results found through data mining were not new to me; I believe this is something that many budding GI scientists engage with at the beginning of their education. I remember engaging with learning and training data in other classes, typically in the form of geolocating.

I found that the hidden data sets emerging from these analyses offer a very interesting insight into our epistemology of data sets. With learning and training data, it seems that we’re engaging with a very basic form of machine learning. I am intrigued by the opportunities this presents with more open forms of data. I can imagine that with more open data sources, the machine learning aspects could learn from other data sets and gain more insight within hidden data. I wonder if our treatment of data and rights will come into discussion in the future. I’d be interested to know in what forums these conversations are taking place.

As a whole, all of these techniques seem to provide a very valuable set of tools. Extrapolating meaning from disparate forms of data, such as by clustering, determining outliers, and figuring out co-location rules, can be extremely insightful for a lot of disciplines in the social and physical sciences. Taking a rudimentary psychological lens, I find it interesting how many of these techniques assume a behaviouralist understanding of spatial processes, in which agents interact in rational ways with each other as part of a greater whole. The fact that they take interest in outliers seems to factor in the irrationality of some processes. I would also be interested in knowing where the research on that is headed.

Spatial Data Mining (Shekhar et al)

November 4th, 2017

I found this paper particularly tough to get into, as spatial data mining veers more towards a tool used in G.I.S. than any of the topics we have covered thus far, in my opinion. Although the tweaking of methods like SAR and MRF models to meet the issues regular data mining ran into (i.e. ignoring spatial auto-correlation and spatial heterogeneity) is a sign of tool building, I still find this topic in GIScience to be very technical and definitely in the tool realm of G.I.S. Furthermore, many of the clustering techniques mentioned (i.e. K-means) have been around for years now, and have been accepted as the standard in most regular G.I.S. projects, making me ask the question “what makes spatial data mining so special?”. Is it simply the size of the data being mined, and the unsupervised aspect of it? As this paper cites papers from 1999 & 2000 on spatial data mining’s ability to work with large amounts of data back then, I wonder how well spatial data mining works with big data, and how the validation process and statistical analysis of this would work today.

Although this paper focuses on the uses of spatial data mining with raster datasets, I wondered what would happen if this technique were used on vector data possibly including personal information (e.g. age or phone number) tied to space in order to look for ‘hidden patterns’; this would definitely be a violation of privacy.

All in all, although this field seems quite complex, it also seems very simple in that it embodies all of the basic algorithms used in traditional GIS projects, though on a larger scale.

-MercatorGator


Thoughts on Shekhar et al. (2003)

November 4th, 2017

Shekhar et al. (2003) outline various techniques in spatial data mining which can be used to extract patterns from spatial datasets. In discussing techniques for modeling spatial dependency, detecting spatial outliers, identifying spatial colocation, and determining spatial clustering, Shekhar et al. effectively demonstrate the relevant challenges and considerations when working with a spatial dataset. Due to factors such as spatial dependency and spatial heterogeneity, “general purpose” data mining techniques will perform poorly on spatial datasets and new algorithms must be considered (Shekhar et al., 2003).

Shekhar et al. define a spatial outlier as a “spatially referenced object whose non-spatial attribute values differ significantly from those of other spatially referenced objects in its spatial neighbourhood” (p 8). I have not previously been exposed to research on spatial outliers, but I was surprised to read such a definition in which an outlier is determined by its non-spatial attribute. I am left wondering if it is possible to invert Shekhar’s definition and define spatial outliers in terms of differences in spatial attribute values among objects with consistent non-spatial attribute values. For example, when talking about the locations of bird nests, could we define a spatial outlier as a nest which is significantly far from a cluster of other nests?
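As a way of checking my own understanding of that definition, here is a toy sketch (the coordinates, values, radius, and threshold are invented) that flags an object whose non-spatial value differs strongly from the average of its spatial neighbours; it is a crude stand-in for the statistical tests Shekhar et al. actually describe.

```python
from statistics import mean

# Hypothetical spatially referenced objects: (id, x, y, value)
objects = [
    (1, 0, 0, 10), (2, 1, 0, 11), (3, 0, 1, 9),
    (4, 1, 1, 45),                      # odd value for this neighbourhood
    (5, 1, 2, 10), (6, 2, 1, 12),
]

def neighbours(obj, objs, radius=1.5):
    return [o for o in objs if o is not obj
            and (o[1] - obj[1]) ** 2 + (o[2] - obj[2]) ** 2 <= radius ** 2]

def spatial_outliers(objs, threshold=20.0):
    """Flag objects whose non-spatial value differs from the mean of their
    neighbours' values by more than `threshold`."""
    flagged = []
    for obj in objs:
        nbrs = neighbours(obj, objs)
        if not nbrs:
            continue
        diff = abs(obj[3] - mean(o[3] for o in nbrs))
        if diff > threshold:
            flagged.append((obj[0], diff))
    return flagged

print(spatial_outliers(objects))  # flags object 4
```

The inverted definition I have in mind (a nest located far from the cluster of other nests) would instead compare spatial attributes, e.g. distance to the nearest neighbour, among objects sharing the same non-spatial attribute.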

As this article was broadly speaking about knowledge discovery from spatial datasets, I was reminded of last week’s lecture on geovisualization. While the objective approach of spatial data mining contrasts the exploratory geovisualization process, I am curious how the two approaches can effectively be combined to drive a more holistic process of knowledge discovery from spatial data.

Spatial Data Mining – Ester, Kriegel, Sander (1997)

November 3rd, 2017

Tobler’s Law of Geography is central to spatial data mining. The purpose of knowledge discovery in databases is to identify clusters of similar attributes and find links with the distribution of other attributes in the same areas. Using decision tree algorithms, spatial database systems and their associated neighborhood graphs can be classified, and rules can be derived from the results. The four generic tasks introduced in the beginning of the article are not addressed later on. Identifying deviation from an expected pattern is presented as central to KDD as well, but an algorithm for this doesn’t appear to be discussed.

The article remains strictly concentrated on the implications of KDD algorithms on spatial database systems and computer systems. Little relation is made to non-spatial database systems, even though many of the algorithms presented are based on non-spatial decision-tree algorithms.

I’m sure that patterns can be detected in human attributes of nodes in a social network. Since distance along an edge is so crucial to spatial classification, do non-physical edges quantified in other ways perform similarly in the creation of human “neighborhoods”? When patterns are deviated from, can conclusions be drawn as easily about social networks?

“Neighborhood indices” are important sources of knowledge that can drastically reduce the time of a database query. Creating spatial indices requires some knowledge of a spatial hierarchy. Spatial hierarchies are clear-cut in political representations of geography. As pointed out in the article, the influence of centers (i.e. cities) is often not restricted to political demarcations. These algorithmically created neighborhood indices may present interesting results to urban planners and geographers, who often have difficulty delineating the extent of influence of cities beyond their municipal borders.
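As a very rough illustration of why such an index speeds up neighbourhood queries (the points and cell size here are invented, and real SDBS indices such as R-trees are considerably more sophisticated), bucketing points by grid cell means a query only inspects nearby cells rather than scanning the whole table:

```python
from collections import defaultdict

# Hypothetical point features: (id, x, y)
points = [(1, 0.2, 0.3), (2, 0.9, 0.1), (3, 5.4, 5.2), (4, 5.1, 4.8)]

CELL = 1.0   # grid cell size used for the index

def build_index(pts, cell=CELL):
    """A crude 'neighbourhood index': bucket points by grid cell."""
    index = defaultdict(list)
    for p in pts:
        index[(int(p[1] // cell), int(p[2] // cell))].append(p)
    return index

def nearby(index, x, y, cell=CELL):
    """Candidate neighbours: points in the query cell and the 8 cells around it."""
    cx, cy = int(x // cell), int(y // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(index.get((cx + dx, cy + dy), []))
    return out

idx = build_index(points)
print(nearby(idx, 5.0, 5.0))   # only the two points near (5, 5)
```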


Spatial Data Mining: Shekhar, Zhang, Huang and Vatsavai (2003)

November 3rd, 2017

The article by Shekhar, Zhang, Huang and Vatsavai (2003) begins with a clear explanation of the differences between spatial and non-spatial data mining, with some interesting examples. It would have been useful to include some of the information from last week’s geoviz article about the prevalence of spatial information in digital data (~80%) for context, especially given the link between geoviz and data mining made at the end of the article. The article then goes on to list different statistical phenomena and methods, with clear examples, which was helpful for context and kept the text engaging.

The section I found most interesting, and which I think Allen will focus on during his research, is clustering. One thing that was not mentioned in the article, and which I wonder about, is the role of scale in spatial clustering, especially with large data sets. If you’re looking for spatial clusters, won’t scale play a big role in determining the clusters? I.e., something might seem like a small cluster, but at a smaller scale, it is part of an even larger cluster. Using Allen’s research project of taxi ridership in NYC as an example, I would imagine that certain areas of Manhattan will have high instances of taxi ridership, but at a smaller scale, Manhattan as a whole would be an area of taxi ridership clustering. I wonder how the choices of scale and data granularity in analysis lead to different results, and whether it is useful to run analysis at different spatial scales.
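A quick way to see this scale effect (a sketch of my own, assuming scikit-learn and NumPy are available, with made-up pickup points rather than real NYC data) is to run a density-based method like DBSCAN with two different neighbourhood radii: hotspots that are distinct at a fine scale merge into one cluster at a coarser one.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two tight "pickup hotspots" that are themselves close together,
# plus some scattered background points.
hotspot_a = rng.normal(loc=(0, 0), scale=0.1, size=(50, 2))
hotspot_b = rng.normal(loc=(1, 0), scale=0.1, size=(50, 2))
noise = rng.uniform(low=-5, high=5, size=(20, 2))
points = np.vstack([hotspot_a, hotspot_b, noise])

fine = DBSCAN(eps=0.3, min_samples=5).fit_predict(points)
coarse = DBSCAN(eps=1.5, min_samples=5).fit_predict(points)

# At the fine scale the hotspots come out as separate clusters; at the
# coarse scale they merge into one larger cluster (label -1 is noise).
print("fine-scale clusters:", len(set(fine)) - (1 if -1 in fine else 0))
print("coarse-scale clusters:", len(set(coarse)) - (1 if -1 in coarse else 0))
```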


Thoughts on Spatial Data Mining Chapter (Shekhar et al.)

November 2nd, 2017

This chapter provided a review of several spatial data mining techniques, example datasets, and how equations can be adapted to deal specifically with spatial information. In the very beginning, the authors state that to address the uniqueness of spatial data, researchers would have to “create new algorithms or adapt existing ones.” Immediately, I thought about how these algorithms would be adapted; would the inputs be standardized to meet the pre-conditions of non-spatial statistics? Or would the equations themselves be adapted by adding new variables to account for differences in spatial data? The authors address these questions later in their explication of the different parts of the Logistic Spatial Autoregressive Model (SAR).
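For my own reference, the classical spatial autoregressive model from the spatial statistics literature answers the second question by adding a spatially lagged term to ordinary regression; as I understand the chapter, the logistic SAR used for location prediction passes the same linear predictor through a logit link (the notation below is the standard one, not copied from the chapter):

```latex
% Classical (linear) SAR: W is the neighbourhood/contiguity matrix, so Wy
% spatially lags the dependent variable; rho measures spatial dependence.
y = \rho W y + X\beta + \varepsilon
% Logistic variant for a binary outcome (my reading of the chapter):
\operatorname{logit} \Pr(y_i = 1) = \rho\,(W y)_i + x_i^{\top}\beta
```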

When discussing location prediction, the authors state that “Crime analysis, cellular networks, and natural disasters such as fires, floods, droughts, vegetation diseases, and earthquakes are all examples of problems which require location prediction” (Shekhar et al., 5/23). Given the heterogeneity and diversity in these various data inputs, I was wondering how any level of standardization is achieved in SDM, and how interoperability is achieved when performing the same operations on such different data types.

What I gathered from this chapter was that there is considerable nuance and specificity within each SDM technique. Given the diversity of applications for each technique, from species growth analysis to land use change to urban transportation data, the choice of attribute that is included in the model greatly influences the subsequent precision of any observed correlation (see the choice of Vegetation Durability over Vegetation Species in the location prediction example).

There was a clear link between SDM and data visualization, as illustrated by the following statement about visualizing outliers: “there is a research need for effective presentations to facilitate the visualization of spatial relationships while highlighting spatial outliers.” Clearly, there is overlap between accurate spatial models and the effective presentation of that data for the intended audience.

-FutureSpock

Cognitive and Usability Issues in Geovisualization, Slocum et al. (2001)

October 29th, 2017

Slocum et al. (2001) detailed emergent research themes in geovisualization circa 2001. The authors advocate for an interdisciplinary approach incorporating cognitive and usability engineering principles to address challenges concerning immersion and collaborative visualization. It was striking to realize how frequently I’ve brushed over the finer points made by the authors over the year and change I’ve spent submitting GIS assignments. I feel that so many without technical GIS training are inclined to conceptualize the discipline as “mapmaking.” In contrast, it’s interesting how little time is spent on more nuanced cartographic considerations in introductory courses. The article made for a good introduction to engaging more meaningfully with what’s quite literally right under my nose.

Even though the article was presumably written before the release of Google Earth (B.G.E.?), it would appear that most of their discussion concerning emergent research themes is relatively robust–even if perhaps some of their associated challenges have since been addressed. For instance, I am not sure what more could be said about maintaining orientation in explicitly geographic visual environments, but I would be interested to learn more about how one would handle orientation in alternative spatial environments, particularly ones immersive enough to enable the type of cognition that we use in handling the real world. Moreover, I wonder how the ubiquity of Google Earth alone has propelled the topic of cognition and usability in geovisualization.

Cognitive and Usability Issues in Geovisualization (Slocum et al., 2001)  

October 29th, 2017

This paper discusses the challenges of using novel geovisualization methods (methods based on advanced software and hardware) and emphasizes the importance of conducting cognitive research and usability evaluation to make these methods more effective. I agree that it is important to explore how to develop and apply geovisualization methods “correctly”. The main reasons are, first, that geovisualization can be widely applied in different fields with varied requirements, and second, that the old cognitive framework of geovisualization methods is not suitable to guide new technologies (i.e. novel methods). When new technologies come along, they bring both new demands and new issues. People may want geovisualization to achieve more; for example, we can achieve ubiquitous monitoring of the environment by geovisualizing data from now-widespread mobile devices and sensors. At the same time, people are also concerned about issues of surveillance and privacy. Therefore, it is necessary to do research to guide the development and application of geovisualization methods.

However, I am not quite convinced by the argument for using usability engineering methods to evaluate the effectiveness of geovisualization methods. First, I didn’t see a good definition or explanation of effectiveness in this paper. Effectiveness may vary when applying geovisualization methods in different cases, but I still believe the authors should offer a general and clear definition of what effectiveness means with respect to geovisualization, or at least state plainly that the effectiveness of geovisualization methods is the same as that of other software. Second, I think the authors could be more straightforward about the essence of adopting concepts from usability engineering, which is that geovisualization methods should be highly user-centered. According to the authors, we should carefully consider user needs and iteratively improve the methods, instead of developing them first and testing only at the end. This clarification would make readers less confused about why we need usability engineering here.

Following this discussion, I believe it would be worth investigating further how to practically adopt usability engineering methods in geovisualization. We may need to distinguish the geovisualization tool from general software and customize a development life cycle for it. Besides, since this paper was published in 2001, 16 years ago, it is worth asking whether the concepts promoted in it are still valid in terms of emerging “novel methods”.