Archive for the ‘506’ Category

On Sarkar et al. (2014) and movement data

Sunday, December 3rd, 2017

I thought Sarkar et al.’s “Analyzing Animal Movement Characteristics From Location Data” (2014) was super interesting, as I don’t have a very strong background in environmental science and didn’t know about all of the statistical methods involved in understanding migratory patterns via GPS tracking. The visualizations stood out, like the rose diagram to show directionality and the Periodica method to detect periodic behaviour around hotspots. I also appreciated the macroscopic viewpoint of this article; though the inclusion of equations is important for replication and critical understanding, it is also important to discuss the outputs and limitations of those equations at a larger level, in order to better understand results. It is especially useful for those of us without deep math backgrounds to understand the intentions behind using certain equations without being able to visualize their output.

As interesting as it was to learn about the incredible utility of understanding migratory patterns of animals, I couldn’t help but think about applications to human geography. I wonder if these methods are already being used to extrapolate information about someone based on their location, as the Toch et al. (2010) paper on privacy for this week hinted at. If the methods differ, it would be interesting to evaluate how well the techniques developed for animal migratory patterns hold up when applied to human spatial data. Since Sarkar et al. used unsupervised classification to learn new patterns, it seems like the combination of methods used could apply (and probably does apply) to human spatial data mining. As terrifying as that is. A minimal sketch of what unsupervised hotspot detection can look like follows.
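To make the unsupervised idea concrete, here is a toy sketch of density-based hotspot detection on synthetic GPS fixes. This is only an illustration of the general approach, not Sarkar et al.’s actual pipeline; the synthetic coordinates, the parameters, and the choice of DBSCAN are all my own assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Hypothetical GPS fixes: two dense "hotspots" plus scattered travel
# fixes (coordinates in metres after projection, which eps assumes).
hotspot_a = rng.normal([0, 0], 20, size=(200, 2))
hotspot_b = rng.normal([500, 300], 20, size=(200, 2))
travel = rng.uniform([-100, -100], [600, 400], size=(100, 2))
fixes = np.vstack([hotspot_a, hotspot_b, travel])

# Density-based clustering labels dense regions as clusters and sparse
# fixes as noise (-1) -- one unsupervised way to extract hotspots, in
# the spirit of (though not identical to) the methods in the paper.
labels = DBSCAN(eps=50, min_samples=10).fit_predict(fixes)
print(f"hotspots found: {labels.max() + 1}")  # 2
```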

On Toch et al. (2010) and location-sharing

Sunday, December 3rd, 2017

This article was super interesting, as I didn’t know too much about the actual mechanics behind location sharing (i.e., “Creating systems that enable users to control their privacy in location sharing is challenging” (p. 129)). Their idea of identifying privacy preferences based on the locations that people go to was confusing (and was not really ameliorated by the end). Perhaps it’s because I don’t understand Locaccino, particularly because of technology constraints from 7 years ago (did they or could they collect data then? All the time, or just when you wanted to share your location, like “At the mall”?), underlined by the wonderful image of the “smartphone” (p. 131). Like some of my classmates noted, this seemed very similar to Find My Friends, and perhaps that’s why I didn’t understand how this worked, or where the line was between actively and passively volunteering location.
Further, I had some issues with the participant pool. The researchers relied on a sample that was 22/28 male and 25/28 students, and then were surprised that “the study revealed distinct differences between the participants, even though the population was homogenous”. As is evident from spatiotemporal GIS and feminist GIS, women interact with spaces differently than men. Age of participants, as another classmate noted, is also crucial: a 50-year-old staff member will go different places than a 22-year-old student. Not to mention, analysis by age could help explain why there was a big difference in sharing (or whether there was a difference at all).

I was also interested in seeing the differences between mediums, as some people used phones and some used laptops, and phones are far easier to pull out and share information on than laptops, especially in social gatherings or public spaces. The authors acknowledged this difference, noting that 9 mobile and 5 laptop users were “highly visible” (p. 135), but I would rather see the differences between the two mediums first, and activity levels as a whole for each, rather than continuing to equate the two, especially since laptops and phones were not distributed equally among participants. I think this study would be interesting to redo today, but with more information about participants and more controls throughout (or at least, correcting for differences among participants and modes of participation).

Location Privacy and Location-Aware Computing, Duckham and Kulik (2006)

Saturday, December 2nd, 2017

Duckham and Kulik (2006) introduce the importance of privacy in location-aware computing, and present emergent themes in the proposed solutions to related concerns. In their section contextualizing privacy research, the authors present privacy and transparency as opposing virtues (p. 3). I’m curious about the distinction that would motivate the valuation of one over the other. For instance, many would feel uncomfortable with the details of their personal finances being public (myself included), but would advocate for the openness of business or government finances, or even those of the super-rich. Is power the distinguishing characteristic? Perhaps concerns for personal wellbeing or intrusive inferences are less applicable to large organizations, but how do we explain the public response to the Panama or Paradise Papers?

Duckham and Kulik (2006) also posit that greater familiarity and ubiquity of cheap, reliable location-aware technologies will increase public concern for privacy (p. 4). I’m not so convinced; in fact, is it not the opposite? It would seem that concern for privacy was much higher during these technologies’ inception than it is now. I would argue the pervasiveness of location-aware technologies has generated a reasonable level of comfort with the idea that personal information is always being collected. I would imagine this is evident in the differential use of location-aware technologies by people who have grown up with them.

I appreciated the authors’ discussion of location privacy protection strategies. They provide an interesting critique of regulatory, privacy-policy, anonymity, and obfuscation approaches. To the critique of regulatory or policy frameworks based on “consent”, I would add that participation in such technologies is becoming less and less optional. Even when participation is completely optional, consent is often ill-informed. It’s clear that the question of privacy in location-aware computing is one with no clear answer.

Animal Movement Characteristics from Location Data, Sarkar et al. (2014)

Saturday, December 2nd, 2017

Sarkar et al. (2014) present an analytical framework for making inferences about animal movement patterns from locational information. The article was an insightful, show-don’t-tell introduction to how movement research can be applied beyond the usual domains of GIScience. Also, I think this may be one of the first articles we’ve looked at with an explicitly ecological application of GIScience research… a welcome addition!

It’s becoming increasingly evident how the GIScience topics we’ve discussed in class interact to provide a better understanding of how geospatial information is analyzed and represented. I appreciated the authors’ discussion of uncertainty in the Li et al. (2010) algorithm for detecting periodicity. I found myself tempted again to assume that increasing temporal resolution is the best way to minimize this sort of uncertainty; even setting aside concerns for feasibility, I’m not convinced that this does more than mask the problem. The detection of periodicity through cluster analysis resembles aggregation techniques for reducing the influence of outliers on uncertainty in the resulting periods, but I am still a little unclear on how the temporality of the location data was incorporated into the clusters. Does the Fourier analysis account for points near in space but distant in time? Perhaps the assumption of linearity is enough in the assessment of migration patterns.
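As a way of checking my own understanding of the Fourier step, here is a toy sketch of period detection on a synthetic movement series. The signal, the 8-day cycle, and the use of NumPy’s FFT are all assumptions for illustration; this is not the Li et al. (2010) or Periodica implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily "distance from den" series: 64 days of tracking
# with a roughly 8-day foraging cycle plus noise.
days = np.arange(64)
signal = np.sin(2 * np.pi * days / 8) + 0.3 * rng.standard_normal(64)

# Fourier analysis: the dominant non-zero frequency suggests a
# candidate period, which hotspot-based methods then verify against
# the observed locations.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(64, d=1.0)           # cycles per day
peak = freqs[1:][np.argmax(spectrum[1:])]    # skip the DC term
print(f"Estimated period: {1 / peak:.1f} days")  # ~8 days
```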

The distinction between directionality and periodicity as components of movement was insightful. Typically I would think about the significance of movement as it relates to physical space, but Sarkar et al. demonstrate how inferences from the orientation and temporality of movement can be valuable on their own.

Thoughts on “How fast is a cow? Cross-Scale Analysis of Movement Data”

Friday, December 1st, 2017

It was interesting to see a direct link to scale (the presentation I gave this week) in the very first paragraph of this paper. It reaffirmed how scale is a central concern of many different sub-fields of GIS, from uncertainty to VGI to, in this case, movement data.

The authors enumerate the many different factors which influence the collection of movement data, from sampling method to measurement of distance (Euclidean vs. network) and the nature of the space being traversed. One of the concerns they highlight is “sinful simulation”, and this reminds me of our discussions of abstraction pertaining to algorithms, agent-based modelling, and spatial data mining. For all these methods, the information lost in order to model behaviour or trends is always a concern, and I wonder what steps are taken to address the loss of spatial or other crucial dimensions of movement data.

Another common theme discussed by the authors is the issue of relativity and absoluteness. In their decision to focus on temporal scale, they reiterate that, as with slope, “there is no true speed at a given timestamp” (403) because speed is derived from adjacent points and is therefore relative. But they also say that speed depends on the scale at which it is measured, and this confused me at first: if “scale” just meant the unit (cm/s vs. inches/minute), then the speed would be the same regardless of how it is expressed. I take them to mean the sampling granularity instead: a speed estimated from positions one second apart can differ greatly from one estimated from positions ten minutes apart, because fine-scale zigzags average out at coarser sampling. So there is no absolute speed at a given timestamp in the sense that every estimate is tied to a measurement/sampling interval; the sketch below is how I convinced myself of this.
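A minimal sketch of the granularity effect, using a made-up one-dimensional track (everything here is a toy assumption, not the authors’ data or method):

```python
def mean_speed(positions, step):
    """Estimate mean speed from (t_seconds, x_metres) samples,
    keeping only every `step`-th fix before measuring."""
    kept = positions[::step]
    total_dist = sum(abs(b[1] - a[1]) for a, b in zip(kept, kept[1:]))
    return total_dist / (kept[-1][0] - kept[0][0])

# A cow pacing back and forth one metre every second: fine sampling
# sees the zigzag; coarse sampling sees almost no movement.
track = [(t, t % 2) for t in range(61)]
print(mean_speed(track, 1))   # 1.0 m/s at 1 s granularity
print(mean_speed(track, 10))  # 0.0 m/s at 10 s granularity
```

Same trajectory, same units, two different “speeds”: the difference is entirely in the sampling scale.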

The authors contend that the nascency of movement data analysis means researchers rarely question the choice of a particular temporal scale or parameter definition; this is definitely an important issue, as we have seen with the illustrations of the MAUP and gerrymandering. The fact that all these subfields are subsumed under the umbrella of GIS, and that researchers tend to have some “horizontal” knowledge of how methods have been developed and critiqued in neighbouring fields, hopefully means they can bring the same critical attitude and lessons learned from the past to this new domain of research.

-futureSpock

Thoughts on “Empirical Models of Privacy in Location Sharing”

Thursday, November 30th, 2017

I am really interested in ubiquitous computing and location-based technologies, so I was looking forward to this paper. In describing their methodology, and specifically the concept of “location entropy”, I would have liked a more operational definition of the “diversity” of people visiting a space: whether they took into consideration economic, social, ethnic, or gender differences, and how they qualified those variables. There is an interesting link to spatio-temporal GIS in the observation that more complex privacy preferences are usually linked to a specific time window at a given premises (i.e., 9–5 on weekdays on company premises) (p. 130). My working understanding of the metric is sketched below.
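As I read it, location entropy is the Shannon entropy of the distribution of distinct visitors observed at a place: high entropy means many different people frequent it. A minimal sketch under that assumption (the function name and data shape are mine, not the paper’s):

```python
import math
from collections import Counter

def location_entropy(visits):
    """Shannon entropy of who visits a location.
    `visits` is one user ID per observed visit."""
    counts = Counter(visits)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values())

# A campus cafe visited by many people vs. a private office.
print(location_entropy(["u1", "u2", "u3", "u4", "u2", "u5"]))  # high
print(location_entropy(["u1"] * 6))                            # 0.0
```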

I thought it was a novel approach to focus on the attributes of the places where people were sharing their locations rather than on the personal characteristics that might influence an individual’s decision to share at one point or another. This inverse framing lends itself to generalization across subjects and to the formation of universal principles about which kinds of places most inspire location-sharing.

There is an emphasis in the paper on “requests” and the explicit invitation to share one’s location in a social network, but the majority of users supply their location unwittingly or without a formal request. Although this is an important difference, it stands to reason that the factors the authors observe, such as the nature of the request (i.e., which app is using the information) or the context (who the information is broadcast to, whether a network of acquaintances or anonymous gamers), influence an individual’s decision to share their location even in the absence of a formal request.

The Locaccino interface (brilliant branding there) looks very much like Find My Friends, an app that I know some of my friends use regularly. It’s great in some ways that we are able to empirically test hypotheses about the kinds of environments and behavioural conditions which promote or discourage location sharing using these real-world datasets.

-FutureSpock

A Framework for Temporal Geographic Information, Langran and Chrisman (1988)

Monday, November 27th, 2017

Langran and Chrisman (1988) discuss the antecedents of temporal GIS, its core concepts, and a number of ways in which temporal geographic information is conceptualized. The map/state analogy was helpful for my understanding of the spatial and temporal parallels. I suppose the stage concept of time is fairly intuitive, but I appreciated having its connection to maps explained explicitly. The authors seem comfortable with the convention of representing spatial boundaries as distinct lines, but I can imagine how similar concerns for vagueness and ambiguity might arise in temporal data as well.

The authors did a good job of presenting the advantages and limitations of geographic temporality concepts. At the beginning they mention how the “strong allegiance of digital maps to their analog roots” is inadequate for spatiotemporal analysis, but I’ll admit that I didn’t think the two concepts they presented really subverted this allegiance very much. Still, maybe I’m spoiled by the ways people are re-imagining maps on the geoweb; an unfair comparison for a 1988 paper.

It was interesting to get a glimpse of historical temporal GIS research. It’s clear that one of the biggest concerns in the implementation of a temporal GIS framework is temporal resolution. If I could hazard a guess, I would think that such concerns have evolved from interpolating between temporally distant observations into the question of handling large amounts of data collected in rapid succession. With the advent of big data, namely by way of social media, I can imagine how the application of temporal GIS has proliferated, and will continue to, since the time the article was published.

Thoughts on Geovisualization of Human Activity… (Kwan 2004)

Sunday, November 26th, 2017

The immediate discussion of the historical antecedents for temporal GIS, developed by Swedish geographers, uses the 24-hour day as a “sequence of temporal events”, but I wonder why this unit of measurement was chosen as opposed to 48 hours or a week, which might capture periodicity that is invisible at the daily scale. It is interesting to note the gendered differences that are made visible by studies of women’s and men’s spatio-temporal activities. As the authors note, “This perspective has been particularly fruitful for understanding women’s everyday lives because it helps to identify the restrictive effect of space-time constraints on their activity choice….” I am curious about how much additional data researchers must collect to formulate hypotheses about why women follow certain paths to work or are typically present at certain locations at certain times, and how this process differs when trying to explain the spatiotemporal patterns observed in men’s travel behaviour.

One of the primary challenges identified by the authors is the lack of fine-grained individual data relating to people’s mobility in urban environments, such as in transportation systems or daily commutes. This paper was written in 2004, and now, with the rapid increase in streaming data, GPS from mobile devices, and open big data sets for most large cities, this is less of a concern. The big challenge these days is probably parsing the sheer quantity of data with appropriate tools and hypotheses to identify key trends and gain usable insights about residents’ travel behaviour.

The methodology used by the researchers for their study of Portland relied on self-reported behaviour in the form of a two-day travel study. There are many reasons why the reported data might be unreliable or unusable, especially given the fallibility of time estimation and the tendency to under- or over-report travel times based on mode of transport, mood, memory of the event, etc. That being said, this is probably the most ethical mode of data collection, since it asks for explicit consent. I would be interested to know how the researchers cross-referenced the survey data with their information about the Portland Metropolitan Region, as well as the structure of the survey.

-FutureSpock

Goodchild and Proctor (1997) – Scale in digital geography

Sunday, November 26th, 2017

As might be expected, Goodchild and Proctor provide an insightful and lucid evaluation of how conceptions of scale should translate from paper to digital maps, and their analysis remains pertinent in the face of two decades of rapid digital cartographic development. They argue that the representative fraction, as traditionally used by cartographers to represent scale, is outdated for use in digital platforms.

Firstly, I think the representative fraction struggles on a simpler level. In absolute terms, we’d probably find it hard to distinguish 250,000 from 2,500,000 at a glance, even though at 1:250,000 one map centimetre represents 2.5 km on the ground while at 1:2,500,000 it represents 25 km. The large numbers involved make representative fractions less preferable than alternatives such as graphical scale bars, which visually show the relationship between distances on the map and in the real world (as used in Google Maps).

It is interesting to revisit the problems outlined in the paper that have since been faced by web map makers. A significant advance in the navigation of scale in digital environments has been the development of tiled web maps. By replacing a single map image with a set of constituent raster or vector ‘tiles’ loaded by zooming and panning through a user interface, this method facilitates levels of detail that vary with zoom level and position in the map. The appearance and disappearance of certain features (e.g. country names vs. town names) has formed another metaphor for scale recognition. A sketch of the underlying tile arithmetic follows.
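For the curious, the standard XYZ tiling scheme (as used by OpenStreetMap-style web maps) maps a coordinate and zoom level to a tile index; each zoom increment doubles the tile grid in each direction, which is exactly the discrete “scale” a tiled map exposes. A minimal sketch (the example coordinates are my own):

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """WGS84 coordinates -> slippy-map (XYZ) tile indices,
    using the Web Mercator projection."""
    n = 2 ** zoom                      # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

# The same point lands in a finer-grained tile as zoom increases.
print(latlon_to_tile(45.504, -73.577, 3))   # coarse: continental tile
print(latlon_to_tile(45.504, -73.577, 15))  # fine: neighbourhood tile
```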

I’m still finding it hard to reconcile the idea of scale as used in everyday language (to represent the range of spatial extents that phenomena operate within) with its scientific/GISc definition (as a broader metric for the level of geographic detail, as well as extent). Positional accuracy, resolution, granularity, etc. are fundamentally important across disciplines, but do they correlate with what people think of when they talk about scale? (sorry Jin)
-slumley

Scale Issues in Social and Natural Sciences, Marceau (1999)

Thursday, November 23rd, 2017

Marceau (1999) describes the significance of and solutions to the issue of scale as it relates to social and natural sciences. The articulation of fundamental principles was helpful in demonstrating the importance of scale as a central question in GIS. It’s clear that the question is particularly important now as we continue to develop a more nuanced appreciation for how observed trends might vary across different scales of analysis.

The discussion of domains of scale and scale thresholds stood out to me. I can imagine how differences in the patterns observed between scales would be helpful for organization and analysis. I’m curious about how these observed thresholds would manifest in reality. Are they distinct? Vagueness in our conceptualization of geographic features and phenomena seems prevalent throughout the built and natural environment, so I would think these concepts would shape our analysis of scale in ways that favour vagueness along the spatial scale continuum. Still, it’s conceivable that sharp transitions could be revealed through the process of scaling, unrelated to any vague spatial concepts. An example might’ve made the existence of scale thresholds more obvious to me.

It was an interesting point that an understanding of the implications of the Modifiable Areal Unit Problem took notably longer to develop in the natural science community; perhaps because GIScience as we know it now was only in its infancy? In any case, it’s another reminder of how significantly spatial concepts can differ between geographies.

On Marceau (1999) and “The Scale Issue”

Thursday, November 23rd, 2017

I really liked how in-depth this article went, reviewing developments in the study of scale that were outside the author’s department/field of study. It really emphasizes that this is an issue that applies to both physical and human geography (and to others who study geographic space), so it’s cool to see interdisciplinary efforts on it. I think the article could have benefited from a visual flowchart or similar, sketching out how these procedures would play out, since it would take me some time to think through how they would work on a raster grid or with polygons. Also, the article provided some framework for how to consider scale in a research project, like performing sensitivity analysis (p. 7).

In 1999, when this was published, we didn’t have the geoweb, and I think it would be super interesting to learn how scale issues have been solved or exacerbated by these new developments. Are there issues in this work that have actually been “solved” by the geoweb, or is there just an onslaught of new issues (as well as the holdovers, like the ubiquitous MAUP)? Writing this blog post, I realize my work has been constantly plagued by issues of scale, and yet I’ve never been required to acknowledge them when handing in an assignment (and therefore have never really considered them in this depth or variety before). This is something I have to consider in my analysis of methods for my research project, so thank you (and I’m interested in learning more on Monday)!

On Kwan & Lee (2004) and the 3D visualization of space-time activity

Wednesday, November 22nd, 2017

This article was super interesting, as I find the topic of temporal GIS increasingly pressing in this day and age (and still as challenging as it was in the early 2000s).

The visualizations were really interesting, and it seems like they provided far more information, faster, than analyzing 2D movement (without time) would. Also, I thought it was incredible that the space-time aquarium (discussed as a prism based on the paths identified by Swedish geographers) was only conceptualized (or written down, I guess) in 1970 and then realized in the late 1990s with GIS (and better computer graphical interfaces).

I thought it was interesting that Kwan & Lee mentioned this was specifically for vector data, so it would be interesting to find out more about the limitations of raster data (or perhaps advances in temporal raster data analysis since 2004?) and the interoperability of raster and vector data. Further, the acknowledgement of the lack of qualitative data was appreciated, as it provides a benchmark in critical GIS history for the difficulty of incorporating qualitative data into something so quantitative. It seems like this could have changed (or become easier to visualize) in the last 13 years, so I’m looking forward to learning more.

It would be cool to use this “aquarium” idea to click on individual lines and read a story/oral map of a person’s day, although that raises serious security concerns, as the information (likely) describes day-to-day activities even if names are not included publicly. Further, does the introduction of VR change this temporal GIS model? It would be super bizarre and super creepy (albeit more humanizing, maybe?) to do a VR walkthrough of somebody’s everyday life (although we could probably get there with all the geo-info collected on us all the time through social media and smartphones!).

Schuurman (2006) – Critical GIS

Monday, November 20th, 2017

Schuurman discusses the shifting presence of Critical GIS in Geographic Information Science (GISc) and its evolving role in the development of the field. Among other obstacles, Schuurman identifies formalisation—the process by which concepts are translated into forms that are readable in a digital environment—as a key challenge to critical theoretical work gaining further traction in GISc.  

Critical GIS challenges the idea that information about a spatial object, system or process can be made ‘knowable’ in an objective sense; our epistemological lens always filters our view, and there is not necessarily a singular objective truth to be uncovered. Schuurman argues that this type of analysis, applied to GIS, has been provided to some extent by ontological GISc research. In contrast, this body of research presumes a limit to the understanding of a system, emphasising plurality and individuality of experience (e.g. the multiple perspectives represented in PPGIS research).

That said, previous analyses have fallen short in adequately acknowledging and addressing power relations, demographic inequalities, social control and marginalisation as part of the general design process in GIS. In particular, the translation between cognitive and database representations of reality requires explicit treatment in future research. These observations become increasingly relevant in the context of the rising integration of digital technologies into everyday life.

The paper raises the question of how Critical GIS can effect change on discipline and practice. Going beyond external criticism, critiques must reason from within the discipline itself. I would ask how Critical GIS might also gain greater traction outside of academic settings (e.g. in influencing the industrial practice of GISc).
-slumley

MacEachren et al (2005) – Visualising uncertainty

Monday, November 20th, 2017

MacEachren et al. evaluate a broad set of efforts made to conceptualise and convey uncertainty in geospatial information. Many real-world decisions are made on the basis of information containing some degree of uncertainty, and to compound the matter, there are often multiple aspects of uncertainty that need to be factored into analysis. The balance between effectively conveying this complexity and overloading analysts with visual stimuli can support or detract from decision making, and constitutes a key persistent challenge explored in this paper.

A central discussion I found interesting was that surrounding visual representations of uncertainty. Early researchers in the field strove to develop or unearth intuitive metaphors for visualisation. Aids such as ‘fuzziness’ and colour intensity could act to convey varying degrees of uncertainty present in a dataset, almost as an additional variable. In the context of our other topic this week, we could ask who these metaphors are designed to assist, and how the choice of metaphor could influence potential interpretations (e.g. for visual constructs like fuzziness and transparency, do different individuals perceive the same gradient scale?).

The authors draw on the judgement and decision-making literature to distinguish expert decision makers, who adjust their beliefs according to statistical analyses of mathematically (or otherwise) defined uncertainties, from non-experts, who often misinterpret probabilities and rely on heuristics to make judgements. It might have been worth clarifying what was meant by experts in this instance (individuals knowledgeable about a field, or about probability and decision making?). The Tversky and Kahneman (1974) paper cited actually found that experts (per their own definition) are often similarly susceptible to probabilistic reasoning errors, so this polarity may be less distinct than suggested. Like some of the other papers in the geovisualisation literature, I found there was a degree of vagueness about who the visualisation was for (is it the ‘analysts’ mentioned in the introduction, or the lay-people cited in examples?).
-slumley

Critical GIS – Schuurman 2006

Sunday, November 19th, 2017

Critical GIS allows us, as users of GIS, to better understand how it works and relates to the world around us: how theories are manifested in space, how knowledge is coded, how easy it is to skim over things, and, what I think is most important, the validity and praise that we give to our glorious GIS. This is a concept I wish I had better understood when I started using GIS programs. We are taught in class and labs how we can use the software to perform all sorts of tasks, but we don’t comment much on the actual foundations upon which the software is built.

The article highlights how, by critically evaluating the foundations of the concepts and techniques we apply, we can better assess the value of the results that arise from our projects. It reminds me of the expression ‘garbage in, garbage out’: it doesn’t matter how well you perform a project within the software if the software itself is flawed and you fail to realize it.

I think one of the issues that’s hard to touch on is how we can improve many of these foundational concepts and usefully apply that in formalized knowledge. Improving on core concepts is only the first hurdle; the second involves finding new ways to code this information in a manner that better expresses what may be minuscule changes to definitions and ideas. This may be especially challenging since writing code tends toward generalized functions and abstraction.

Critical GIS and Ontology Research, Schuurman (2006)

Saturday, November 18th, 2017

The Schuurman (2006) article presents the emergence of critical GIS, the criticisms of early GIS research that necessitated its conception, and its importance to the discipline of GIScience more broadly. It was interesting to get a glimpse of how critical GIS relates to a number of GIScience topics we’ve already begun to cover, and I think the summary of emergent themes in critical GIS provided an excellent primer for next week’s lecture.

There’s a parallel to be drawn between the synthesis of human geography and geographic techniques to form critical GIScience and the emergence of environmental studies as an integration of environmental and social science principles. For instance, the domain of ecology alone is ill-equipped to handle conservation issues related to resource management. It’s the introduction of sociological principles that enables critique of an antiquated form of environmentalism that might value biodiversity over livelihoods. I’m convinced of the importance of critical theoretical work in supplementing a mechanistic approach to geography.

I was glad to see the topic of vagueness make an appearance! I think the author’s discussion of uncertain conceptual spaces does well to demonstrate the importance of human geography concepts to what Sparke (2000) might refer to as “real-worlders.” It’s sometimes easy to forget how poorly defined some physical geographic concepts can be–at what point does a pond become a lake, or what temporal constraints exist regarding lake-hood? Ontological and epistemological research is clearly a necessary step in addressing uncertainty in GIS applications.

Thoughts on “Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know” (MacEachren et al.)

Saturday, November 18th, 2017

The authors offer a clarification early in the paper which I found useful: “When inaccuracy is known objectively, it can be expressed as error; when it is not known, the term uncertainty applies”. This definition sounds like it pertains to measurement, but I don’t know how one would distinguish between error and uncertainty when it comes to visualization, another focus of this paper. I also believe it is important to further classify, within “error”, the various sources of error, whether human, machine, statistical, etc., to give a holistic impression of the (in)accuracy of attained results.

I would have liked to see a discussion of accuracy versus precision and how the concept of uncertainty would apply to the precision of points in a dataset, i.e., the degree to which the points agree with each other regardless of how well they capture an absolute (ideal) value.

I liked how the authors drew on multiple disciplines to illustrate how the concept of uncertainty is pertinent to many fields, drawing on Tversky and economic/psychological theory to illustrate that “humans are typically not adept at using statistical information in the process of making decisions” (141). The arguments put forth about how to depict uncertainty visually were very nuanced, from whether it would change individuals’ decision-making when consulting a map, to whether it would lead to better decisions or just reduce the perceived reliability of the data presented.

Furthermore, it makes sense that the theories and frameworks for mapping uncertainty are better developed in traditional GIS mapping and less so in the domain of geographic data visualizations. I found Figure 2 useful in teasing out how the concept of uncertainty applies to different facets of a given project.

The challenge of representing uncertainty for dynamic information (which I think is becoming more and more crucial with streaming and big data) is definitely a big one, and I’m interested to see how this field develops.

-FutureSpock

Thoughts on “Geographical information science: critical GIS” (O’Sullivan 2006)

Saturday, November 18th, 2017

We have discussed the importance of terminology in previous weeks, and O’Sullivan hints at the elusive nature of capturing a phenomenon when he states the topic of his paper as the “curious beast known at least for now as ‘critical GIS’” (p. 782). He further states that there is little sign of a groundswell of critical human geographers wholeheartedly embracing GIS as a tool of their trade. I think this has changed.

In comparing different critiques of GIS, he states that the more successful examples of critically informed GIS are those where researchers informed by social theory have been willing to engage with the technology, rather than criticize from the outside. I agree with this, and think it makes sense that some knowledge of the procedures of GIS and how they work is required to illustrate how they can be manipulated to produce subjective results.

On page 784, O’Sullivan states that “criticism of the technology is superficial”, but neglects to mention what would constitute more profound and constructive criticism. He does not explicate, but refers to Ground Truth and the important contributions made in that book pertaining to ethical dilemmas and ambiguities within GIS. It is interesting to note that much of the “brokering” that went on in the early days, which allowed for reconciliation between social theorists and the GIS community, came from institutions and “top-down” organizing as opposed to more grassroots discussion, say on discussion boards or in online communities/groups.

O’Sullivan notes that “PPGIS is not a panacea, and must not undermine the robust debate on the political economy of GIS, its epistemology, and the philosophy and practice of GIScience”, and I very much agree with this statement. Although the increased use of PGIS addresses one of the foremost critiques of the applicability of GIS to grassroots communities and movements, it is not a simple goal which can be achieved and considered “solved.” Rather, the increased involvement of novices in GIS and spatial decision-making processes raises a host of new issues for the field of Critical GIS.

-FutureSpock

On Roth (2009) and Uncertainty

Friday, November 17th, 2017

It was super interesting to learn about the differences between certain words that, outside of GISci/geography/mathematics, are often treated as equivalent, like vagueness, ambiguity, etc. Knowing more about MacEachren’s methodology would definitely have been helpful, but I’m looking forward to hearing more in Cameron’s talk!

I thought it was odd that their central argument about uncertainty rested on a focus group of six floodplain mappers. A focus group is an interesting setting, since people can change their minds or omit what they were thinking because of others’ contributions, or impostor syndrome, or both. Also, six people is not really enough to test a theory. Roth argues the small sample was negligible since the six were experts, but I would have liked to hear more about what actually made them experts, whether they actually used any of the steps outlined by MacEachren to reduce uncertainty, and why Roth considers the bias from this small subset of GIScientists negligible.

Also, does anyone actually use this methodology for determining/reducing uncertainty? I have never heard of it before, which is worrisome considering many do not take further GIS courses beyond the intro classes. I thought it was interesting that one of the respondents said that representing uncertainty on their final products brought skepticism about their actual skills from their clients. That is a real issue, but people also need to know that these maps aren’t always truthful — even as much as the map producer has tried — because of the fundamental multitude of issues in representing all the data accurately, from data collection to 2D/3D representation. So, though these experts lamented the prospect of explaining this to laypeople, would it really be that difficult, especially considering the long-lasting benefits of the educational experience?

Optimal routes in GIS and Emergency Planning, Dunn & Newton (1992)

Sunday, November 12th, 2017

Dunn and Newton (1992) examine the performance of two popular approaches to network analysis, Dijkstra’s and the out-of-kilter algorithm, in the context of population evacuation. At the time of publication, it’s clear that the majority of network analysis research had been conducted by computer scientists and mathematicians. It’s interesting how historical conceptualizations of networks, which appear to be explicitly non-spatial in the way distortion and transformation are handled and in the lack of integrated geospatial information, are transferable to GIS applications. What the authors describe as an “unnecessarily flexible” definition of a network for geographical purposes appears to be an insurmountable limitation of previous network conceptualizations for GIScience. However, I’ll admit that the ubiquity of Dijkstra’s algorithm in GIS software is a convincing argument, against my limited knowledge of network analysis, for the usefulness of previous network concepts in GIS. A minimal sketch of the algorithm is below.
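For reference, here is a minimal sketch of Dijkstra’s algorithm on a weighted graph. The graph encoding and toy network are my own assumptions; a real road network would attach geospatial attributes (lengths, one-way restrictions, congestion weights) to the edges, as the paper discusses.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source` over a weighted graph
    given as {node: [(neighbour, edge_weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Toy street network: edge weights as travel times in minutes.
network = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
           "C": [("D", 1)], "D": []}
print(dijkstra(network, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```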

The out-of-kilter algorithm provides a means to address the lack of integrated geospatial information in other network analysis methods. The authors demonstrate how one might incorporate geospatial concepts such as traffic congestion, one-way streets, and obstructions to enable geographic application more broadly. It’s striking that the processing time associated with network analysis is ultimately dependent on the complexity of the network. In the context of pathfinding, increased urban development and data availability will necessarily increase network complexity, and the paper demonstrated how incorporating geographic information into a network can increase processing time. While it was unsurprisingly left out of a paper published in 1992, I would be curious to learn more about how heuristics might be applied to address computational concerns on the geoweb.