Scale (Goodchild & Proctor 1997)

November 25th, 2017

I went into this paper knowing scale was a key concept of geography, and one of much debate. After reading Goodchild & Proctor (1997), however, I feel that was an understatement. The authors cover a much-needed recap of traditional cartography, the initial concreteness of scale, and the common conventions tied to it (e.g. buildings aren't typically shown at a 1:25,000 scale). I found this part especially interesting as it's something I never encountered in my GIS/geography classes, even though these are key concepts in cartography. It becomes especially interesting when paired with their allusion to present-day GIS acting as a visual representation of a large database (like OSM); I thought of how OSM must have worked through these concepts in building its online mapping platform, for example only incorporating points of interest at a certain zoom level while streets appear earlier. The paper then goes on to explain how such concepts carry into modern digital maps in the form of the minimum mapping unit (MMU), and how issues like raster resolution begin to define scale as the smallest denomination of measurement.
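To play with the idea myself, here is a tiny Python sketch (entirely my own toy illustration, not how OSM or the authors actually implement it) of feature classes appearing only past a minimum zoom level, and of raster cell size acting as a de facto minimum mapping unit:

```python
# Hypothetical rendering rules: a feature class only appears past a minimum
# zoom level, echoing how web maps show streets before points of interest.
MIN_ZOOM = {"country_border": 2, "street": 13, "point_of_interest": 16}

def visible_classes(zoom_level):
    """Return the feature classes that would be drawn at this zoom level."""
    return [cls for cls, z in MIN_ZOOM.items() if zoom_level >= z]

def minimum_mapping_unit(cell_size_m, min_cells=4):
    """Smallest area (m^2) a raster can meaningfully represent,
    assuming a patch must cover at least `min_cells` cells."""
    return min_cells * cell_size_m ** 2

print(visible_classes(14))         # ['country_border', 'street']
print(minimum_mapping_unit(30.0))  # 3600.0 m^2 for a 30 m grid
```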

Another key point of the paper was the use of metaphors to describe how scale comes into play in traditional versus modern maps, and how it is often redefined (such as in fields like geostatistics). I feel that the term scale should be kept as simple as possible to avoid running into issues like the modifiable areal unit problem and the appropriateness of scale. Scale will always be an important part of GIScience, as it's inherently associated with distance and with visualizing geographic space, and I feel that extensive research into issues of scale, like this paper, will be needed as mapping moves further and further from its traditional cartographic roots into new realms of GIS like VGI, location-based services, and augmented reality.

-MercatorGator

Kwan and Lee (2004) – Time geography in 3D GIS

November 25th, 2017

In this article, Kwan and Lee (2004) explore 3D visualisation methods for human movement data. In the language of time-geography, which borrows from early twentieth-century physics, space-time paths describe movements as sets of space-time coordinates which, if only two spatial dimensions are considered, can be plotted along three axes (two for space, one for time). These concepts have become a fundamental part of recent developments in navigation GIS and other GIScience fields. For instance, Google Maps considers the time at which a journey is planned to more accurately estimate its duration.
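To make the idea concrete, here is a minimal sketch (my own illustration, assuming matplotlib; it is not the authors' code or data) of one hypothetical person's day drawn as a space-time path, with two spatial axes plus a time axis:

```python
import matplotlib.pyplot as plt

# (x km, y km, hour of day) fixes for one hypothetical person's day
path = [
    (0.0, 0.0, 8.0),   # home
    (2.0, 1.0, 9.0),   # commute
    (2.0, 1.0, 17.0),  # at work: a vertical (stationary) segment
    (0.5, 3.0, 18.5),  # errand on the way back
    (0.0, 0.0, 19.5),  # home again
]

xs, ys, ts = zip(*path)
ax = plt.figure().add_subplot(projection="3d")  # the "aquarium"
ax.plot(xs, ys, ts)
ax.set_xlabel("x (km)")
ax.set_ylabel("y (km)")
ax.set_zlabel("time (hour)")
plt.show()
```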

While their figures represent a neat set of 3D geovisualisation examples, it might have been worthwhile to have discussed some of the associated challenges and limitations (e.g. obstructed view of certain parts of the data, the potential for misinterpretation when represented on a 2D page, user information overload, the necessity for interactivity etc.). Further, how does 3D visualisation compare with other representations of spacetime paths, such as animation?

More broadly, I didn't fully understand the claim that time-geography (as conceived in the 1970s) was new in describing an individual's activities as a sequence occurring in geographic space (i.e. a space-time trajectory). Time hasn't been entirely ignored in geographic contexts in the past (e.g. Minard's map), nor has it been ignored in other disciplines. So does time-geography purely emphasise the importance of the time dimension in GIS research/software, or does it provide a set of methods and tools that enables its integration into the geographic discipline? Is time-geography done implicitly when researchers include a time dimension in their analyses, or does it represent a distinct approach?
-slumley

Thoughts on Langran and Chrisman

November 24th, 2017

I found this conversation about temporal GIS to be a particularly interesting introduction to the topic. It adds an extra dimension to my shifting idea of GIS, from something primarily based on the representation of maps to a tool for retaining and displaying data. Sinton (1978) notes that geographic data are based on theme, location, and time, so it is interesting that all of these notions can be reduced to digital quantification.
The authors do offer a hint of philosophical musing, but don't delve deep into it. The simple notion of linear time was enough to spark a conversation about three separate ways of displaying temporal change. Their idea of time is not marked exclusively by linearity, though; they fold the concept into a topological understanding based on the temporal relationships events may have to one another. Each of the three methods discussed seemed to meld representations of temporality and spatiality more closely than the last. The melding of the two dimensions may lead to an interesting discussion, as they are inseparable in GIS.
The authors choose not to delve into the topic of visualization, leaving it as ‘a problem to leave to future discussions’. I am doing my project on movement, so this notion of graphic representation felt like a clear framework for dealing with the kind of data I'm handling.

Thoughts on Goodchild and Proctor – Scale in a Digital Geographic World

November 24th, 2017

Goodchild begins with a notion that initially shocked me: the traditional metric of scale is supposedly unsuitable in digital cartography. I had never considered this before, but he nonetheless presents a convincing explanation of his reasoning. Confusion over what is meant by scale is widespread; it can signify either the spatial extent of the map or the level of granularity that the data represent. There have also been issues over what kind of information is appropriate to represent along this axis of scalability. Goodchild proposes notions of scale that are more appropriate for a digital environment.
He offers two different bases of scale. The first is object-model scale, based on the choice of objects that the GIScientist wishes to study; typically, the smallest object studied would figure clearly on the map. The second is field-model scale, which is simply the size of the field's cells (pixels).
While I read the description of these last two models, I felt as if I’d reached the climax of some detective novel. Obviously, the object model sounded very much like vector, and the field model sounded very much like raster. I had never considered these two ways of data representation as a type of scale.
These two notions of scale are most useful when handling data, but they are easily misunderstood by non-experts. The traditional metric of scale remains important for visualization, where it often expresses spatial extent alone when finalizing a map and making it legible to a general public.
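As a quick self-test of the distinction, here is a toy Python sketch (my own, not from the paper) in which object-model scale appears as the smallest feature I choose to keep, and field-model scale appears as the cell size of a grid:

```python
import numpy as np

# Object model: building footprints (m^2); scale shows up as the smallest
# object I choose to represent.
footprints = [12.0, 85.0, 240.0, 5.0]
MIN_OBJECT_AREA = 25.0                      # my chosen object-model "scale"
represented = [a for a in footprints if a >= MIN_OBJECT_AREA]

# Field model: a gridded surface; scale is simply the cell size, and nothing
# smaller than one cell can be distinguished.
CELL_SIZE_M = 30.0
elevation = np.zeros((100, 100))            # placeholder field values
finest_detail_m2 = CELL_SIZE_M ** 2

print(represented)        # [85.0, 240.0]
print(finest_detail_m2)   # 900.0
```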
It is very interesting to see a paper creating a tangible link between traditional and digital cartographic models.

Langran and Chrisman (1988) – Temporal GIS

November 24th, 2017

I found the discussion on temporal geographic information by Langran and Chrisman (1988) interesting, as it teetered between being relevant and being dated. On the one hand, time is still a difficult thing to express and visualize, both attached and unattached to spatial data, yet it is still extremely important to convey. On the other hand, I wonder how much has changed with advancements in interactive and online maps, which can very easily show different temporal layers one after the other, or move through time on command. Moreover, technological advances in surveillance, like UAVs used in police surveillance or traffic control, will create a wealth of spatio-temporal data greatly surpassing the kind described in the paper, which would require much more sophisticated processing and computing. I wonder how much of this discussion is embedded in geovisualization, and whether “temporal GIS” is a standalone subject, or rather an important component of Geovis/Critical GIS/VGI/PPGIS/data mining/UAVs/etc.

I enjoyed the very beginning of the article, where the ‘nature’ of time is discussed (time is infinite and linear) and cartographers (GISers?) “can sidestep debates on what time is, and instead focus on how best to represent its effects” (2). I would argue that the way in which it/its effects are represented can, in fact, inform and serve as an interpretation of time. If a spatial map attempts to represent a “ground truth”, can’t a temporal map represent a “time truth”?

This is one of my favourite memes, with a quote on time from HBO's True Detective (with Matthew McConaughey and Woody Harrelson) – very meta.


[image: time-is-a-flat-circle]

Marceau (1999) and Scale

November 24th, 2017

Marceau's (1999) article does an excellent job of highlighting the significance of scale across both human and physical geography. This article made me think more deeply about the impacts of scale than I ever have had to before, which points to a significant gap in my education in geography. While I have been taught about the MAUP and various other impacts of scale, I feel that these issues have been addressed in isolation or as a mere sidebar to other concepts. As indicated by this article, scale is a fundamental spatial consideration (and often a problem) that should be thoroughly addressed for any research project that considers space. The fact that all entities, patterns, and processes in space are associated with a particular scale (or the “scale dependent effect”) means that it cannot be ignored.

I was particularly interested by Marceau's discussion of how scale differs between absolute and relative space. The operational and clearly defined idea of scale in absolute space is addressed much more often than the more ill-defined concept of scale in relative space. I'll admit that, even after rereading the paragraph several times, I'm still not sure what Marceau means in defining relative scale as “the window through which the investigator chooses to view the world” (p. 4). If this definition were not explicitly linked to scale, I would take it to be referring to something more like investigator bias or an investigative lens. How is this “window” connected to space? I would have appreciated an example to further clarify this.

Scale Issues in Social and Natural Sciences, Marceau (1999)

November 23rd, 2017

Marceau (1999) describes the significance of and solutions to the issue of scale as it relates to social and natural sciences. The articulation of fundamental principles was helpful in demonstrating the importance of scale as a central question in GIS. It’s clear that the question is particularly important now as we continue to develop a more nuanced appreciation for how observed trends might vary across different scales of analysis.

The discussion of domains of scale and scale thresholds stood out to me. I can imagine how differences in the patterns observed between scales would be helpful for organization and analysis. I'm curious about how these observed thresholds would manifest in reality. Are they distinct? Vagueness in our conceptualization of geographic features and phenomena seems to be so prevalent throughout the built and natural environment. I would think that these concepts would shape our analysis of scale in a way that favours vagueness in the spatial scale continuum. Still, it's conceivable that sharp transitions could be revealed through the process of scaling, unrelated to any vague spatial concepts. An example might've made the existence of scale thresholds more obvious to me.

It was an interesting point that an understanding of the implications of the Modifiable Areal Unit Problem took notably longer to develop in the natural science community – perhaps because GIScience as we know it now was only in its infancy? In any case, it's another reminder of how significantly spatial concepts can differ between geographies.

On Marceau (1999) and “The Scale Issue”

November 23rd, 2017

I really liked how in-depth this article went, reviewing the development of studies on scale that were outside of the author's department/field of study. It really emphasizes that this is an issue that applies to both physical and human geography (and to others who study geographic space), so it's cool to see interdisciplinary efforts towards it. I think this article could have benefited from a visual flowchart or something, just sketching out how these operations would actually work, since it would take me some time to think through how this would play out on a raster grid or with polygons. Also, I think this article provided some framework for how to consider scale in a research project, for example by performing a sensitivity analysis (p. 7); a rough sketch of that idea is below.
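A minimal version of such a sensitivity analysis might look like the following Python sketch (my own toy example, not Marceau's): aggregate the same field to coarser and coarser cells and watch a summary statistic change.

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.random((120, 120))   # stand-in for a fine-resolution raster

def aggregate(grid, factor):
    """Block-average the grid by an integer factor (i.e. coarsen its scale)."""
    n = grid.shape[0] // factor * factor
    g = grid[:n, :n]
    return g.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

for factor in (1, 2, 4, 8, 16):
    coarse = aggregate(field, factor)
    print(f"aggregation factor {factor:2d}: variance = {coarse.var():.4f}")
# The variance shrinks as the cells get bigger, so any conclusion drawn from
# it depends on the scale of aggregation (the classic scale/MAUP effect).
```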

In 1999, when this was published, we didn't have the geoweb, and I think it would be super interesting to learn about how scale issues have been solved or exacerbated by these new developments. Are there issues in this work that have actually been “solved” by the geoweb, or has there just been an onslaught of new issues created (as well as the holdovers, like the ubiquitous MAUP)? Writing this blog post, I realize my work has been constantly plagued by issues of scale and yet I've never been required to acknowledge them when handing in an assignment (and therefore I have never really considered them in this depth/variety before). This is something I have to consider in my analysis of methods for my research project, so thank you (and I'm interested in learning more on Monday)!

On Kwan & Lee (2004) and the 3D visualization of space-time activity

November 22nd, 2017

This article was super interesting, as I find the topic of temporal GIS increasingly pressing in this day and age (and still as challenging as it was in the early 2000s).

The visualizations were really interesting, and it seems like they provided far more information, and faster, than just analyzing the 2D movement (with no time) would. Also, I thought it was incredible that the space-time aquarium (discussed as a prism based on the paths identified by Swedish geographers) was only conceptualized (or written down, I guess) in 1970 and then realized in the late 1990s with GIS (and better computer graphics).

I thought it was interesting that Kwan & Lee mentioned that this was specifically used for vector data, so it would be interesting to find out more about the limitations of raster data (or perhaps advances in temporal raster analysis since 2004?) and the interoperability of raster and vector data. The acknowledgement of the lack of qualitative data was appreciated as well, as it provides a bit of a benchmark in the critical GIS history of the issues of qualitative data in something so quantitative. It seems like this could have changed (or become easier to visualize) in the last 13 years, so I'm looking forward to learning more about it. It would be cool to use this “aquarium” idea to click on individual lines and read a story/oral map of that person's day, although that raises serious privacy concerns, as the information (likely) describes day-to-day activities even if the person's name is not included publicly. Further, does the introduction of VR change this temporal GIS model? It would be super bizarre and super creepy (albeit more humanizing, maybe?) to do a VR walkthrough of somebody's everyday life (although we could probably get there with all the geo-info collected on us all the time through social media/smartphones!).

Schuurman (2006) – Critical GIS

November 20th, 2017

Schuurman discusses the shifting presence of Critical GIS in Geographic Information Science (GISc) and its evolving role in the development of the field. Among other obstacles, Schuurman identifies formalisation—the process by which concepts are translated into forms that are readable in a digital environment—as a key challenge to critical theoretical work gaining further traction in GISc.  

Critical GIS challenges the idea that information about a spatial object, system or process can be made ‘knowable’ in an objective sense; our epistemological lens always filters our view, and there is not necessarily a singular objective truth to be uncovered. Schuurman argues that this type of analysis, applied to GIS, has been provided to some extent by ontological GISc research. In contrast, this body of research presumes a limit to the understanding of a system, emphasising plurality and individuality of experience (e.g. the multiple perspectives represented in PPGIS research).

That said, previous analyses have fallen short in adequately acknowledging and addressing power relations, demographic inequalities, social control and marginalisation as part of the general design process in GIS. In particular, the translation between cognitive and database representations of reality requires explicit treatment in future research. These observations become increasingly relevant in the context of the rising integration of digital technologies into everyday life.

The paper raises the question of how Critical GIS can effect change in the discipline and its practice. Going beyond external criticism, critiques must reason within the discipline itself. I would ask how Critical GIS might also gain greater traction outside of academic settings (e.g. in influencing the industrial practice of GISc)?
-slumley

MacEachren et al (2005) – Visualising uncertainty

November 20th, 2017

MacEachren et al evaluate a broad set of efforts made to conceptualise and convey uncertainty in geospatial information. Many real world decisions are made on the basis of information which contains some degree of uncertainty, and to compound the matter, there are often multiple aspects of uncertainty that need to be factored into analysis. The balance between effectively conveying this complexity and overloading analysts with visual stimuli can support or detract from decision making, and constitutes a key persisting challenge explored in this paper.

A central discussion that I found interesting was that surrounding visual representations of uncertainty. Early researchers in the field strove to develop or unearth intuitive metaphors for visualisation. Aids such as ‘fuzziness’ and colour intensity could act to convey varying degrees of uncertainty present in a dataset, almost as an additional variable. In the context of our other topic this week, we could ask who these metaphors are designed to assist, and how the choice of metaphor could influence potential interpretations (e.g. for visual constructs like fuzziness and transparency, do different individuals perceive the same gradient scale?).
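As a rough illustration of the transparency metaphor (my own sketch, assuming matplotlib, and not taken from the paper), the same point symbols can be drawn with their opacity scaled by certainty, so less certain values literally fade out:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.random(30), rng.random(30)   # point locations
value = rng.random(30)                  # the attribute being mapped
certainty = rng.random(30)              # 0 = very uncertain, 1 = certain

colours = plt.cm.viridis(value)   # RGBA colour per point from the attribute
colours[:, 3] = certainty         # overwrite alpha with certainty
plt.scatter(x, y, c=colours, s=80)
plt.title("Attribute as colour, certainty as opacity")
plt.show()
```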

The authors draw on the judgement and decision-making literature to distinguish expert decision makers, who adjust their beliefs according to statistical analyses of mathematically (or otherwise) defined uncertainties, from non-experts, who often misinterpret probabilities and rely on heuristics to make judgements. It might have been worth clarifying what was meant by experts in this instance (individuals knowledgeable about a field, or about probability and decision making?). The Tversky and Kahneman (1974) paper cited actually found that experts (per their own definition) are often similarly susceptible to probabilistic reasoning errors, so this polarity may be less distinct than suggested. Like some of the other papers in the geovisualisation literature, I found there was a degree of vagueness in who the visualisation was for (is it the ‘analysts’ mentioned in the introduction, or the lay-people cited in examples?).
-slumley

Formalization Matters: Critical GIS and Ontology Research (Schuurman, 2006)

November 20th, 2017

This paper examines why critical GIS is necessary for formalizing representation within GIScience; in other words, the author emphasizes ontology research in GIScience. This is a fundamental and critical question when thinking about ontology. For critical GIS, which concerns the influence of GIS on people and society, having standard theories of geospatial representation is necessary. The obvious reason is that technologies are no longer value-neutral; they become embedded in political life. Ontology research builds a theoretical background for fitting GIS to society and for better tackling problems that involve humans, and it contributes to claiming GIScience as a scientific discipline. Yet, as the author notes in the paper, critical GIS has not affected the fundamental disciplines involved in GIScience. This may be because these disciplines have their own ontology systems, or because their purpose is not concerned with humans and society. I believe ontology research in GIScience can draw on the relevant disciplines that already have mature ontological research.

However, I am concerned that the complexity of current geospatial information may hinder ontology research. GIScientists are dealing with information that cannot be fully understood, for example patterns embedded in big spatial data. Even when we have algorithms to discover them, we sometimes cannot provide explanations; investigating causal relationships is more difficult than applying algorithms. That means we can produce cognitive models, conceptual representations, data presentations, and spatial concepts, but we cannot provide explanations. This is not acceptable, because ontology research should be scientific, with clear reasoning.

MacEachren – Visualizing Geospatial Information Uncertainty

November 19th, 2017

Uncertainty relates well to the topic of critical GIS in the sense that it challenges the foundation of the process. However, it differs in its specificity to what is being deemed uncertain/what is being challenged. In this case we talk about error, accuracy and precision from results and data while assuming that the core concepts of error, accuracy, and precision are well defined to begin with.

The article does a great job relating uncertainty to a wide range of GIS examples and explaining proper procedures to deal with it. One point I would've liked to see expanded upon is the mathematical ways in which we can compute and account for uncertainty. The paper mentions how calculating uncertainty within an expression is preferable to fabricating assumptions and relying on stereotypes. However, there seem to be problems here as well: how are these values derived to begin with? Are they estimated? Would estimations not also have their own range of uncertainty? Further, at what point are we certain about any of the data involved at any given point of the process, as assumptions and simplifications are made throughout the entire system, from data collection to final product? Effectively, it can be a tricky concept to wrap your head around when it can be applied at so many stages of the process without a clear idea of how these uncertainties are translated and transformed between steps.
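As a back-of-the-envelope example of what "computing uncertainty within an expression" can look like (my own illustration, not from the paper), the uncertainty of two estimated inputs can be propagated through a simple sum, both analytically and by Monte Carlo simulation:

```python
import numpy as np

# Two independently estimated quantities, each with an (estimated!) standard
# deviation; the sigmas themselves are of course uncertain too.
a, sigma_a = 120.0, 5.0
b, sigma_b = 80.0, 8.0

# Analytic propagation for a sum: variances of independent inputs add.
total = a + b
sigma_total = np.sqrt(sigma_a**2 + sigma_b**2)

# Monte Carlo check: sample the inputs and look at the spread of the output.
rng = np.random.default_rng(42)
samples = rng.normal(a, sigma_a, 100_000) + rng.normal(b, sigma_b, 100_000)

print(f"analytic : {total:.1f} +/- {sigma_total:.2f}")
print(f"simulated: {samples.mean():.1f} +/- {samples.std():.2f}")
```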

Visualizing Geospatial Information Uncertainty (MacEachren et al, 2005)

November 19th, 2017

This paper reviews studies on the conceptualization and representation of geographic information uncertainty, as well as its influence on the decision-making process.

Several typologies are reviewed, and the authors propose a more comprehensive one. However, the explanation of each component in the typology is not entirely clear. For example, when explaining “interrelatedness”, the authors use the example of proving whether a story is authentic; I don't think it is appropriate, and it makes me confuse this component with “lineage” even though I know they are different. The authors also mention that uncertainty is related to data quality and reliability; more explicit statements distinguishing these would help readers better understand uncertainty.

An interesting question raised in the paper is whether the representation of uncertainty itself creates new uncertainty. For me, the answer is yes: representing uncertainty through symbols is itself a process of abstraction, so some information is lost and new uncertainty arises. It is worth noting that previous studies have found that some symbolizations of uncertainty can lead to better decision-making, but they did not explain the theory behind the use of these symbols, or there may be no theoretical support at all. GIScience is interdisciplinary, and the symbols proposed before cannot apply to all situations. Choosing appropriate symbols to represent uncertainty is important, and we should therefore have theoretical support for it.

Regarding decision-making, different studies reach different conclusions about the helpfulness of including uncertainty; it may or may not lead to better decisions. I would argue that being more informed is not necessarily better. Some problems are complex enough already, and including uncertainty may disturb the judgement of decision-makers, who do not always wish to know everything.


Critical GIS – Schuurman 2006

November 19th, 2017

Critical GIS allows us, as users of GIS, to better understand how it works and relates to the world around us: how theories are manifested in space, how knowledge is coded, how easy it is to skim over things, and, what I think is most important, the validity and praise that we give to our glorious GIS. This is something I wish I had better understood when I started using GIS programs. We are taught in class and labs how we can use the software to perform all sorts of tasks for us, but we didn't comment much on the actual foundations upon which the software is built.

The article highlights how, by critically evaluating the foundations of the concepts and techniques we apply, we can better assess the value of the results that arise from our projects. It reminds me of the expression ‘garbage in, garbage out’: it doesn't matter how well you perform a project within the software if the software itself is flawed and you fail to realize that.

I think one of the issues that's hard to touch on is how we can improve many of these foundational concepts and apply that improvement in a useful way to the formalized knowledge. Improving on core concepts is only the first hurdle; the second involves finding new ways to code this information in a manner that better expresses what may be minuscule changes to definitions and ideas. This may be especially challenging since writing code can lead to generalized functions and abstraction.

Thoughts on MacEachren et al. (2005)

November 19th, 2017

This paper discusses the ways in which uncertainty may best be quantified and then presented to decision-makers. As shown in one of the exercises described in the paper, in which students were tasked with choosing the new location of certain parks and airports, those exposed to the best uncertainty visualization methods typically made the best-informed decisions. MacEachren et al. present a matrix in which various forms of uncertainty are laid out, ranging from the precision of the data to the completeness of the method. The study of uncertainty seems to rely entirely on academic honesty, and represents in very clear ways what's missing from a study. The issues with this kind of work emerge when communication between academia and decision-makers takes place. Oftentimes, uncertainty can be mistaken for a case of faulty data, and if it is not presented adequately, this can lead to severe miscommunication.
A major problem that MacEachren et al. identify is how to represent the various forms of uncertainty. The topic of geovisualization came to mind: since geovisualization is concerned with the exploration and manipulation of data, perhaps it's through this particular lens that we can apply the many dimensions of uncertainty in a geographic sense. As with many challenges that come with having non-academics engage with this data, proper interfaces need to be established, both in a technological sense and a human sense. So while issues with quantifying uncertainty remain at the forefront of this particular paper, I sense that these may have implications for the way uncertainty is subjectively perceived.

Thoughts on Schuurman (2006)

November 19th, 2017

I’m unfamiliar with the field of critical GIS, but the divide between “real-worlders” and their critics is apparent in fields beyond GIScience. If there are so many voices of complaint about how knowledge representation reinforces power relations, those critics have to join forces with those developing ontologies and epistemologies. Ten years have passed since Schuurman’s article was written and I’m curious to know how an analysis of the GIS and LNCS literature would be different since 2004. I would also be curious to know if inclusion of marginalized voices has been evident in recent epistemological development, according to Schuurman.

Schuurman frequently comes back to the notion that formalization and GIS data models are highly abstracted versions of reality. She doesn’t make a case for making GIS output any less abstracted, or changing how geographic data is visualized. I agree with her solution, which seems to be much more meta. Developing alternative or more complex ontologies does not align with a linear view of progress in GIScience, but the need for inclusivity in our representation and interpretation of geographic knowledge is central to the expansion of access to GIS knowledge and technology across cultures.

It was interesting learning about the history of critical GIS as a sub-discipline. Schuurman perceives a declining influence of critical GIScience, partially due to the conceptual nature of the work. It appears that critique of GIS is happening across the entire field of GIScience, and the rising field of ontological/epistemological research is incorporating many of the tenets of traditional critical GIS in their reshaping of geographic knowledge representation. Schuurman’s title is very fitting, as she seems to be embracing the shift from conceptual critical GIS to a formalized (and more impactful) approach.

Roth (2009) Uncertainty

November 19th, 2017

Uncertainty is a topic that I've always wondered about in GIS, especially when classifying an area from a raster grid that inherently has to have error when each pixel spans 30 m on a side, as in most Landsat images and DEMs. I was intrigued to find out that uncertainty is an academic subject in the GIScience literature, as well as the many issues one runs into when looking at uncertainty through a GIS lens. I find Roth's overview and critique of several typologies of uncertainty essential to the paper, and although there's a lot to draw from it, one can pick and choose aspects from these definitions to try to grasp this convoluted (and ironically uncertain) topic.

I find Roth grasps the importance of uncertainty and how it's conveyed to the map interpreter through the qualitative research done with his focus groups. Comments like “You just have to assume the line you draw on the map is a hard and fast line … you've got to put the line somewhere” and “If you put uncertainty on a map, it would probably draw undue attention” really struck me. I find this shows the disconnect between the map maker (who plays the Columbus role in a sense, stating where things are) and the map reader, who blindly trusts that these maps are accurate, despite not fully understanding just how much of this authoritative map was made by the map maker just “drawing a line somewhere” for convenience. I find this qualitative approach of running focus groups and coding their answers (very much a qualitative GIS technique) very interesting, and very much in sync with the author's comments on data quality and uncertainty, as you can interpret these answers in several ways.

All in all, I feel papers like this should be more prevalent, or at least have aspects transfer into different realms of GIScience as it’s paramount to understand when creating data that others will use in decision making. I feel that even if it may draw unwanted attention to your uncertainty and influence how decision makers view it, it should be noted that maps lie, as there’s often a blind trust associated with where things are when presented to people (both from a GIScience background and not).

-mercatorGator

Critical GIS and Ontology Research, Schuurman (2006)

November 18th, 2017

The Schuurman (2006) article presents the emergence of critical GIS, the criticisms of early GIS research that necessitated its conception, and its importance to the discipline of GIScience more broadly. It was interesting to get a glimpse of how critical GIS relates to a number of GIScience topics we've already begun to cover, and I think the summary of emergent themes in critical GIS provided an excellent primer for next week's lecture.

There’s a parallel to be drawn between the synthesis of human geography and geographic techniques to form critical GIScience and the emergence of environmental studies as an integration of environmental and social science principles. For instance, the domain of ecology alone is ill-equipped to handle conservation issues related to resource management. It’s the introduction of sociological principles that enables critique of an antiquated form of environmentalism that might value biodiversity over livelihoods. I’m convinced of the importance of critical theoretical work in supplementing a mechanistic approach to geography.

I was glad to see the topic of vagueness make an appearance! I think the author’s discussion of uncertain conceptual spaces does well to demonstrate the importance of human geography concepts to what Sparke (2000) might refer to as “real-worlders.” It’s sometimes easy to forget how poorly defined some physical geographic concepts can be–at what point does a pond become a lake, or what temporal constraints exist regarding lake-hood? Ontological and epistemological research is clearly a necessary step in addressing uncertainty in GIS applications.

O’Sullivan – Geographical Information Science: Critical GIS

November 18th, 2017

I found this paper quite interesting, and I thought it gave an adequate summary of the prime factors constituting the strands of debate in Critical GIS. I found the discussion of the acceptance of critical theory in the GIScience community quite surprising, with the seemingly flippant responses to the Ground Truth collection presented in the article. The book was published in the mid-90s, which seems to coincide with a period in which foundational GIS papers and tools were being developed amid triumphalist fanfare over the new technology.
The subsequent discussions of PGIS, qualitative GIS and privacy seem to have found a more nuanced acceptance within the GIScience community. I assume there was enough time for the community to absorb these critiques, but it could also have to do with the pace of technology at the time. On the one hand, the issues raised by the ignored criticisms may have had tangible effects by the early 2000s, which could have made those criticisms feel more tangible. There's also the fact that these new criticisms engaged directly with the technology. It seems that conversations about qualitative and public-driven data collection could only properly have taken place in a context where the technology allowed for it. While Kwan lamented the lack of social perspectives in GIS in 2004, my own experience suggests that these conversations are more visible today. Conversations critical of GIS practices have been commonplace within this class, which may point toward Schuurman and Kwan's remark of a ‘new era of socially and politically engaged GIScience’.