Archive for the ‘General’ Category

Thoughts on Geovisualization of Human Activity… (Kwan 2004)

Sunday, November 26th, 2017

The opening discussion of the historical antecedents of temporal GIS in the work of Swedish geographers uses the 24-hour day as a “sequence of temporal events,” but I wonder why this unit of measurement was chosen, as opposed to 48 hours or a week, to illustrate the periodicity of temporal events, which may not be captured at the daily scale. It is interesting to note the gendered differences that are made visible by studies of women’s and men’s spatio-temporal activities. As the authors note, “This perspective has been particularly fruitful for understanding women’s everyday lives because it helps to identify the restrictive effect of space-time constraints on their activity choice….” I am curious how much additional data researchers must collect to formulate hypotheses about why women follow certain paths to work or are typically present at certain locations at certain times, and how this process differs when trying to explain the spatiotemporal patterns observed in men’s travel behaviour.

One of the primary challenges identified by the authors is the lack of fine-grained individual data on people’s mobility in urban environments, such as in transportation systems or daily commutes. The paper was written in 2004; now, with the rapid growth of streaming data, GPS traces from mobile devices, and open big data sets for most large cities, this is less of a concern. The bigger challenge today is probably parsing the sheer quantity of data with appropriate tools and hypotheses to identify key trends and gain usable insights about residents’ travel behaviour.

The methodology used by the researchers for their study of Portland relied on self-reported behaviour in the form of a two-day travel study. There are many reasons why the reported data might be unreliable or unusable, especially given the fallibility of time estimation and the tendency to under- or over-report travel times depending on mode of transport, mood, memory of the event, etc. That being said, this is probably the most ethical mode of data collection, since it asks for explicit consent. I would be interested to know how the researchers cross-referenced the survey data with their information about the Portland Metropolitan Region, as well as the structure of the survey itself.

-FutureSpock


Kwan & Lee: Geoviz of Human Activity Patterns using 3D GIS

Sunday, November 26th, 2017


Having just given my talk on VGI and the implications of real-time tracking of individuals in space-time, I found Kwan & Lee’s (2004) use of temporal GIS refreshing and their study unique and insightful. By overtly using temporal GIS with such a large study group (7,090 households), the data collected go from quantitative x, y, and timestamp records to very nuanced qualitative data when paired with contextual information and compared across different study groups. I found the comparison between men’s/women’s and minority/Caucasian everyday paths fascinating, and can see how it could be used through a critical GIS lens to further analyse why these trends occur and to empower these underrepresented groups in the realm of GIS.

I also found the use of 3D visualization very interesting (though to be expected), as you move from a traditionally planar form of GIS (x and y coordinates) to adding a third, temporal attribute on the z-axis. The paper then delves into the intricacies of finding appropriate ways to display what is essentially a new form of GIS in an effective visualization, which poses a whole new range of issues vis-à-vis our geovisualization talk by Sam. However, this extra z-attribute of time enables many new analyses, for example using kernel functions to generate density maps that standardize comparisons of movement between individuals. I find this collection of movement data and the analysis behind it amazing, though also very scary when paired with the knowledge that such analysis could be (and probably is) performed on a daily basis for not-so-critical or academic reasons, but rather for targeted advertising and defence purposes, in a form of coerced VGI.
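
As a rough sketch of the kind of kernel-based standardization mentioned above (my own illustration with made-up coordinates and bandwidths, not the authors’ method), a density surface over the x-y footprints of two individuals’ tracked points can be estimated on a common grid and compared:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical tracked points (x, y) for two individuals over one day.
rng = np.random.default_rng(0)
person_a = rng.normal(loc=[2.0, 3.0], scale=0.5, size=(200, 2))
person_b = rng.normal(loc=[2.5, 2.0], scale=0.8, size=(200, 2))

# Common evaluation grid so the two density surfaces are directly comparable.
xs, ys = np.mgrid[0:5:100j, 0:5:100j]
grid = np.vstack([xs.ravel(), ys.ravel()])

def density_surface(points):
    """Gaussian kernel density of a set of (x, y) points, evaluated on the grid."""
    kde = gaussian_kde(points.T)   # gaussian_kde expects shape (n_dims, n_points)
    return kde(grid).reshape(xs.shape)

surface_a = density_surface(person_a)
surface_b = density_surface(person_b)

# A simple standardized comparison: where does person A spend relatively more time?
difference = surface_a - surface_b
print("Largest density difference:", difference.max())
```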

All in all, however, I think temporal GIS could be its own field: the highly detailed datasets it produces can reveal much more than just location, could aid in the creation of many new tools, and make for very rich data.

-MercatorGator

Marceau (1999) – The scale issue in social and natural sciences

Saturday, November 25th, 2017

This article is very interesting, and addresses what I think is a major issue within GIScience: the scale issue. Marceau (1999) lays out the “scale problem” (2), and provides a thorough review of solutions (and their limitations) from the literature. I also enjoyed the second-to-last paragraph of the paper, which suggests that the “methodological developments are certainly contributing to the emergence of a new paradigm: a science of scale” (12). While reading the paper, I wondered how this fits into the tool/science debate, and though I would tend to think of it as an important component within GIScience, I might not have considered a “science of scale” on its own, so it is nice to see where the author clearly stands.

This issue seems omnipresent throughout geography (human and physical), and I know that I have had to deal with it within my own work. For example, my data collection will consist of flying a UAV at a specific height (in order to achieve maximum photo resolution), thereby taking photos at specific scales. I will then create a model to make maps at specific scales. Beyond this, the maps I make will hopefully tell me things about the morphology of the landscape: will this be true only of Eureka Sound, or will it be generalizable to all of Ellesmere Island, or even the entire Canadian or international High Arctic? I do not find that any of the methods described in this paper provide a clear way to give a definitive answer on cross-scale inferences, which is to be expected. I think that as researchers, we must do our best to limit our inferences to the analyzed scales, and resist temptations to overgeneralize our results for increased importance. I am curious how things have changed in the nearly 20 years since this article was published, what strides have been made, and what remains to be done.

Scale (Goodchild & Proctor 1997)

Saturday, November 25th, 2017

Prior to reading this paper I knew scale was a key concept in geography, and one of much debate. After reading Goodchild and Proctor (1997), however, I feel this was an understatement. The authors provide a much-needed recap of traditional cartography, the initial concreteness of scale, and the common conventions used (e.g., buildings aren’t typically shown at a 1:25,000 scale). I found this part especially interesting because it is something I never encountered in my GIS/geography classes, even though these are key concepts in cartography. It becomes especially interesting when paired with their allusion to present-day GIS acting as a visual representation of a large database (like OSM); I thought of how OSM must have studied these concepts in creating its online mapping platform, so as to show only points of interest at certain zoom levels versus streets. The paper then goes on to explain how such concepts carry over to modern digital maps in the form of the minimum mapping unit (MMU), and how properties like raster resolution come to define scale as the smallest denomination of measurement.
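
To make the 1:25,000 point concrete, here is a back-of-the-envelope sketch (my own numbers, not from the paper) of how a map’s representative fraction plus a minimum plottable mark on paper imply a minimum mapping unit on the ground:

```python
# Rough minimum-mapping-unit arithmetic (illustrative numbers only).
# Assume the smallest mark a reader can distinguish on paper is ~0.5 mm.
MIN_PLOTTABLE_MM = 0.5

def ground_resolution_m(scale_denominator: float) -> float:
    """Smallest ground distance (metres) representable at a given map scale."""
    return MIN_PLOTTABLE_MM / 1000.0 * scale_denominator

for denom in (1_000, 25_000, 250_000):
    print(f"1:{denom:,} -> smallest representable feature is about "
          f"{ground_resolution_m(denom):.1f} m")

# At 1:25,000 that is roughly 12.5 m, which is why an ordinary house footprint
# is usually generalized away rather than drawn to scale.
```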

Another key point of the paper was the use of metaphors to describe how scale comes into play in traditional versus modern maps, and how it is often redefined (for instance in fields like geostatistics). I feel the term scale should be kept as simple as possible to avoid running into issues like the modifiable areal unit problem and the appropriateness of scale. Scale will always be an important part of GIScience, as it is inherently associated with distance and visualizing geographic space, and I feel that extensive research into issues of scale, like this paper, will be needed as mapping moves further and further from its traditional cartographic roots into newer realms of GIS such as VGI, location-based services, and augmented reality.

-MercatorGator

Kwan and Lee (2004) – Time geography in 3D GIS

Saturday, November 25th, 2017

In this article, Kwan and Lee (2004) explore 3D visualisation methods for human movement data. In the language of time-geography, which borrows from early twentieth-century physics, space-time paths describe movements as sets of space-time coordinates which, if only two spatial dimensions are considered, can be represented along three display axes. These concepts have become a fundamental part of recent developments in navigation GIS and other GIScience fields. For instance, Google Maps considers the time at which a journey is planned to estimate its duration more accurately.
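
A space-time path of this kind is straightforward to sketch as two spatial axes plus a vertical time axis; below is a minimal, made-up example (not from the paper) that plots one hypothetical person’s day as a 3D polyline with matplotlib:

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (needed on older matplotlib)

# Hypothetical day: (x, y, hour-of-day) fixes for one individual.
# Vertical segments = staying put; sloped segments = travel.
x = [0, 0, 3, 3, 1, 1, 0, 0]
y = [0, 0, 2, 2, 4, 4, 0, 0]
t = [0, 8, 9, 17, 17.5, 19, 20, 24]   # hours

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(x, y, t, marker="o")
ax.set_xlabel("x (km)")
ax.set_ylabel("y (km)")
ax.set_zlabel("time (hours)")
ax.set_title("A simple space-time path")
plt.show()
```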

While their figures represent a neat set of 3D geovisualisation examples, it might have been worthwhile to have discussed some of the associated challenges and limitations (e.g. obstructed view of certain parts of the data, the potential for misinterpretation when represented on a 2D page, user information overload, the necessity for interactivity etc.). Further, how does 3D visualisation compare with other representations of spacetime paths, such as animation?

More broadly, I didn’t fully understand the claim that time-geography (as conceived in the 1970s) was new in describing an individual’s activities as a sequence occurring in geographic space (i.e. a space-time trajectory). Time has not been entirely ignored in geography in the past (e.g. Minard’s map), nor has it been ignored in other disciplines. So does time-geography purely emphasise the importance of the time dimension in GIS research and software, or does it provide a set of methods and tools that enables its integration into the geographic discipline? Is time-geography done implicitly whenever researchers include a time dimension in their analyses, or does it represent a distinct approach?
-slumley

Thoughts on Langran and Chrisman

Friday, November 24th, 2017

I found this conversation about temporal GIS to be a particularly interesting introduction to the topic. This notion adds an extra dimension to my shifting idea of GIS, from something primarily based on the representation of maps to a tool for retaining and displaying data. Sinton (1978) notes that geographic data are based on theme, location, and time, so it is interesting that all of these notions can be reduced to digital quantification.
The authors do offer a hint of philosophical musing, but don’t delve deeply into it. The simple notion of linear time was enough to spark a discussion of three separate ways of displaying temporal change. Their idea of time is not marked exclusively by linearity; rather, they fold the concept into a topological understanding based on the temporal relationships events may have to one another. The three methods discussed seem to meld representations of temporality and spatiality more closely with each new method. The melding of the two dimensions could lead to an interesting discussion, as they are inseparable in GIS.
The authors choose not to delve into the topic of visualization, leaving it as ‘a problem to leave to future discussions’. I am doing my project on movement, so this notion of graphic representation felt like a clear framework for the kind of data I’m handling.

Thoughts on Goodchild and Proctor – Scale in a Digital Geographic World

Friday, November 24th, 2017

Goodchild begins with a notion that initially shocked me: the metric scale is supposedly unsuitable for digital cartography. I had never considered this before, but he nonetheless presents a convincing explanation of his reasoning. Confusion over what is meant by scale is widespread, since it can signify either the spatial extent of the map or the level of granularity that the data represent. There have also been issues over what kind of information is appropriate to represent along this axis of scalability. Goodchild proposes new notions of scale that are more appropriate for a digital environment.
He offers two different bases for scale.
The first is object-model scale, which is based on the choice of objects that the GIScientist wishes to study; typically, the smallest object studied would still figure clearly on the map. The second is field-model scale, which is essentially the cell (pixel) size of the field.
While I read the description of these last two models, I felt as if I’d reached the climax of some detective novel. Obviously, the object model sounded very much like vector, and the field model sounded very much like raster. I had never considered these two ways of representing data as types of scale.
These two scales are most useful when handling data, but they are typically misunderstood when interpreted by non-experts. The metric scale is still important for the visualization of this data, where scale often appears exclusively as spatial extent when finalizing a map and making it legible for a general public.
It is very interesting to see a paper creating a tangible link between traditional and digital cartographic models.

Langran and Chrisman (1988) – Temporal GIS

Friday, November 24th, 2017

I found the discussion of temporal geographic information by Langran and Chrisman (1988) interesting, as it teetered between being relevant and being dated. On the one hand, time is still a difficult thing to express and visualize, both attached and unattached to spatial data, but it remains extremely important to convey. On the other hand, I wonder how much has changed with advances in interactive and online maps, which can very easily show different temporal layers one after the other or move through time on command. Moreover, technological advances in surveillance, like UAVs used in policing or traffic control, will create a wealth of spatio-temporal data greatly surpassing the kind described in the paper, requiring much more sophisticated processing and computing. I wonder how much of this discussion is embedded in geovisualization, and whether “temporal GIS” is a standalone subject or rather an important component of Geovis/Critical GIS/VGI/PPGIS/data mining/UAVs/etc.

I enjoyed the very beginning of the article, where the ‘nature’ of time is discussed (time is infinite and linear) and cartographers (GISers?) “can sidestep debates on what time is, and instead focus on how best to represent its effects” (2). I would argue that the way in which it/its effects are represented can, in fact, inform and serve as an interpretation of time. If a spatial map attempts to represent a “ground truth”, can’t a temporal map represent a “time truth”?

This is one of my favourite memes, with a quote on time from HBO’s True Detective (with Matthew McConaughey and Woody Harrelson) – very meta.

[image: “time is a flat circle” meme]

Marceau (1999) and Scale

Friday, November 24th, 2017

Marceau’s (1999) article does an excellent job of highlighting the significance of scale across both human and physical geography. The article made me think more deeply about the impacts of scale than I ever have had to before, which points to a significant gap in my education in geography. While I have been taught about the MAUP and various other impacts of scale, I feel that these issues have been addressed in isolation or as a mere sidebar to other concepts. As indicated by this article, scale is a fundamental spatial consideration (and often a problem) that should be thoroughly addressed in any research project that considers space. The fact that all entities, patterns, and processes in space are associated with a particular scale (the “scale dependent effect”) means that it cannot be ignored.
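
Since the MAUP comes up here, the following small sketch (synthetic data, my own illustration rather than anything from Marceau) shows how the same underlying data can yield different correlations once aggregated to coarser and coarser zones:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64  # 64 x 64 grid of "individual-level" observations

# Two variables that share a smooth spatial pattern plus independent cell-level noise.
xx, yy = np.meshgrid(np.linspace(0, 2 * np.pi, n), np.linspace(0, 2 * np.pi, n))
pattern = np.sin(xx) + np.cos(yy)
a = pattern + rng.normal(size=(n, n))
b = pattern + rng.normal(size=(n, n))

def aggregate(arr, block):
    """Mean of `arr` over non-overlapping block x block zones."""
    return arr.reshape(n // block, block, n // block, block).mean(axis=(1, 3))

for block in (1, 4, 8, 16):
    corr = np.corrcoef(aggregate(a, block).ravel(), aggregate(b, block).ravel())[0, 1]
    print(f"zones of {block:>2} x {block:<2} cells -> correlation {corr:.2f}")

# The underlying relationship is identical at every scale, but averaging over
# larger zones smooths away cell-level noise while keeping the broad pattern,
# so the measured correlation climbs with zone size (a MAUP-style scale effect).
```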

I was particularly interested by Marceau’s discussion of how scale differs between absolute and relative space. The operational and clearly defined idea of scale in absolute space is addressed much more often than the more ill-defined concept of scale in relative space. I’ll admit that, even after rereading the paragraph several times, I’m still not sure what Marceau means in defining relative scale as “the window through which the investigator chooses to view the world” (p4). If this definition were not explicitly linked to scale, I would take it to be referring to something more like investigator bias or an investigative lens. How is this “window” connected to space? I would have appreciated an example to clarify this further.

On Marceau (1999) and “The Scale Issue”

Thursday, November 23rd, 2017

I really liked how in-depth this article went, reviewing developments in the study of scale that were outside the author’s department and field of study. It really emphasizes that this is an issue that applies to both physical and human geography (and to others who study geographic space), so it’s cool to see interdisciplinary efforts here. I think the article could have benefited from a visual flowchart or similar, sketching out how these operations would actually play out, since it would take me some time to think through how this would work on a raster grid or with polygons. The article also provides some framework for how to consider scale in a research project, for example by performing sensitivity analysis (p.7).

In 1999, when this was published, we didn’t have the geoweb, and I think it would be super interesting to learn how scale issues have been solved or exacerbated by these new developments. Are there issues in this work that have actually been “solved” by the geoweb, or is there just an onslaught of new issues (along with the holdovers, like the ubiquitous MAUP)? Writing this blog post, I realize my work has been constantly plagued by issues of scale, and yet acknowledging them has never been required when handing in an assignment (and therefore I have never really considered them in this depth or variety before). This is something I have to consider in my analysis of methods for my research project, so thank you (and I am interested in learning more on Monday)!

Schuurman (2006) – Critical GIS

Monday, November 20th, 2017

Schuurman discusses the shifting presence of Critical GIS in Geographic Information Science (GISc) and its evolving role in the development of the field. Among other obstacles, Schuurman identifies formalisation—the process by which concepts are translated into forms that are readable in a digital environment—as a key challenge to critical theoretical work gaining further traction in GISc.  

Critical GIS challenges the idea that information about a spatial object, system or process can be made ‘knowable’ in an objective sense; our epistemological lens always filters our view, and there is not necessarily a singular objective truth to be uncovered. Schuurman argues that this type of analysis, applied to GIS, has been provided to some extent by ontological GISc research. By contrast, this body of research presumes a limit to the understanding of a system, emphasising plurality and individuality of experience (e.g. the multiple perspectives represented in PPGIS research).

That said, previous analyses have fallen short in adequately acknowledging and addressing power relations, demographic inequalities, social control and marginalisation as part of the general design process in GIS. In particular, the translation between cognitive and database representations of reality requires explicit treatment in following research. These observations become increasingly relevant in the context of the rising integration of digital technologies in everyday life.

The paper raises the question of how Critical GIS can effect change in the discipline and its practice. Going beyond external criticism, critiques must reason from within the discipline itself. I would also ask how Critical GIS might gain greater traction outside of academic settings (e.g. in influencing the industrial practice of GISc)?
-slumley

Formalization Matters: Critical GIS and Ontology Research (Schuurman, 2006)

Monday, November 20th, 2017

This paper examines why critical GIS is necessary for formalizing representation within GIScience; in other words, the author emphasizes ontology research in GIScience. This is a fundamental and critical question when thinking about ontology. For critical GIS, which concerns the influence of GIS on people and society, having standard theories of geospatial representation is necessary. The obvious reason is that technologies are no longer value-neutral; they become embedded in political life. Ontology research builds the theoretical background for fitting GIS to society and for better tackling problems involving humans, and it contributes to the claim that GIScience is a scientific discipline. Yet, as the author notes in the paper, critical GIS has not affected the fundamental disciplines involved in GIScience. This may be because these disciplines have their own ontology systems, or because their purpose does not concern people and society. I believe ontology research in GIScience can draw on the relevant disciplines that already have mature ontological research.

However, I am concerned that the complexity of current geospatial information may hinder ontology research. GIScientists are dealing with information that cannot be fully understood, for example patterns embedded in big spatial data. Even when we have algorithms to discover them, we sometimes cannot provide explanations. Investigating causal relationships is more difficult than applying algorithms. That means we can produce cognitive models, conceptual representations, data representations, and spatial concepts for such patterns, but we cannot explain them. This is problematic, because ontology research should be scientific, with clear reasoning.

MacEachren – Visualizing Geospatial Information Uncertainty

Sunday, November 19th, 2017

Uncertainty relates well to the topic of critical GIS in the sense that it challenges the foundation of the process. However, it differs in being specific about what is deemed uncertain and what is being challenged. In this case we talk about error, accuracy and precision in results and data, while assuming that the core concepts of error, accuracy, and precision are well defined to begin with.

The article does a great job of relating uncertainty to a wide range of GIS examples and explaining proper procedures to deal with it. One point I would’ve liked to see expanded upon is the mathematical ways in which we can compute and account for uncertainty. The paper mentions that calculating uncertainty within an expression is preferable to fabricating assumptions and relying on stereotypes. However, there seem to be problems with this as well: how are these values derived to begin with? Are they estimated? Would estimations not also have their own range of uncertainty? Further, at what point are we certain about any of the data involved at any given stage of the process, given that assumptions and simplifications are made throughout the entire system, from data collection to final product? Effectively, it can be a tricky concept to wrap your head around when it can be applied at so many stages of the process without a clear idea of how these uncertainties may be translated and transformed between steps.
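
On the question of combining uncertainties mathematically across processing steps, one common approach is Monte Carlo propagation; the sketch below uses invented inputs and error magnitudes purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 10_000

# Assumed (made-up) input uncertainties: a measured area and a measured density,
# each carrying its own standard error from data collection.
area_km2 = rng.normal(loc=12.0, scale=0.8, size=n_sim)           # e.g. digitizing error
density_per_km2 = rng.normal(loc=150.0, scale=20.0, size=n_sim)  # e.g. survey error

# The "analysis" is just a chain of operations applied to the uncertain inputs.
population = area_km2 * density_per_km2

print(f"estimate: {population.mean():.0f}")
print(f"95% interval: {np.percentile(population, 2.5):.0f} "
      f"to {np.percentile(population, 97.5):.0f}")

# The same idea scales to longer processing chains: every intermediate product
# inherits a distribution, so uncertainty is carried through the workflow rather
# than assumed away at the final map.
```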

Visualizing Geospatial Information Uncertainty (MacEachren et al., 2005)

Sunday, November 19th, 2017

This paper reviews studies on the conceptualization and representation of geographic information uncertainty, as well as its influence on the decision-making process.

Several typologies are reviewed and the author proposes a more comprehensive one. However, the explanation of each component in the typology is not entirely clear. For example, when explaining “interrelatedness”, the author uses an example about proving whether a story is authentic. I don’t think this is appropriate, and it makes me confuse this component with “lineage” even though I know they are different. Besides, the author mentions that uncertainty is related to data quality and reliability; more explicit statements distinguishing them would help readers better understand uncertainty.

There is an interesting question raised in the paper: whether the representation of uncertainty will itself create new uncertainty. For me, the answer is yes. Representing uncertainty through symbols is itself a process of abstraction, so some information is lost and new uncertainty arises. It is worth noting that previous studies have shown that some symbolizations of uncertainty can lead to better decision-making, but they did not explain the theory behind the use of these symbols, and there may be no theoretical support at all. GIScience involves interdisciplinary studies, and the symbols proposed before cannot apply to all situations. How to choose appropriate symbols to represent uncertainty is important; therefore, we should have theoretical support for this.

For decision-making, different studies reach different conclusions about the helpfulness of including uncertainty: it may or may not lead to better decisions. I would argue that being more informed is not necessarily better. Some problems are complex enough that including uncertainty will disturb the judgement of decision-makers; they do not always wish to know everything.


Thoughts on MacEachren (2005)

Sunday, November 19th, 2017

This paper discusses the ways in which uncertainty may best be quantified and then presented to decision-makers. As shown in one of the exercises described in the paper, in which students were tasked with choosing new locations for certain parks and airports, those exposed to the best uncertainty visualization methods typically made the best-informed decisions. MacEachren presents a matrix in which various forms of uncertainty are laid out, ranging from the precision of the data to the completeness of the method. The study of uncertainty seems to rely entirely on academic honesty, and represents in very clear ways what is missing from a given study. The issues with this kind of work emerge when communication between academia and decision-makers takes place. Oftentimes, uncertainty can be mistaken for a case of faulty data, and if it is not presented adequately, this can lead to severe miscommunication.
A major problem that MacEachren identifies is how to represent the many different forms of uncertainty. The topic of geovisualization came to mind: since geovisualization is concerned with the exploration and manipulation of data, perhaps it is through this particular lens that we can apply the many dimensions of uncertainty in a geographic sense. As with many challenges that come with having non-academics engage with this data, proper interfaces need to be established, in both a technological and a human sense. So while issues with quantifying uncertainty remain at the forefront of this particular paper, I sense that these may have implications for the way uncertainty is subjectively perceived.
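
One concrete geovisualization tactic for this representation problem, sketched below with invented data, is to let colour carry the mapped value while transparency carries its certainty, so the reader literally sees less of what is less certain:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Invented gridded estimate plus an accompanying uncertainty surface
# (0 = fully certain, 1 = very uncertain).
value = rng.random((20, 20))
uncertainty = rng.random((20, 20))

# Map value to colour, then fade each cell by its certainty (value-by-alpha idea).
rgba = plt.cm.viridis(value)      # (20, 20, 4) array of RGBA colours
rgba[..., 3] = 1.0 - uncertainty  # alpha channel encodes certainty

fig, ax = plt.subplots()
ax.imshow(rgba)
ax.set_title("Estimate (colour) faded by uncertainty (transparency)")
plt.show()
```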

Thoughts on Schuurman (2006)

Sunday, November 19th, 2017

I’m unfamiliar with the field of critical GIS, but the divide between “real-worlders” and their critics is apparent in fields beyond GIScience. If there are so many voices of complaint about how knowledge representation reinforces power relations, those critics have to join forces with those developing ontologies and epistemologies. Ten years have passed since Schuurman’s article was written and I’m curious to know how an analysis of the GIS and LNCS literature would be different since 2004. I would also be curious to know if inclusion of marginalized voices has been evident in recent epistemological development, according to Schuurman.

Schuurman frequently comes back to the notion that formalization and GIS data models are highly abstracted versions of reality. She doesn’t make a case for making GIS output any less abstracted, or changing how geographic data is visualized. I agree with her solution, which seems to be much more meta. Developing alternative or more complex ontologies does not align with a linear view of progress in GIScience, but the need for inclusivity in our representation and interpretation of geographic knowledge is central to the expansion of access to GIS knowledge and technology across cultures.

It was interesting learning about the history of critical GIS as a sub-discipline. Schuurman perceives a declining influence of critical GIScience, partially due to the conceptual nature of the work. It appears that critique of GIS is happening across the entire field of GIScience, and the rising field of ontological/epistemological research is incorporating many of the tenets of traditional critical GIS in their reshaping of geographic knowledge representation. Schuurman’s title is very fitting, as she seems to be embracing the shift from conceptual critical GIS to a formalized (and more impactful) approach.

Roth (2009) Uncertainty

Sunday, November 19th, 2017

Uncertainty is a topic I’ve always wondered about in GIS, especially when classifying an area from a raster grid that inherently has error when each pixel spans 30 m on a side, as in most LANDSAT images and DEMs. I was intrigued to find out that uncertainty is an academic subject in the GIScience literature, as well as by the many issues one runs into when looking at uncertainty through a GIS lens. I find Roth’s overview and critique of several typologies of uncertainty essential to the paper, and although there is a lot to draw from it, one can pick and choose aspects of these definitions to try to grasp this convoluted (and ironically uncertain) topic.

I find Roth grasps the importance of uncertainty and how it is conveyed to the map interpreter through the qualitative research done with his focus groups. Comments like “You just have to assume the line you draw on the map is a hard and fast line … you’ve got to put the line somewhere” and “If you put uncertainty on a map, it would probably draw undue attention” really struck me. This shows the disconnect between the map maker (who plays the Columbus role in a sense, stating where things are) and the map reader, who blindly trusts that these maps are accurate, despite not fully understanding just how much of this authoritative map was made by the map maker simply “drawing a line somewhere” for convenience. I find this qualitative approach of running focus groups and coding their answers (very much a qualitative GIS technique) very interesting, and very much in sync with the author’s comments on data quality and uncertainty, as the answers can be interpreted in several ways.

All in all, I feel papers like this should be more prevalent, or at least have aspects transfer into different realms of GIScience as it’s paramount to understand when creating data that others will use in decision making. I feel that even if it may draw unwanted attention to your uncertainty and influence how decision makers view it, it should be noted that maps lie, as there’s often a blind trust associated with where things are when presented to people (both from a GIScience background and not).

-MercatorGator

O’Sullivan – Geographical Information Science: Critical GIS

Saturday, November 18th, 2017

I found this paper quite interesting, and it gave an adequate summary of the prime factors constituting the strands of debate in Critical GIS. I found the discussion of the acceptance of critical theory in the GIScience community quite surprising, with the seemingly flippant responses to the Ground Truth collection presented in the article. The book was published in the mid-90s, which coincides with a period in which original GIS papers and tools were being developed amid a triumphalist fanfare over the new technology.
The subsequent discussions of PGIS, qualitative GIS, and privacy seem to have found a more nuanced acceptance within the GIScience community. I assume there was enough time for the community to absorb these critiques, but it could also have to do with the pace of technology at the time. On one hand, the ignored criticisms may have had tangible effects by the early 2000s, which could have made the critiques feel more concrete. There is also the fact that these newer criticisms engaged directly with the technology. It seems that conversations about qualitative and public-driven data collection could only properly have taken place in a context where the technology allowed for it. While Kwan lamented the lack of social perspectives in GIS in 2004, my own perspective is that these conversations are more visible today. Conversations critical of GIS practices have been commonplace within this class, which may point toward Schuurman and Kwan’s remark of a ‘new era of socially and politically engaged GIScience’.

Thoughts on Roth (2009)

Saturday, November 18th, 2017

The concept of uncertainty rarely occurs to me when looking at a map. In his article, Roth frequently refers to the visual representation of geographic information uncertainty, but doesn’t explain it in detail or give examples. He describes the different typologies of uncertainty categories from the literature and makes a case for MacEachren’s typology. His argument is based on the inclusion of all uncertainties “influential [to] decision-making,” interoperability, and quantifiability. Roth fails to explain why previous typologies were lacking in any of these respects, and it seems that MacEachren’s list is simply broader.

Roth doesn’t give any methods for representing these uncertainties visually. The results from the focus group seemed to conclude that the largest gap in the reality-to-decision flow is representation. I found interesting the distinction made by participants between a textual disclaimer for uncertainty and a cartographic representation of uncertainty. I agree that a disclaimer allows the viewer to absorb the information presented with a grain of salt. I think that most users can understand the concept of uncertainty (even in a geographic context), but representation is the more apparent barrier.

Participants in the focus group also seemed to dismiss geographic uncertainty as something that should be disregarded. If this attitude is as common among decision-makers as the article supposes it to be, therein lies the problem. If it can be proven that decisions made acknowledging geographic uncertainty rather than disregarding it are “better,” then decision-makers must be made aware of the discrepancy. Although Leitner and Buttenfield (2000) seem to show that uncertainty representations expedited the decision-making process, the decision-makers involved in Roth’s focus group were not of the same mind, claiming that knowledge of uncertainty decreased their confidence in their decisions. I think more research and education needs to take place among decision-makers, evaluating the validity of informed and uninformed decisions.

Thoughts on “Geographical information science: critical GIS” (O’Sullivan 2006)

Saturday, November 18th, 2017

We have discussed the importance of terminology in previous weeks, and O’Sullivan hints at the elusive nature of capturing a phenomenon when he states the topic of his paper as the “curious beast known at least for now as ‘critical GIS’” (page 782). He further states that there is little sign of a groundswell of critical human geographers wholeheartedly embracing GIS as a tool of their trade. I think this has changed.

In comparing different critiques of GIS, he states that the more successful examples of critically informed GIS are those where researchers informed by social theory have been willing to engage with the technology, rather than criticize from the outside. I agree with this, and think it makes sense that some knowledge of the procedures of GIS and how they work is required to illustrate how they can be manipulated to produce subjective results.

On page 784, O’Sullivan states that “Criticism of the technology is superficial,” but neglects to mention what would constitute more profound and constructive criticism. He does not explicate, but refers to Ground Truth and the important contributions made in that book pertaining to ethical dilemmas and ambiguities within GIS. It is interesting to note that much of the “brokering” that went on in the early days, which allowed for reconciliation between social theorists and the GIS community, came from institutions and “top-down” organizing, as opposed to more grassroots discussion on, say, discussion boards or online communities and groups.

O’Sullivan notes that “PPGIS is not a panacea, and must not undermine the robust debate on the political economy of GIS, its epistemology, and the philosophy and practice of GIScience”, and I very much agree with this statement. Although the increased use of PGIS addresses one of the foremost critiques of the applicability of GIS to grassroots communities and movements, it is not a simple goal that can be achieved and considered “solved.” Rather, the increased involvement of novices in GIS and spatial decision-making processes raises a host of new issues for the field of Critical GIS.

-FutureSpock