A FRAMEWORK FOR TEMPORAL GEOGRAPHIC INFORMATION (Langran & Chrisman, 1988)

November 27th, 2017

In this paper, the authors discuss the components of cartographic time and describe three methods of conceptualizing geographic temporality. The discussion is heavily based on traditional relational database perspectives (i.e., databases lacking temporal considerations). Although this facilitates studying geographic temporality, the limitations of relying on traditional methods are obvious. For example, the consistency of temporally changeable data is hard to guarantee even in a database with a space-time composite. Moreover, the paper feels dated for current research, since the traditional database perspective it builds on is itself dated.

 

According to the authors, the three important components of cartographic time are the difference between world time and database time, the relationship between version and state, and the interrelationships between object versions. However, world time and database time are not very different nowadays in real-time data projects. The high rate of data collection also blurs the notion of a version; versions are hardly perceived as distinct when states are captured in real time. The interrelationships between object versions have become more implicit: huge amounts of sequential information are collected for each object, and the interrelationships are not obvious until we start mining them. Besides, the collected data are not always stored in a database (i.e., the form of the datasets may not satisfy the paradigms). Therefore, applying the traditional methods to investigate geographic temporality is not the best choice in most situations. New algorithms and models play important roles in current temporal geography.
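To make the world time vs. database time distinction concrete for myself, here is a minimal sketch (my own hypothetical example, not from the paper) of a bitemporal record; the gap between the two clocks is simply the update lag, which real-time collection shrinks towards zero:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ParcelVersion:
    parcel_id: str
    land_use: str
    valid_from: datetime            # world time: when the change happened on the ground
    valid_to: Optional[datetime]    # None = still current in the world
    recorded_at: datetime           # database time: when the change entered the database

# Hypothetical record: a batch update logged months after the real-world change.
v = ParcelVersion("P-42", "residential",
                  valid_from=datetime(2017, 3, 1),
                  valid_to=None,
                  recorded_at=datetime(2017, 6, 15))
print(v.recorded_at - v.valid_from)   # 106 days of lag; a streaming feed would make this ~0
```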

Langran & Chrisman: Temporal Geographic Information

November 27th, 2017

Temporal GIS introduces the concept of temporal topology, coupled with the more common spatial topology, to allow us to better understand the relevance of time in cartography. Effectively, while time is a constant, infinite progression, maps and cartography can only portray certain glimpses of space along a timeline; whether dynamic or static, maps provide a snapshot or window of time and space. If we consider a map that displays location-based services applications, we would see that this information only exists for a relatively short period on the geologic timescale. Even prior to modern cellphone use, we would see sharp contrasts in the abundance of these services. With this temporal information, it is easy to situate individuals not only in space through LBS but also in time, which may be seen as even more invasive to their privacy.

The article does a great job of explaining the core methods by which we apply temporal analysis in cartography. It does not, however, go into much detail on the limitations of applying these methods. I'm curious about what problems or biases could potentially arise when adding time to cartography. I believe that the pros would most likely outweigh the cons in this case, but that's my opinion after only having read this article on temporal GIS. One issue for using temporal GIS could be the increased volume of data resulting from the desire and/or need to use temporal data.

 

Marceau – 1999 – Scale Issue

November 27th, 2017

Marceau's article provides a look at how geographers and other social scientists use and understand relative and absolute spatial scale. For geographers, scale may be a better understood concept than it is for others, but that does not necessarily mean that scale is more important to a geographer's work than to an engineer's, for example. Scale is crucial to understanding how processes and effects occur differently at different scales – a project needs to take scale into consideration to address its questions effectively by tailoring the work to a certain extent. In the case of a study on geosurveillance & privacy, scale is important for understanding the area you are evaluating and ensuring that all areas within the extent are relevant.

The article highlights well the issues that should be considered in terms of spatial scale; however, it frames them all as 'issues' or 'problems'. I would've liked to see the author comment on why these are seen as inherently bad things. As far as I'm concerned, these concepts are good things; they allow us to better understand the space we are working in while providing a level of focus to projects. Although they are hurdles to deal with, handling them properly may ensure that the results minimize the uncertainties and redundancies that would otherwise occur.

The article is 18 years old but the concepts of scale are just as important to consider today even with the web 2.0 platform. If anything, it has become more important to deal with these issues since the variety of data has increased through big data and may result in MAUP issues.

Scale in a Digital Geographic World (Goodchild & Proctor, 1997)

November 27th, 2017

This paper discusses the problem of characterizing the level of geographic detail in digital form. The traditional representative fraction seems useful for this but has many problems. Among these problems, I think assessing the fitness of datasets for a particular use is the most critical in practice. The authors argue that it is necessary to identify a metric for the level of geographic detail, but there is no perfect one that can handle all the issues raised by the "legacy problem". For example, for analyzing big data, traditional methods may be replaced by scale-free methods for segmentation. Technologies evolve much faster than they did 20 years ago, so the "legacy problem" will become more severe and more frequent. Therefore, another requirement for the metric is sustainability: the metric itself should be readily updated to adapt to the new geographic environment.

In moving away from paper maps, having metaphors that correspond to the proposed metric is necessary, but that is harder than constructing the metric itself. To satisfy the requirement of being understood by a user who lacks knowledge of cartographic conventions, the metaphors should be strictly straightforward. However, there is no rule guiding the design of new metaphors. Following tradition is usually more efficient in practice, although it inherits the limits of paper maps. Metaphors for the digital geographic world cannot be separated from its metric, but completely novel metaphors are not acceptable at this moment. In the transition from paper maps to digital maps, we always need to make trade-offs. Perhaps by the time the transition is complete, there will be new technologies to adapt to; we will always be in transition.

A Framework for Temporal Geographic Information, Langran and Chrisman (1988)

November 27th, 2017

Langran and Chrisman (1988) discuss the antecedents of temporal GIS, its core concepts, and a number of ways in which temporal geographic information is conceptualized. The map/state analogy was helpful for my understanding of the spatial and temporal parallels. I suppose the stage concept of time is fairly intuitive, but I appreciated having its connection to maps explained explicitly. The authors seem comfortable with the convention of representing spatial boundaries as distinct lines, but I can imagine how similar concerns about vagueness and ambiguity might arise in temporal data as well.

The authors did a good job of presenting the advantages and limitations of geographic temporality concepts. At the beginning they mentioned how the "strong allegiance of digital maps to their analog roots" was inadequate for spatiotemporal analysis, but I'll admit that I didn't think the two concepts they presented really subverted this allegiance very much. Still, maybe I'm spoiled by the ways people are re-imagining maps on the geoweb–an unfair comparison for a 1988 paper.

It was interesting to get a glimpse of historical temporal GIS research. It's clear that one of the biggest concerns in the implementation of a temporal GIS framework is temporal resolution. If I could hazard a guess, I would think that such concerns might evolve from interpolating between temporally distant observations into the question of handling large amounts of data collected in rapid succession. With the advent of big data, namely by way of social media, I can imagine how the application of temporal GIS has proliferated, and will continue to, since the time the article was published.

Thoughts on Geovisualization of Human Activity… (Kwan 2004)

November 26th, 2017

The immediate discussion of the historical antecedents for temporal GIS by Swedish geographers uses the 24-hour day as a "sequence of temporal events", but I wonder why this unit of measurement was chosen as opposed to 48 hours or a week, to illustrate the periodicity of temporal events that may not be captured at the daily scale. It is interesting to note the gendered differences that are made visible by studies of women's and men's spatio-temporal activities. As the authors note, "This perspective has been particularly fruitful for understanding women's everyday lives because it helps to identify the restrictive effect of space-time constraints on their activity choice…." I am curious about how much additional data researchers must collect to formulate hypotheses about why women follow certain paths to work or are typically present at certain locations at certain times. I am also curious about how this process differs when trying to explain the spatiotemporal patterns observed in men's travel behaviour.

One of the primary challenges identified by the authors is the lack of fine-grained individual data relating to people's mobility in urban environments, such as in transportation systems or their daily commutes. This paper was written in 2004, and now, with the rapid increase in streaming data, GPS from mobile devices, and open big data sets for most large cities, this is less of a concern. The big challenge these days is probably parsing the sheer quantity of data with appropriate tools and hypotheses to identify key trends and gain usable insights about residents' travel behaviour.

The methodology used by the researchers for their study of Portland relied on self-reported behaviour in the form of a two-day travel study. There are many reasons why the reported data might be unreliable or unusable, especially given the fallibility of time estimation and the tendency to under- or over-report travel times based on mode of transport, mood, memory of the event, etc. That being said, this is probably the most ethical mode of data collection, since it asks for explicit consent. I would be interested to know how the researchers cross-referenced the survey data with their information about the Portland Metropolitan Region, as well as the structure of the survey.

-FutureSpock

 

 

Goodchild and Proctor (1997) – Scale in digital geography

November 26th, 2017

As might be expected, Goodchild and Proctor provide an insightful and lucid evaluation of how conceptions of scale should translate from paper to digital maps, and their analysis remains pertinent in the face of two decades of rapid digital cartographic development. They argue that the representative fraction, as traditionally used by cartographers to represent scale, is outdated for use in digital platforms.

Firstly, I think the representative fraction struggles on a simpler level. In absolute terms, we'd probably find it hard to distinguish 250,000 from 2,500,000, so the large numbers involved with representative fractions may make them less preferable than alternatives such as graphical scales, which visually show the relationship between map distance and real-world distance (as used in Google Maps).
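As a rough back-of-the-envelope sketch (assuming a 96 dpi screen; the numbers are mine, not the authors'), converting those two representative fractions into an on-screen bar makes the difference obvious in a way the raw denominators are not:

```python
# Assumed: a 96 dpi display; purely illustrative numbers.
DPI = 96
METRES_PER_INCH = 0.0254

def scale_bar_pixels(rf_denominator: int, ground_metres: float = 1000) -> float:
    """On-screen length (in pixels) of a scale bar covering `ground_metres`."""
    metres_per_pixel = rf_denominator * METRES_PER_INCH / DPI
    return ground_metres / metres_per_pixel

print(round(scale_bar_pixels(250_000)))    # ~15 px for a 1 km bar
print(round(scale_bar_pixels(2_500_000)))  # ~2 px -- the tenfold gap is visible at a glance
```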

It is interesting to revisit the problems outlined in the paper that have been faced by web map makers. A significant advance in the navigation of scale in digital environments has been in the development of tiled web maps. By replacing a single map image with a set of constituent raster or vector ‘tiles’ loaded by zooming and panning through a user interface, this method facilitates levels of detail that vary with zoom level and position in the map. The appearance and disappearance of certain features (e.g. country names vs town names) has formed another metaphor for scale recognition.
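The core of that tiling scheme is compact enough to sketch; this is the standard Web Mercator ('slippy map') tile indexing used by most tiled web maps, not anything specific to Goodchild and Proctor:

```python
import math

def lonlat_to_tile(lon_deg: float, lat_deg: float, zoom: int):
    """Return the (x, y) index of the Web Mercator tile containing a point."""
    n = 2 ** zoom                        # number of tiles along each axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# Each extra zoom level quadruples the tile count, so the level of detail can
# grow smoothly as the user zooms, replacing a single fixed-scale map image.
print(lonlat_to_tile(-73.58, 45.50, 12))   # the tile covering Montreal at zoom 12
```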

I'm still finding it hard to reconcile the idea of scale as used in everyday language (to represent the range of spatial extents that a phenomenon operates within) with its scientific/GISc definition (as a broader metric for the level of geographic detail, as well as extent). Positional accuracy, resolution, granularity, etc. are fundamentally important across disciplines, but do they correlate with what people think of when they talk about scale? (sorry Jin)
-slumley

Kwan & Lee: Geoviz of Human Activity Patterns Using 3D GIS

November 26th, 2017

 

Having given my talk on VGI and the implications of real-time tracking of individuals in space-time, I found Kwan & Lee's (2004) use of temporal GIS quite refreshing and a very unique and insightful study. In overtly using temporal GIS with such a large study group (7,090 households), the data collected go from quantitative x, y & timestamp data to very nuanced qualitative data when paired with contextual information and compared against different study groups. I found the comparison between men's/women's and minority/Caucasian everyday paths fascinating, and can see how it could be used through a critical GIS lens to further analyse why these trends occur, and to empower these underrepresented groups in the realm of GIS.

I also found the use of 3D visualization very interesting (though to be expected) as you move from a traditionally planar form of GIS (x and y coordinates) to adding a third, temporal attribute on the z axis. The paper then delves into the intricacies of finding appropriate ways to display what is essentially a new form of GIS in an effective visualization, which poses a whole new range of issues vis-à-vis our geovisualization talk by Sam. However, this extra z-attribute of time can be used for many new analyses, using kernel functions to generate density maps that standardize comparisons of movement between individuals. I find this collection of movement data, and the analysis behind it, amazing, though also very scary when paired with the knowledge that such analysis could be (and probably is) performed on a daily basis for reasons that are neither critical nor academic, but rather targeted advertising and defense, in a form of coerced VGI.
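A tiny sketch of the underlying idea (with made-up coordinates, not Kwan & Lee's Portland data): once time sits on the z axis, a person's day becomes a polyline in three dimensions, and staying in one place shows up as a vertical segment:

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the "3d" projection)

# (x, y, hour-of-day) fixes for one hypothetical individual
track = [(0.0, 0.0, 7.5),    # home, 7:30 am
         (2.1, 0.4, 8.0),    # commuting
         (2.3, 3.0, 9.0),    # arrives at work
         (2.3, 3.0, 17.0),   # vertical segment = stationary activity
         (0.0, 0.0, 18.0)]   # back home

xs, ys, ts = zip(*track)
ax = plt.figure().add_subplot(projection="3d")
ax.plot(xs, ys, ts)
ax.set_xlabel("x (km)")
ax.set_ylabel("y (km)")
ax.set_zlabel("time (hour of day)")
plt.show()
```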

All in all, however, I find that temporal GIS could be its own field in the creation of highly detailed datasets that can reveal much more than just location, and it could aid in the creation of many tools and make for very rich data.

-MercatorGator

Marceau (1999) – The scale issue in social and natural sciences

November 25th, 2017

This article is very interesting, and addresses what I think is a major issue within GIScience – the scale issue. Marceau (1999) lays out the "scale problem" (2) and provides a thorough review of solutions (and their limitations) from the literature. I also enjoyed the second-to-last paragraph of the paper, which suggests that the "methodological developments are certainly contributing to the emergence of a new paradigm: a science of scale" (12). While reading the paper, I wondered how this fits into the tool/science debate, and though I would tend to think of it as an important component within GIScience, I might not have considered "the science of scale" on its own, so it's nice to see how the author clearly feels.

This issue seems omnipresent throughout geography (human and physical), and I know that I've had to deal with it in my own work. For example, my data collection will consist of flying a UAV at a specific height (in order to achieve maximum photo resolution), thereby taking photos at specific scales. I will then create a model to make maps at specific scales. Beyond this, the maps I make will hopefully tell me things about the morphology of the landscape: will this be true only of Eureka Sound, or will it be generalizable to all of Ellesmere Island, or even the entire Canadian or international High Arctic? I do not find that any of the methods described in this paper provide a clear way to give a definitive answer on cross-scale inferences, which is to be expected. I think that as researchers, we must do our best to limit our inferences to the analyzed scales and resist temptations to overgeneralize our results for increased importance. I am curious how things have changed in the nearly 20 years since this article was published, what strides have been made, and what remains to be done.

Scale (Goodchild & Proctor 1997)

November 25th, 2017

Prior to reading this paper, I knew that scale was a key concept in geography, and one of much debate. After reading Goodchild & Proctor (1997), however, I feel this was an understatement. The authors cover a much-needed recap of traditional cartography, including the initial concreteness of scale and the common metrics used (i.e. buildings aren't typically shown at a 1:25,000 scale). I found this part especially interesting as it's something I never encountered in my GIS/geography classes, even though these are key concepts in cartography. This becomes especially interesting when paired with their allusion to current-day GIS acting as a visual representation of a large database (like OSM); it made me think of how OSM must have considered these concepts in creating their online mapping platform, so as to only show points of interest at certain zoom levels versus streets. The paper then goes on to explain how such concepts are needed in modern-day digital maps in the form of minimum mapping units (MMUs), and how issues like raster resolution begin to define scale as the smallest unit of measurement.

Another key point of the paper was the use of metaphors to describe how scale comes into play in traditional versus modern maps, and how it is often redefined (such as in fields like geostatistics). I feel that the term scale should be kept as simple as possible to avoid running into issues like the modifiable areal unit problem and questions about appropriateness of scale. Scale will always be an important part of GIScience, as it's inherently associated with distance and visualizing geographic space, and I feel that extensive research into issues of scale, like this paper, will be needed in the future as mapping moves further and further from its traditional cartographic roots into the new realms of GIS like VGI, location-based services, and augmented reality.

-MercatorGator

Kwan and Lee (2004) – Time geography in 3D GIS

November 25th, 2017

In this article, Kwan and Lee (2004) explore 3D visualisation methods for human movement data. In the language of time-geography, which borrows from early 20th-century physics, space-time paths describe movements as sets of space-time coordinates which (if only two spatial dimensions are considered) can be represented along three spatial axes. These concepts have become a fundamental part of recent developments in navigation GIS and other GIScience fields. For instance, Google Maps considers the time at which a journey is planned to more accurately estimate its duration.

While their figures represent a neat set of 3D geovisualisation examples, it might have been worthwhile to have discussed some of the associated challenges and limitations (e.g. obstructed view of certain parts of the data, the potential for misinterpretation when represented on a 2D page, user information overload, the necessity for interactivity etc.). Further, how does 3D visualisation compare with other representations of spacetime paths, such as animation?

More broadly, I didn't fully understand the claim that time-geography (as conceived in the 1970s) was new in describing an individual's activities as a sequence occurring in geographic space (i.e. a space-time trajectory). Time hasn't been entirely ignored in geographic contexts in the past (e.g. Minard's map), nor has it been ignored in other disciplines. So does time-geography purely emphasise the importance of the time dimension in GIS research/software, or does it provide a set of methods and tools that enables its integration into the geographic discipline? Is time-geography done implicitly when researchers include a time dimension in their analyses, or does it represent a distinct approach?
-slumley

Thoughts on Langran and Chrisman

November 24th, 2017

I found this conversation about temporal GIS to be a particularly interesting introduction to the topic. This notion adds an extra dimension to my shifting idea of GIS, from something primarily based on the representation of maps to a tool for retaining and displaying data. Sinton (1978) notes that geographic data are based on theme, location and time, so it is interesting that all of these notions can be reduced to digital quantification.
The authors do offer a hint of philosophical musing, but don't delve deep into it. The simple notion of linear time was enough to spark the discussion of three separate ways of displaying temporal change. Their idea of time is not marked exclusively by linearity, though; they fold the concept into a topological understanding, based around the temporal relationships that events may have to one another. The three methods discussed seemed to meld representations of temporality with spatiality more closely with each new method. The melding of the two dimensions may lead to an interesting discussion, as they are inseparable in GIS.
The authors choose not to delve into the topic of visualization, leaving it as ‘a problem to leave to future discussions’. I am doing my project on movement, so this notion of graphic representation felt like a clear framework to deal with the kind of data I’m handling.

Thoughts on Goodchild and Proctor – Scale in a Digital Geographic World

November 24th, 2017

Goodchild begins with a notion that initially shocked me: the metric scale is supposedly unsuitable for digital cartography. I had never considered this before, but he nonetheless presents a convincing explanation of his reasoning. Confusion over what is meant by scale is widespread, since it can signify either the spatial extent of the map or the level of granularity that the data represent. There have also been issues over what kind of information is appropriate to represent along this axis of scalability. Goodchild proposes a new dimension of scales that is more appropriate for the digital environment.
He offers two different bases of scale.
The first is the object-model scale, which is based on the choice of objects that the GIScientist wishes to study; typically, the smallest object studied would figure clearly on the map. The second is the field-model scale, which is simply the size of the field's pixels.
While I read the description of these last two models, I felt as if I’d reached the climax of some detective novel. Obviously, the object model sounded very much like vector, and the field model sounded very much like raster. I had never considered these two ways of data representation as a type of scale.
These two scales are most useful when handling data, but they are typically misunderstood by non-experts. The metric scale is still important for visualizing these data, which often appear purely as spatial extent when finalizing a map and making it legible for a general public.
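One way I can see to bridge the field-model scale back to the familiar representative fraction is the commonly cited rule of thumb (often attributed to Tobler) that the scale denominator is roughly 2,000 times the ground resolution in metres; this is a heuristic I'm adding for illustration, not something Goodchild proposes:

```python
# Heuristic only: scale denominator ~ 2000 x ground resolution in metres
# (a rule of thumb often attributed to Tobler, not taken from this paper).
def approx_scale_denominator(cell_size_m: float) -> int:
    return int(cell_size_m * 2000)

for cell in (0.5, 10, 30):   # e.g. drone imagery, Sentinel-2, Landsat cell sizes
    print(f"{cell} m cells  ->  roughly 1:{approx_scale_denominator(cell):,}")
```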
It is very interesting to see a paper creating a tangible link between traditional and digital cartographic models.

Langran and Chrisman (1988) – Temporal GIS

November 24th, 2017

I found the discussion on temporal geographic information by Langran and Chrisman (1988) interesting, as it teetered between being relevant and being dated. On the one hand, time is still a difficult thing to express and visualize, both attached and unattached to spatial data, but it is still extremely important to convey. On the other hand, I wonder how much has changed with advancements in interactive and online maps, which can very easily show different temporal layers one after the other, or move through time on command. Moreover, technological advances in surveillance, like UAVs used in police surveillance or traffic control, will create a wealth of spatio-temporal data greatly surpassing the kind described in the paper, which would require much more sophisticated processing and computing. I wonder how much of this discussion is embedded in geovisualization, and whether "temporal GIS" is a standalone subject or rather an important component of Geovis/Critical GIS/VGI/PPGIS/data mining/UAVs/etc…

I enjoyed the very beginning of the article, where the ‘nature’ of time is discussed (time is infinite and linear) and cartographers (GISers?) “can sidestep debates on what time is, and instead focus on how best to represent its effects” (2). I would argue that the way in which it/its effects are represented can, in fact, inform and serve as an interpretation of time. If a spatial map attempts to represent a “ground truth”, can’t a temporal map represent a “time truth”?

This is one of my favourite memes, with a quote on time from HBO's True Detective (with Matthew McConaughey and Woody Harrelson) – very meta.

 

[image: time-is-a-flat-circle]

Marceau (1999) and Scale

November 24th, 2017

Marceau's (1999) article does an excellent job of highlighting the significance of scale across both human and physical geography. This article made me think more deeply about the impacts of scale than I ever have had to before, which points to a significant gap in my education in geography. While I have been taught about the MAUP and various other impacts of scale, I feel that these issues have been addressed in isolation or as a mere sidebar to other concepts. As indicated by this article, scale is a fundamental spatial consideration (and often a problem) that should be thoroughly addressed for any research project that considers space. The fact that all entities, patterns, and processes in space are associated with a particular scale (the "scale-dependent effect") means that it cannot be ignored.

I was particularly interested in Marceau's discussion of how scale differs between absolute and relative space. The operational and clearly defined idea of scale in absolute space is addressed much more often than the more ill-defined concept of scale in relative space. I'll admit that, even after rereading the paragraph several times, I'm still not sure what Marceau means in defining relative scale as "the window through which the investigator chooses to view the world" (p4). If this definition were not explicitly linked to scale, I would take it to be referring to something more like investigator bias or an investigative lens. How is this "window" connected to space? I would have appreciated an example to further clarify this.

Scale Issues in Social and Natural Sciences, Marceau (1999)

November 23rd, 2017

Marceau (1999) describes the significance of and solutions to the issue of scale as it relates to social and natural sciences. The articulation of fundamental principles was helpful in demonstrating the importance of scale as a central question in GIS. It’s clear that the question is particularly important now as we continue to develop a more nuanced appreciation for how observed trends might vary across different scales of analysis.

The discussion of domain of scale and scale threshold stood out to me. I can imagine how differences in the patterns observed between scales would be helpful for organization and analysis. I’m curious about how these observed thresholds would manifest in reality. Are they distinct? Vagueness in our conceptualization of geographic features and phenomena seems to be so prevalent throughout the built and natural environment. I would think that these concepts would somehow shape our analysis of scale in some way that would favour vagueness in the spatial scale continuum. Still, it’s conceivable that sharp transitions could be revealed through the process of scaling unrelated to any vague spatial concepts. An example might’ve made the existence of scale thresholds more obvious to me.

It was an interesting point that an understanding of the implications of the Modifiable Areal Unit Problem took notably longer to develop in the natural science community–perhaps because GIScience as we know it now was only in its infancy? In any case, it's another reminder of how significantly spatial concepts can differ between geographies.

On Marceau (1999) and “The Scale Issue”

November 23rd, 2017

I really liked how in-depth this article went, reviewing developments in the study of scale that were outside of the author's department/field of study. It really emphasizes that this is an issue that applies to both physical & human geography (and others who study geographic space), so it's cool to see interdisciplinary efforts towards this. I think this article could have benefited from a visual flowchart or something sketching out how these methods would actually be applied, since it would take me some time to think through how this would all work on a raster grid or with polygons. Also, I think this article provided some framework for how to consider scale in a research project, such as by performing sensitivity analysis (p.7).

In 1999, when this was published, we didn't have the geoweb, and I think it would be super interesting to learn about how scale issues have been solved or exacerbated by these new developments. Are there issues in this work that have actually been "solved" by the geoweb, or is there just an onslaught of new issues (as well as the holdovers, like the ubiquitous MAUP)? Writing this blog post, I realize my work has been constantly plagued by issues of scale, and yet I've never been required to acknowledge them when handing in an assignment (and therefore have never really considered them in this depth/variety before). This is something I have to consider in my analysis of methods for my research project, so thank you (and I'm interested in learning more on Monday)!

On Kwan & Lee (2004) and the 3D visualization of space-time activity

November 22nd, 2017

This article was super interesting, as I find the topic of temporal GIS increasingly pressing in this day and age (and it was already challenging in the early 2000s).

The visualizations were really interesting, and it seems like they provided far more information, far faster, than analyzing 2D movement (with no time) would. Also, I thought it was incredible that the space-time aquarium (discussed as a prism based on the paths identified by Swedish geographers) was only conceptualized (or written down, I guess) in 1970 and then realized in the late 1990s with GIS (and also better computer graphical interfaces).

I thought it was interesting that Kwan & Lee mentioned that this was specifically used for vector data, so it would be interesting to find out more about the limitations of raster data (or perhaps, advances in temporal raster data analysis since 2004?) and the interoperability of raster and vector data. Further, the inclusion and acknowledgement of the lack of qualitative data was appreciated as well, as it provided a bit of a benchmark in the critical GIS history of the issues of qualitative data in something so quantitative. It seems like maybe this could have changed (or have become easier to visualize) in the last 13 years, so I’m looking forward to learning more about this. It would be cool to use this “aquarium” idea to click on individual lines and read a story/oral map of this person’s day, although that raises serious security concerns as the information (likely) describes day-to-day activities even if their name is not included publicly. Further, does the introduction of VR change this temporal GIS model? It would be super bizarre and super creepy (albeit more humanizing, maybe?) to do a VR walkthrough of somebody’s everyday life (although, we probably could get there with all the geo-info collected on us all the time with social media/smartphones!).

Schuurman (2006) – Critical GIS

November 20th, 2017

Schuurman discusses the shifting presence of Critical GIS in Geographic Information Science (GISc) and its evolving role in the development of the field. Among other obstacles, Schuurman identifies formalisation—the process by which concepts are translated into forms that are readable in a digital environment—as a key challenge to critical theoretical work gaining further traction in GISc.  

Critical GIS challenges the idea that information about a spatial object, system or process can be made 'knowable' in an objective sense; our epistemological lens always filters our view, and there is not necessarily a singular objective truth to be uncovered. Schuurman argues that this type of analysis, applied to GIS, has been provided to some extent by ontological GISc research. By contrast, this body of research presumes a limit to the understanding of a system, emphasising plurality and individuality of experience (e.g. the multiple perspectives represented in PPGIS research).

That said, previous analyses have fallen short in adequately acknowledging and addressing power relations, demographic inequalities, social control and marginalisation as part of the general design process in GIS. In particular, the translation between cognitive and database representations of reality requires explicit treatment in subsequent research. These observations become increasingly relevant in the context of the rising integration of digital technologies into everyday life.

The paper raises the question of how Critical GIS can effect change in discipline and practice. Going beyond external criticism, critiques must reason from within the discipline itself. I would ask how Critical GIS might also gain greater traction outside of academic settings (e.g. in influencing the industrial practice of GISc)?
-slumley

MacEachren et al (2005) – Visualising uncertainty

November 20th, 2017

MacEachren et al. evaluate a broad set of efforts made to conceptualise and convey uncertainty in geospatial information. Many real-world decisions are made on the basis of information that contains some degree of uncertainty, and to compound the matter, there are often multiple aspects of uncertainty that need to be factored into analysis. The balance between effectively conveying this complexity and overloading analysts with visual stimuli can support or detract from decision making, and constitutes a key persisting challenge explored in this paper.

A central discussion that I found interesting was that surrounding visual representations of uncertainty. Early researchers in the field strove to develop or unearth intuitive metaphors for visualisation. Aids such as 'fuzziness' and colour intensity could act to convey varying degrees of uncertainty present in a dataset, almost as an additional variable. In the context of our other topic this week, we could ask who these metaphors are designed to assist, and how the choice of metaphor could influence potential interpretations (e.g. for visual constructs like fuzziness and transparency, do different individuals perceive the same gradient scale?).
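As a quick sketch of how one of these metaphors might be wired up (invented data, and only one of many possible encodings), transparency can be driven directly by an uncertainty surface so that the least reliable cells literally fade out:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
values = rng.random((20, 20))         # the attribute being mapped
uncertainty = rng.random((20, 20))    # 0 = fully certain, 1 = completely uncertain

rgba = plt.cm.viridis(values)         # colour encodes the attribute itself
rgba[..., 3] = 1.0 - uncertainty      # alpha channel fades out the least reliable cells

plt.imshow(rgba)
plt.title("More transparent = less certain")
plt.show()
```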

The authors draw on the judgement and decision-making literature to distinguish expert decision makers, who adjust their beliefs according to statistical analyses of mathematically (or otherwise) defined uncertainties, from non-experts, who often misinterpret probabilities and rely on heuristics to make judgements. It might have been worth clarifying what was meant by experts in this instance (individuals knowledgeable about a field, or about probability and decision making?). The Tversky and Kahneman (1974) paper cited actually found that experts (per their own definition) are often similarly susceptible to probabilistic reasoning errors, so this polarity may be less distinct than suggested. Like some of the other papers in the geovisualisation literature, I found there was a degree of vagueness in who the visualisation was for (is it the 'analysts' mentioned in the introduction, or the lay-people cited in examples?).
-slumley