Posts Tagged ‘uncertainty’

Are We Certain that Uncertainty is the Problem?

Thursday, April 4th, 2013

Unwin’s 1995 paper on uncertainty in GIS was a solid overview of some of the issues with data representation that might fly under the radar, or be assumed without further comment, in day-to-day analysis. He discussed vector (or object) and raster (or field) data representations, and the error inherent in the formats themselves rather than in the data per se.

While the paper itself is clear and fairly thorough, I can’t help but question whether error and uncertainty are worth fretting over. Of course there is error, and there will always be error in a digital representation of a real-world phenomenon. Those who rely on GIS outputs, such as scientists and policy makers, are not oblivious to these representational flaws. For instance, raster data is constrained by resolution. It is foolhardy to assume that the land cover in every inch of a 30-meter grid cell is exactly uniform. It is equally wrong to suggest that a highly mobile phenomenon (like a flu outbreak) would remain stationary over the interval between sensing and mapping. There are ways around this, such as spatial and temporal interpolation algorithms and other spatial statistics, and I feel that estimates are often sufficient. If they aren’t, then perhaps the problem lies not with the GIS but with the data collection. Better data collection techniques, perhaps involving more remote sensing (physical geography) or closer fieldwork (social geography), would go far in lessening error and uncertainty.
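To make the interpolation point concrete, here is a minimal sketch of inverse-distance weighting, one of the simplest spatial interpolation schemes. The sample points and values are invented for illustration; real analyses would use measured data and a tuned power parameter.

```python
# Inverse-distance-weighted (IDW) interpolation: estimate a value at an
# unsampled location from nearby observations, weighting closer points
# more heavily. Sample coordinates and values here are illustrative.

def idw_estimate(samples, target, power=2):
    """samples: list of ((x, y), value) pairs; target: (x, y) location."""
    num, den = 0.0, 0.0
    for (x, y), value in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value  # target sits exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# Estimate a land-cover index at (1, 1) from three nearby observations
obs = [((0, 0), 10.0), ((2, 0), 20.0), ((0, 2), 30.0)]
print(idw_estimate(obs, (1, 1)))
```

Since all three samples happen to be equidistant from the target here, the estimate collapses to their mean; in general, the nearest observations dominate.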

With all of that said, I am not about to suggest that GIS is perfect. There is always room for growth and improvement. But, after all, the ultimate purpose of visualizing data is understanding and gaining a mental picture of what is happening in the real world. An error-free or completely “certain” data representation is not only impossible within human limitations, it is not particularly necessary.

– JMonterey

Visualizing Uncertainty: mis-addressed?

Wednesday, April 3rd, 2013

“Visualizing Geospatial Information Uncertainty…” by MacEachren et al. presents a good overall view of geospatial information uncertainty and how to visualize it. That said, many parts seemed to convey that all uncertainty must be defined in order to make correct decisions. In my realm of study, although it would be nice to eliminate uncertainty or place it in a category, simply recognizing that there is uncertainty is often definition enough to make informed decisions based on the observed trends. Furthermore, the authors seem to separate the different aspects of the environment, or factors, that lead to uncertainty and how it may be visualized. The many definitions and descriptions obscure what the factors producing uncertainty really are, and the resulting issues with visualization. The clarity of the visualized uncertainty they present contrasts greatly with the ambiguity of the definitions behind uncertainty and its representation. Still, the studies and the ways uncertainty can be visualized are a great help in decision making and in the recognition of further uncertainties.

One thing that would have helped in addressing uncertainty and its visualization would have been to integrate ideas and knowledge from the emerging field of ecological stoichiometry, which looks at uncertainty, the flow of nutrients and energy, and balance within ecosystems in order to answer and depict uncertainty. I believe that ecological stoichiometry would address many of the challenges in the identification, representation, and translation of uncertainty within GIS and would help to clarify many problems. This stoichiometric approach fits the multi-disciplinary approach to uncertainty visualization described in the article. However, since the article is limited to more generally understood approaches rather than more complex ones such as stoichiometry, do some of the proposed challenges in the recognition and visualization of uncertainty not exist? I would argue yes, but then again more challenges may arise in the depiction, understanding, and translation of uncertainty.


Fuzzy Spatial Data Queries and What It Means for Government

Thursday, March 15th, 2012

To be honest, I’d been at a loss for what to say differently about the second Goodchild et al. article that I hadn’t already said in regard to his book chapter. Then I began to think about Cyberinfrastructure’s post and his ideas about how uncertainty in spatial data queries can be determined by different types of scale (the query scale, the segmentation scale, the data analysis scale, the visualization scale) and how this problem can change with different levels of scale and differing levels of uncertainty. Yet boiling the article down to just these abstract concepts didn’t help me think about where this problem really matters – a matter Goodchild is concerned with when he talks about users of these data libraries.

So, I turned to YouTube. Don’t worry about watching the whole video.

As this video shows, users such as the government utilize geospatial data libraries quite frequently with a whole plethora of new uses. This form of uncertainty can really hinder efforts to modernize government and provide new services (impacting users like you and me).

Take an example from the film: when a dispatcher uses “On-Demand Pick-up” to dispatch a driver to someone who needs a ride, they had better be sure their computer is picking up the same neighborhood the caller is requesting from. If not, they could be sending a driver from too far away to pick the person up. But how does this dispatcher get the caller to define a concrete place rather than an abstract, vernacular place name? It may seem just a simple question of language and communication skills. Perhaps it is.

But take another example from the film, where city administrators are able to provide real-time information to bus riders on the location of buses. How do they know what scale to provide this information to bus riders at? What if the user requires two bus lines to get where they are going? What happens if this data isn’t provided at a large enough scale to understand the placement of buses? A moot point, perhaps, as the bus will come when it comes. But certainly an important question for the people applying these GIS systems which rely on data libraries about the geographic areas where they operate. This becomes even more important when you think about technology such as LIDAR that operates at even larger scales and the methods used to define such scales of operation.

Scale, Uncertainty, and Spatial Data Libraries

Wednesday, March 14th, 2012

In the paper published by Goodchild et al. in 1998, the authors presented a definition of spatial data libraries and demonstrated how users access information by specifying multidimensional keys. The footprint was studied in detail, and the authors also demonstrated how to model fuzzy regions in spatial data libraries. The corresponding implementations were discussed, as was visualization. Finally, the goodness of fit was delineated.

I find that fuzzy modeling is directly related to previous topics in our class: scale and uncertainty analysis. Most of the geospatial information in spatial data libraries is modeled with probability, which carries uncertainty. But the magnitude of the uncertainty is largely (though not completely) determined by scale, including the query scale, the segmentation scale, the data analysis scale, the visualization scale, and others. Therefore, fuzzy modeling may change with respect to different scales and levels of uncertainty.

For example, if we request spatial information about “south China” in Harvard University’s CHGIS digital GIS library, the uncertainty in the footprint “south China” will cause unexpected results. Since there is no standard interpretation of “south China”, the places that different users choose to represent it may differ from each other to a large extent. Moreover, since the scale of “south China” is not clearly specified, one may choose a city, a province, or even several provinces to represent it. We can see that both scale and uncertainty play a pivotal role in spatial data library queries, and both should be taken into consideration in the design of spatial data libraries.
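One common way to model such a vague footprint is with a fuzzy set: each place gets a degree of membership in “south China”, and a query returns an alpha-cut at whatever threshold the user (or library) chooses. The membership grades below are invented purely for illustration; a real gazetteer would derive them from survey or usage data.

```python
# Toy fuzzy footprint for the vague region "south China".
# Membership grades (0..1) are hypothetical, for illustration only.

south_china = {
    "Guangdong": 1.0,
    "Guangxi":   0.9,
    "Hainan":    0.9,
    "Fujian":    0.7,
    "Hunan":     0.4,
    "Sichuan":   0.2,
}

def query_footprint(footprint, alpha):
    """Return the alpha-cut: places with membership >= alpha."""
    return sorted(p for p, m in footprint.items() if m >= alpha)

# A strict interpretation versus a lenient one
print(query_footprint(south_china, 0.9))  # core of the region
print(query_footprint(south_china, 0.3))  # broader interpretation
```

The same query returns different regions at different thresholds, which is exactly the point: the footprint itself, not just the data inside it, carries the uncertainty.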


Uncertainty and confidence: walking a fine line

Friday, March 9th, 2012

Foody states that important end-users tend to underestimate uncertainty. Their perceptions of uncertainty, in turn, could prompt greater problems. Yet, according to the article, “uncertainty is ubiquitous and is inherent in geographical data” (115). Thus, the blame game is hard to play when uncertainty can be argued to be ‘inherent’. The topic becomes more convoluted when geographic information (GI) systems are assessed: “…the ease of use of many GI systems enables users with little knowledge or appreciation of uncertainty to derive polished, but flawed, outputs” (116). It is therefore even more important to think twice about generated outputs. Foody mentions data mining methods that do not consider the “uncertainty that is inherent in spatial data” (111). We have a tendency to find patterns in data. Therefore, we could create the wrong patterns yet treat the outcomes as accurate, or next to flawless, because of an overconfidence produced by the systems and databases we create. In addition, these systems have slippery slopes with potentially irreversible outcomes. It reminds me of the infamous case of the northern cod fishery collapse. There, the problems in the science of the northern cod assessment entailed overestimation of biomass and underestimation of fishing mortality (by 50 percent!). I believe the overestimation and/or underestimation of variables is important to the way uncertainty is perceived. The awareness of uncertainty matters, as noted in the article: the higher the level of awareness, the better.

ClimateNYC mentions social errors in the ‘visualizing uncertainty’ post, which made me think of representation and scale issues – in particular, the decisions Google Maps makes with regard to representing marginalized communities. The Ambedkar Nagar slum in Mumbai, India is labelled in Google Maps as ‘Settlement’. However, ‘Settlement’ disappears at 200m altitude but is visible at 500m. Adding to the ambiguity, at 50m altitude the Organization for Social Change is mapped, but this point is not visible at any other scale. Who makes these decisions when labelling these areas? Is it political? Or do they simply not care?

-henry miller

Uncertainty: management and confidence

Friday, March 9th, 2012

Hunter and Goodchild’s article presented critical questions regarding spatial databases and uncertainty. One pressing question stood out: “How can the results be used in practice?” In the spatial databases produced, keeping track of errors is vital because it encourages transparency and initiates a chain reaction. This management feature can lead to an increase in the quality of products. Thus, a better product is created through the improved value judgements that follow, which in turn influences (and hopefully diminishes) its level of uncertainty. However, we must not get ahead of ourselves: “…while users at first exhibit very little knowledge of uncertainty this is followed by a considerable rise once they learn to question their products…” (59). Even after this spike occurs, knowledge tends to decrease over time once a threshold of familiarity and experience is reached. So users must remember to stay on top of their game and remain critical and aware of the products they make use of.

This is all relative to the type of decisions being made. Because the nature of decisions spans such vast ranges and opposite ends of the spectrum of implications – from non-political to political, minimal risk to high risk – uncertainty is more apparent than ever. Heterogeneity, from data to decision implications, will always exist: “…regardless of the amount by which uncertainty is reduced the task can never be 100 percent successful simply because no model will ever perfectly reflect the real world” (59). Therefore, we should settle for what seems to be an unsettling thought. This makes me think of sah’s post; I also believe that we can find an element of comfort in uncertainty.

Hopefully, “educating users to understand and apply error visualization tools” will lead to improved decision-making pertaining to suitable data quality components and visualization techniques for products (61). Thus, a “lack of confidence in the system” (58) can be diminished; better yet, confidence may be gained. The vexing characteristics of uncertainty need to be addressed in order for a clearer segue, and a stronger link, between theory and practice to exist.

-henry miller

Rethink Uncertainty in GIS research

Friday, March 9th, 2012

The first step in uncertainty management is the identification, or quantitative measurement, of uncertainty. In the paper published by Foody in 2003, uncertainty is generally categorized into two types: ambiguity and vagueness. The former is the problem of making a choice, while the latter is the challenge of making a distinction. The author then discusses contemporary research on uncertainty and analyzes the failure to recognize uncertainty. But I think the most important challenge in this field is how to quantitatively measure uncertainty – with probability theory and statistical tools, to mention a few. Later, the author mentions data mining with uncertainty analysis, and we still need a systematic approach to measuring uncertainty in GIS data mining.

If we consider the process of geospatial data collection, data storage, data analysis, and visualization, uncertainty analysis can be applied to each step. Sometimes, transferring a large number of files across the Internet can introduce significant errors, due to uncertainty in the network and storage. Therefore, this kind of uncertainty in uncertainty should also be taken into account, especially when high reliability is required. Of course, how to analyze uncertainty in uncertainty is another great challenge.
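For the transfer step specifically, one standard defence is to verify a checksum of each file after it arrives, so corruption in transit is at least detected rather than silently absorbed into the analysis. The payload bytes below are hypothetical.

```python
# Detecting transfer corruption with a cryptographic checksum.
# The "tile payload" bytes are invented for illustration.

import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the SHA-256 hash of a byte string."""
    return hashlib.sha256(data).hexdigest()

original = b"geospatial tile payload"
received = b"geospatial tile paylaod"  # two bytes swapped in transit

if sha256_of(original) != sha256_of(received):
    print("corruption detected: request retransmission")
else:
    print("transfer verified")
```

This only detects errors, of course; handling them (retransmission, redundancy) is a separate question, and it says nothing about uncertainty already present in the data itself.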

Uncertainty can change with respect to different study areas or different research projects. For example, tiny vagueness in the classification process may cause unpredicted problems, while its impact on visualization can be well handled. In any case, uncertainty should be handled very carefully, as ClimateNYC mentioned in his blog posts.


Transparency is convincing

Thursday, March 8th, 2012

Climate NYC’s post raises some good points. The skeptics who question climate change are probably in the minority in today’s trendy world (although perhaps that is just my opinion, as a geography and environment major). Although their opinions are for the most part unhelpful exaggerations, the use and necessity of the skeptic is undeniable. Without the skeptics, politicians and others would be able to convince the public of anything. Al Gore exaggerates in the film An Inconvenient Truth and it hurts his reputation; skeptics call him out on this and bring his case back down to earth.

The initial predictions that the IPCC reported were not very accurate. Their climate models did not do a successful job of predicting the impacts of, and rise in, CO2; in fact, the IPCC’s climate models underestimated these variables. That said, the public does not want vague estimates. The public wants precise numbers. The IPCC therefore feels compelled to provide the public with quantitative information that may potentially break bad habits.

More recently, as shown by Climate NYC, the IPCC offers multiple climate change scenarios. Not only do they provide the graphs shown below, they also show future estimates of temperature rise under different levels of CO2 reduction. These methods of visualizing error and uncertainty have a positive impact on the public. The transparency of these reports counteracts the skeptics’ critiques and benefits the research of the IPCC more than if they chose to exaggerate as a scare tactic.

Although perhaps not necessary in all academic reports, it would be interesting to see more visualization of error and uncertainty. Granted, some error and uncertainty cannot be quantified and must instead be listed qualitatively, but it gives me the impression of a more honest report.



Climate Models, Uncertainty and Unmade Policy Decisions

Thursday, March 8th, 2012

I think we’d be remiss to cover the topic of uncertainty without thinking about the role it plays when scientific research or other forms of data are transmitted from the academic or research realms into the public world of policy debates. As Giles M. Foody notes, problems with uncertainty “can lead to a resistance against calls for changes because of the uncertainties involved” (114). I think he’s right, but this is a vast understatement.

In the climate change debate now swirling through most of the globe, uncertainty could be described as one of the main factors propelling so-called climate skeptics, naysayers and those generally unwilling to acknowledge that human energy consumption might be influencing the global climate. Just take a look at this skeptic’s blog post. He names uncertainty in climate science as the number one reason not to move too fast on this global, vexing issue. In fact, much of the opportunity for debate on the issue stems from varying points of view on just how certain the hundreds and thousands of climate models out there might be in predicting a warming world. In fact, just try Googling “climate” and “uncertainty” and you’ll find an avalanche of information – some more scientific than others.

Foody does a nice job of summarizing this paradigm when he writes about how “end-users and decision-makers often fail to appreciate uncertainty fully” (115). I couldn’t agree more. What most climate scientists will tell you is that while their models contain a great deal of uncertainty – which varies depending on what type of model you’re discussing or how it’s been parameterized – the overall trends are pretty clear. Most of the work done in this field concludes that a relationship does, in fact, exist between CO2 emissions and a warming global climate. Yet the importance of uncertainty here lies not within the scientific community but with publicly debated policy decisions, where uncertainty/error can conveniently become a political football. I mean, just look at some of the variation in predictions from climate models in the IPCC’s 2001 report:

Figure 1. A selection of climate models and their predictions of globally averaged surface air temperature change in response to emissions scenario A2 of the IPCC Special Report on Emission Scenarios. CO2 roughly doubles its present concentration by the year 2100. Figure reproduced from Cubasch et al. (2001).

Yes, there’s some definite variation between models, a degree of uncertainty. But how does this compare with the idea we discussed in class about scale? Can we ever expect to have complete accordance and certainty among climate models when the issue operates on such a vast, global scale? Should we expect it on smaller, regional scales with something as complex as the atmosphere’s inputs and outputs and the sun’s radiation?


The hunt for knowledge and bear Foody

Thursday, March 8th, 2012

Foody mentions uncertainty alongside knowledge and discovery. I like how, although seemingly opposite, these elements work together. Before science and empirical reasoning became the norm (I’m talking wayyy back), uncertainty was often explained through legends, ghosts, or religious anecdotes. These were ways to calm people. Sea monsters used to live far out to sea, and if you sailed too far you would fall off the earth. Without the resources available to us today, these explanations were all that was available. There were, however, some people curious enough to explore the uncertain and unknown.

Today, although uncertainty still causes anxiety amongst many people (like when the next zombie attack will occur and where the best place to hide is), it seems that it now fuels the hunt for knowledge and discovery. In academia, the focus is on the acquisition of good, sound information. We as students often take for granted that the information given to us is without error. It has recently been revealed to me that some of the information provided to students can be outright false.

I was once told that if I ran into a grizzly bear, that I should not climb a tree—instead I should run down a steep slope, because bears can’t run down hills. I was amazed by this fact and told many people about this escape technique; unfortunately, I was later informed by an expert that this is not the case. Unknowingly, I had spread what I thought to be valid information when in fact the escape technique was false. I hope that my gossip will not cause anyone harm in the future!

The uncertainty touched on in both articles (and bear escape techniques) does not refer to this type of uncertainty/curiosity, but I cannot help but see how related these words are. When looking at uncertainty in a study, I am able to see how awareness of error and uncertainty could inspire further research. I think that it is human nature to be curious and to strive for perfection; the knowledge of uncertainties and error push us to do better research in the future.



Visualizing Uncertainty for the Layman

Thursday, March 8th, 2012

I want to respond to the ideas put forth by “SAH” and extrapolated on by “Madskiier_JWong” in relation to visualizing error. I agree with “SAH” that most people don’t think about the error in their geospatial data (although they certainly acknowledge its existence), nor even ask about it. I also think “Madskiier” is on point when he talks about built-in “structural guidelines” for programs to show people how to build in error in a visual manner. In fact, I think this point is something the authors Gary J. Hunter and Michael F. Goodchild talk a bit about in relation to Y. Bedard’s 1987 work establishing how we might strengthen geodetic control networks and define and standardize technical procedures – all in an effort to cut down error. On the other side, I agree more with Giles M. Foody’s perspective on uncertainty for the layman. He does a good job of covering this topic when he writes that “it is, therefore, most unfortunate that the ease of use of many GI systems enable users with little knowledge or appreciation of uncertainty to derive polished, but flawed, outputs” (116).

But I want to take this discussion a step further and, perhaps, out of the cyber-realm of the practitioner or researcher to think about what error, or uncertainty, might actually mean for the layman who uses the end product of these practices. Hunter and Goodchild talk a bit about this in their article under the heading of “Progress in Error Visualization.” In particular, they talk about adding a layer to a map to convey uncertainty, having cells display multiple classes, or even varying data displays to convey a level of uncertainty (55-56). Yet these practices seem far from common, at least to me, in most of the products that average people rely on during their daily lives. The authors note that their colleagues “would like to take this testing further and establish laboratories in which practitioners would be invited to use spatial databases for real-life problems while under observation” (57). Well, not to be rude, but it’s about time.

I realize I may harp on my GPS a bit too much as part of our class (hey, it’s my only form of LBS), but what happens when the data programmed into it contains errors such as changed street names, landmarks or objects at different grid points, or restaurants that no longer exist? I’ll tell you what happens – I get lost or I go hungry. I realize most people accept and know about the uncertainty inherent in their GPS’s navigation – and usually compensate for it with street wisdom. Furthermore, these days most people can simply plug their GPS into a computer program that updates its geographic data (and some of these programs even have ways for users to help correct errors). But, in attempting to move us away from the technical side of Hunter and Goodchild, I can’t say I’ve ever seen cells with multiple designations or overlaid error maps on my GPS. Heck, in talking to friends, I was trying to figure out if I’d seen any programs out for common consumption that do this. I mean, wouldn’t it be novel to have my GPS tell me that the restaurant might be at that location, give or take a 10% chance of error?
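Just to sketch how such a feature might work, here is a toy model of a navigation device attaching a confidence value to a point of interest based on how long ago the record was verified. Everything here – the POI name, the dates, the half-life, the wording thresholds – is hypothetical, purely to illustrate the idea of surfacing uncertainty to the layman.

```python
# Hypothetical sketch: a GPS device flags possibly stale points of
# interest instead of presenting them as certain. All values invented.

import datetime

def poi_confidence(last_verified, today, half_life_days=365.0):
    """Confidence in a record decays with age (exponential half-life)."""
    age_days = (today - last_verified).days
    return 0.5 ** (age_days / half_life_days)

def describe(name, confidence):
    """Turn a raw confidence number into a user-facing message."""
    if confidence > 0.8:
        return f"{name}: likely current"
    if confidence > 0.5:
        return f"{name}: verify before relying on it"
    return f"{name}: data may be out of date"

today = datetime.date(2012, 3, 8)
conf = poi_confidence(datetime.date(2010, 3, 8), today)
print(describe("Joe's Diner", conf))
```

A record two years old under these assumptions drops below half confidence, so the device would warn rather than promise a meal.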

Not to belabor the point, but I thought one more example might help. In the older days before GPS, I drove to Boston using Mapquest directions (yes, Mapquest, not the now-ubiquitous Google Maps). Unfortunately for me, back then Boston always seemed to be under some kind of street/highway renovation/construction program. Never, never did my directions get me to where I was going. I almost always got lost and, being a novice driver, very lost. Wouldn’t it have been nice to have been informed about this possible uncertainty in my directions before I left my computer behind at home? Well, my contention is that most of these companies don’t want to admit fault. We instead have to rely on outside Web sites to point out physical, digital and (if you see their post on Brazil) socially incorrect errors.

I ask you all: Do you know of any programs you use on your computer, phone, GPS, etc. that shows uncertainty/error? How do you think these kinds of companies might be able to work such a feature in? Is it feasible?


Foody and Uncertainty

Thursday, March 8th, 2012

Foody’s article offers a fairly general overview of uncertainty and its impact on GIS analyses. I did not find this an enjoyable read as it seemed to touch on various implications of uncertainty on decision-making, but did not go in-depth on ways to handle uncertainty in a GI system. The chief insight brought out by Foody concerned the paradigmatic shift from absolute accuracy of data to its fitness or appropriateness to the research question.

Sah mentions the importance of making uncertainty explicit at each step. To me, fuzzy sets and fuzzy logic (things can partially belong to different categories, or have a percentage likelihood of being a certain class) seem like one of the most intuitive tools for computer users to represent spatial uncertainty. “Stratified partitions” have also been used in other cases to track this uncertainty through different scales (1). Additionally, fuzzy sets are most valuable in those transitional zones where uncertainty is highest (and which analyses tend to be most interested in for emerging phenomena). Despite these kinds of parametric solutions, however, there remain ontological issues of deciding what categories are included in the fuzzy set. Given the increasing amount of data available through data mining, uncertainty needs to be handled in a way that is more robust and interoperable than fuzzy sets with predefined classes. Finally, to naïve users who aren’t familiar with the bits, bytes, and algorithms behind the scenes that drive classification and visualization in a GIS, is fuzzy logic an intuitive way of representing uncertainty?
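To show what partial membership looks like in practice, here is a minimal sketch of soft classification: each cell carries a degree of membership in every class, and cells whose top two memberships are close are flagged as transitional. The cell names, classes, and membership values are invented for illustration.

```python
# Fuzzy (soft) classification sketch: each cell holds a degree of
# membership in every class rather than one hard label. Values invented.

cells = {
    "cell_A": {"forest": 0.85, "grassland": 0.10, "water": 0.05},
    "cell_B": {"forest": 0.45, "grassland": 0.40, "water": 0.15},
}

def hard_label(memberships):
    """Defuzzify by taking the class with the highest membership."""
    return max(memberships, key=memberships.get)

def is_transitional(memberships, margin=0.2):
    """Flag cells whose top two classes are close: uncertainty is highest here."""
    top_two = sorted(memberships.values(), reverse=True)[:2]
    return (top_two[0] - top_two[1]) < margin

for name, m in cells.items():
    status = "transitional" if is_transitional(m) else "clear"
    print(name, hard_label(m), status)
```

Both cells defuzzify to “forest”, but only the second is flagged as transitional – which is exactly the information a hard classification throws away.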

–          Madskiier_JWong

1. Beaubouef, T. and Petry, F. (2010). “Fuzzy and Rough Set Approaches for Uncertainty in Spatial Data”.

Reviewing Geospatial Database Uncertainty with New Technologies

Wednesday, March 7th, 2012

The paper published by Hunter et al. in 1991 categorized the challenges of uncertainty in Geographic Information Science (GIS) into three types: definition, communication, and management. The authors began with a presentation of error visualization and discussed uncertainty in visualization. They then pointed out approaches to error management in geospatial databases, as well as future research directions.

The authors also mentioned that it is very helpful to look at system logs to determine uncertainty. That may have been a good solution at the time the paper was published, but nowadays system log analysis has become a great challenge for pattern recognition research due to its high dimensionality. So the new question becomes: what is the tolerance for uncertainty in system log analysis?

Uncertainty reduction and absorption were proposed as two solutions for error management, and the authors used several good examples to demonstrate them. But with new challenges in GIS research (e.g., data intensity), these two solutions should change accordingly.

In their paper, the authors also mentioned that the main cause of poor utilization was a lack of confidence in the system, owing to the fact that users could not obtain enough information about the quality of the databases and their unacceptable errors. This may have been true in the past, but nowadays Bayesian network techniques are used to handle this problem, as Laskey et al. present in (Laskey, K.B., Wright, E.J. & da Costa, P.C.G., 2010. Envisioning uncertainty in geospatial information. International Journal of Approximate Reasoning, 51(2), pp. 209–223.)
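The core idea behind that Bayesian approach can be sketched very simply: treat “this record is good” as a hypothesis and update the belief with each independent quality check via Bayes’ rule. The prior and the check reliabilities below are invented for illustration, not taken from Laskey et al.

```python
# Minimal Bayes-rule sketch: updating confidence in a database record
# from independent quality checks. Priors and likelihoods are invented.

def bayes_update(prior, p_pass_given_good, p_pass_given_bad, passed):
    """One update of P(record is good) after observing a quality check."""
    if passed:
        num = prior * p_pass_given_good
        den = num + (1 - prior) * p_pass_given_bad
    else:
        num = prior * (1 - p_pass_given_good)
        den = num + (1 - prior) * (1 - p_pass_given_bad)
    return num / den

belief = 0.5  # no prior information about the record
for check_passed in [True, True, False]:
    belief = bayes_update(belief, 0.9, 0.3, check_passed)
print(f"belief after three checks: {belief:.4f}")
```

Two passes raise the belief substantially and one failure pulls it back down – a tiny stand-in for the kind of principled confidence bookkeeping a full Bayesian network does over many interdependent quality variables.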


Uncertainty and Decision-Making

Wednesday, March 7th, 2012

I really liked the article by Hunter and Goodchild (1995) because it thoroughly addressed the different aspects of uncertainty in GIS. Further, the various diagrams were effective in conveying and summarizing the authors’ main points. I especially appreciated the discussion concerning the variation in the impact of spatial databases upon decision-making. The authors argue that in order to assess how much uncertainty will affect the decision-making process, one must consider the type of decisions to be made. Hunter and Goodchild conclude that uncertainty should be most critically assessed when decisions “carry political, high-risk, controversial or global implications” (58). Although this seems obvious, it is important to remember that the consequences of an inability to handle uncertainty are often compounded by persuasive discourse in these types of decisions. Because there are usually high stakes attached to such decisions, politicians or advocates are tempted to play up any uncertainty in the product in a way that will support their position. Foody (2003) also recognizes this danger and warns that “emotionally colored language is used in relation to issues associated with a large degree of risk” (114). Furthermore, I would be interested to know more about how, and to what degree, uncertainty impacts decisions. Is uncertainty most influential when presented early in the decision-making process or near the end of it? Finally, does the depiction of uncertainty actually lead actors to make better decisions? Under what circumstances does it lead to worse ones?

There are a few things in the article that I still need clarified. For example, Figure 4 puzzles me – why does knowledge about uncertainty decrease when a person transitions from the Learning Phase into the Knowledgeable Phase? Also, with respect to uncertainty absorption, the authors claim data producers are “preparing data quality statements from which users may make their own evaluation of the suitability of data for their purpose”. Is this just metadata on how the data was collected? Does this force the decision of uncertainty absorption onto the users? The way in which producers and vendors can absorb uncertainty is still vague to me.



More to Uncertainty

Tuesday, March 6th, 2012

A point I raised in my previous post on uncertainty concerned my interest in Foody’s point on zoning. In his article entitled “Uncertainty, knowledge discovery and data mining in GIS”, he discusses how uncertainty is inherent in geographical data and, furthermore, suggests it is compounded by the way we use our data. He says, “[c]hanges in administrative boundaries, for example, significantly handicap population studies, with incompatible zoning systems used over time”. I said in my last post that uncertainty appears to be another challenge that may be mediated by making assumptions explicit. I would like to examine this point further.

In saying that explicitness can mediate this problem, I don't assume that merely laying out steps can "fix" error and uncertainty; rather, understanding, accepting, and, most importantly, working with these issues at each step is what matters.  Like scale, uncertainty is something that (while not "applied", as scale is) occurs at almost every stage of working with data, from observation through analysis.  This can be problematic because "layers" of uncertainty can compound across stages, resulting in even greater uncertainty by the end of the project.  So I believe that by recognizing uncertainty at each stage, it becomes much easier to work with.  And as Foody notes, GIS can be thought of as "a means of communicating information", which can extend to a means of communicating error.  However, I also believe that by understanding and communicating uncertainty and error, we can formulate better and deeper questions, ones that factor the error and uncertainty in the data into the question itself, much as zoning effects related to scale can be used to illuminate what lies beyond the surface of the data.
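The compounding of "layers" of uncertainty across stages can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch of my own, not from either paper: it assumes each stage of a workflow contributes an independent Gaussian positional error, and the stage names and sigma values are made up for illustration.

```python
import math

# Hypothetical example: if digitizing, projection, and overlay each add
# an independent positional error with its own standard deviation, the
# combined uncertainty grows as the root-sum-of-squares of the stages.
stage_sigmas = [2.0, 1.5, 3.0]  # metres of error per stage (made-up values)

combined = math.sqrt(sum(s ** 2 for s in stage_sigmas))
print(round(combined, 2))  # larger than any single stage's error
```

The point is simply that the total is bigger than any one stage's contribution, which is why recognizing uncertainty at each stage, rather than only at the end, matters.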


Becoming Comfortable with Uncertainty

Monday, March 5th, 2012

In Hunter and Goodchild's essay entitled "Managing Uncertainty in Spatial Databases", one statement in particular regarding uncertainty and error really hit home: the idea that people don't understand the error in their data, and, I would add, don't ask about it either.  This recalled for me something Ana presented in her talk about LBS: that people don't always understand or appreciate the data and technologies they work with today, and that this can even be true of the experts who know the complexities of the data and technology that generate the information we work with, and potentially take for granted.

So for me, amidst all their discussion of identifying, working with, and explaining/understanding error in data, their goal for future research stood out.  “Future error research cannot stay confined to the academic sector and should be conducted jointly with the user community to reflect the need for solving management aspects of the issue”.  The emphasis they place on being able to manage error is also incredibly important, and should also be at the forefront of this integration of research into the user domain, particularly as the user domain does not necessarily mean experts working in the field, but can be any layperson with access to the Internet.

This argument is made particularly well in their semi-ironic Figure 1, which shows that experts generally know how to deal with data and what to ask, but don't need to ask, whereas laypersons do not know how to deal with data or what to ask, but unfortunately are the ones in need of those answers.  Reading Foody's take on uncertainty further highlighted why users' lack of understanding can be harmful beyond producing questionable results: people may choose not to act at all on information they are uncertain about.  As Foody mentions with the global methane sink, without an understanding of how something works, uncertainty pervades models and results, and no action gets taken.  This seems an especially important point when arguing for increased understanding not only of what uncertainties are present, but also of what we can do with uncertainty in data.

This particularly reminds me of another class I am taking, in which we are discussing uncertainty in the media with regards to climate change.  Media sources depict scientific knowledge and models as fraught with uncertainty, and because the public, policy makers, and other "end-users" don't understand how scientists work with uncertainty in data and models, they are likely to be unreceptive to results and recommendations.

Of particular interest to me, Foody also discusses the uncertainty inherent in geographical data, and how zoning and the placement of administrative boundaries can influence analyses of population data.  Uncertainty appears to be yet another problem where assumptions must be made explicit.


GIScience and uncertainty

Monday, January 23rd, 2012

The article was thought-provoking, addressing numerous accomplishments, research agendas and challenges. I appreciated the author's self-awareness and frank statements when addressing his own limitations. At times it was overwhelming, as it covered a great many points spanning 20 years of theoretical and empirical accounts of GIScience.

Of the challenges mentioned, I found the notion of uncertainty intriguing: a concept that is highly influential yet largely ignored. Goodchild's conceptual framework for GIScience elucidates how the human, society and the computer are interlinked by many variables (e.g. spatial cognition, public participation GIS, data modelling). "Uncertainty" dominates the middle of the triangle, yet only 3 of the 19 papers analyzed by Fisher (the most cited, and considered classics over the last 20 years) "report work on uncertainty" (9).

The article notes Tobler's Law and its implication that "relative errors over short distances will almost always be less than absolute errors" (12). According to Goodchild, this has significant implications for the modeling of uncertainty: because nearby measurements tend to share the same sources of error, we can be more confident in the relationships between nearby features than in their absolute values. Goodchild further notes the transition in our thinking about GIScience from "the accurate processing of maps to the entire process of representing and characterizing the geographic world" (11). The emphasis has shifted away from accuracy at a micro geographic scale, tied to maps, towards characterization and representation at a macro, global scale. Moving from a micro to a macro scale entails more uncertainty; since the aim is to increase accuracy, these two tendencies pull in opposite directions.
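Goodchild's point about relative versus absolute errors can be illustrated with a small simulation. This is my own sketch, not from the paper: it assumes two nearby readings share a large common error component (say, a datum shift affecting a whole map sheet) plus small independent errors, so the shared component cancels when we take their difference.

```python
import random
import statistics

random.seed(42)

# Hypothetical simulation: two nearby readings share a large common
# error plus small independent errors. The error in their *difference*
# (the relative error) cancels the shared part, so it is much smaller
# than the error in either reading alone (the absolute error).
abs_errors, rel_errors = [], []
for _ in range(10_000):
    shared = random.gauss(0, 5.0)        # error common to both points
    e_a = shared + random.gauss(0, 1.0)  # point A's total error
    e_b = shared + random.gauss(0, 1.0)  # point B's total error
    abs_errors.append(e_a)
    rel_errors.append(e_a - e_b)         # error in the A-B difference

print(statistics.stdev(abs_errors) > statistics.stdev(rel_errors))  # True
```

The spread of the absolute errors is dominated by the shared component, while the spread of the relative errors reflects only the small independent parts, which is exactly the asymmetry Tobler's Law predicts.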

Despite uncertainty being seen as an obstacle to progress in GIScience, Goodchild also notes it as a salient factor in "potential undiscoveries" (6). Government adoption and application of GIScience, further work on the third, fourth and fifth dimensions, and the role of the citizen through neo-geography and VGI are all very exciting and revolutionary.

Goodchild, M. F. (2010). Twenty years of progress: GIScience in 2010. Journal of Spatial Information Science, (1), 3-20.