Archive for April, 2013

Remote sensing uncertainty in GIS

Friday, April 5th, 2013

The article by G. G. Wilkinson is dated, which matters in a field that is evolving rapidly. Nonetheless, in my view, the author’s argument is still valid today. He discusses uncertainty and data structures in remote sensing and GIS. Sophisticated technologies and remote sensing do not automatically solve the problem of delimiting boundaries. Even with technological development, classification remains a complex task. It is like trying to draw boundaries where the world is actually more like a continuous landscape. We try to define distinct classes of land cover or topographic zones, for example, but in reality is there really a frontier between different types of land? This partly explains why uncertainty is attached to any technique in remote sensing. Taking the limits of remote sensing techniques into account, the author evaluates different procedures and uses of data structures. He suggests that part of the way forward is identifying the techniques and technological developments that best represent the phenomenon the remote sensing data are intended to capture. Still, the problems of error and uncertainty are unlikely to be solved easily, even with technical development in data structures or with visualization techniques such as 3D environments and virtual reality.
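
To make the “continuous landscape versus crisp classes” point concrete, here is a minimal sketch (my own toy example, not from Wilkinson’s paper; the NDVI transect, class names, and thresholds are assumptions) of how a fuzzy membership keeps the gradual transition that a hard classifier throws away:

```python
import numpy as np

# Hypothetical NDVI values along a transect from bare soil into forest.
ndvi = np.linspace(0.05, 0.85, 9)

# Hard classification: a single crisp threshold imposes a boundary.
hard_class = np.where(ndvi < 0.45, "bare", "vegetated")

# Fuzzy classification: membership ramps linearly between assumed anchors,
# so pixels near the "boundary" keep their ambiguity instead of hiding it.
veg_membership = np.clip((ndvi - 0.2) / (0.7 - 0.2), 0.0, 1.0)

for v, h, m in zip(ndvi, hard_class, veg_membership):
    print(f"NDVI={v:.2f}  hard={h:9s}  fuzzy vegetated membership={m:.2f}")
```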

S_Ram

Certainty of Uncertainty!

Thursday, April 4th, 2013

Helen Couclelis wrote an article called “The Certainty of Uncertainty,” and I think that David J. Unwin is making a similar point. The problem of uncertainty is not merely technical. Uncertainty does not only come from data and information; it is also about geographical knowledge, which is sometimes inevitably uncertain. There are things that we simply cannot know. The literature focuses on finding technical solutions, but the author explains that “at the heart of all the contributions is a concern for exactly how we can usefully represent our geographic knowledge in the primitive world of the digital computer”.

As mentioned in a previous discussion about ontology, we conceptualize the world as field-based or object-based, which corresponds to raster or vector in GIS. The author shows that both representations come with specific uncertainties. Furthermore, we discussed how delimiting boundaries is often a difficult task and uncertainty is inevitable. The conclusion brings us back to the first class discussion about GIS as a tool or as a science, and to the determinism of the technology. The author suggests that rethinking the way we use the technology and the way we structure problems and databases is essential to achieving sensitivity in GIS. It is about adapting the technology to represent knowledge in a way that takes our conceptualization of the world into consideration, rather than merely relying on GIS technology to calculate the world for us.

Couclelis, H. (2003). The Certainty of Uncertainty: GIS and the Limits of Geographic Knowledge. Transactions in GIS, 7(2), 165-175.

S_Ram

Uncertainty

Thursday, April 4th, 2013

Uncertainty lies at the core of GISci, and MacEachren et al. acknowledge that the GISci community has given more attention to formalizing approaches to uncertainty than other communities, such as the information visualization community (p. 144). The authors go through several examples of how uncertainty can be visualized, from changes in hue to symbols with varying transparency that depict where uncertain data may exist. What piqued my interest were the interactive visualization techniques that let users control depictions of uncertainty. Instead of permanently adding a layer of complexity that can obstruct and confuse readers and distract from what the data is trying to depict, the user is in full control of how much or how little information about uncertainty is available to them. To me this seems like a better solution than simply finding a single “ideal” way to represent uncertainty visually in a static manner – especially since every individual will have their own preferences about what “best” means (context matters!).

What I don’t quite agree with is the authors’ assertion that humans are not adept at using statistical information to make decisions and instead rely on heuristics (based on a study from 1974). Since the quantitative revolution, hasn’t statistics been brought to the forefront of geography, to the point that we may rely on it too much? That being said, visualizing uncertainty can take many forms, from charts to changes in opacity to 3D graphics, and the way in which uncertainty should be viewed will ultimately be context specific, shaped by the goals of the researcher.
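
As a minimal sketch of the “user controls how much uncertainty is shown” idea (my own toy example, not a figure from the paper; the random data and uncertainty surfaces are placeholders), an uncertainty layer can be drawn over the data with its transparency driven by a slider:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 50)).cumsum(axis=0)    # stand-in "data" surface
uncertainty = rng.uniform(0, 1, size=(50, 50))     # stand-in uncertainty surface

fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.2)
ax.imshow(data, cmap="viridis")
overlay = ax.imshow(uncertainty, cmap="Greys", alpha=0.0)  # hidden at first

# The slider lets the reader dial the uncertainty overlay in or out.
slider_ax = fig.add_axes([0.2, 0.05, 0.6, 0.03])
slider = Slider(slider_ax, "uncertainty opacity", 0.0, 1.0, valinit=0.0)

def update(val):
    overlay.set_alpha(slider.val)
    fig.canvas.draw_idle()

slider.on_changed(update)
plt.show()
```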

-tranv

Integrating RS and GIS

Thursday, April 4th, 2013

Brivio et al. provide a case study in which the integration of GIS and RS compensates for limitations that exist in each technology. The study is a good example of how these two closely related fields can combine to produce a more realistic representation of various phenomena. While this case study specifically used additional GIS data as a supplementary component to improve the RS classification of flooded areas, RS data can similarly be used as a tool to produce GIS data (e.g., a land cover classification dataset derived from remote sensing imagery). However, while there are many advantages to integrating the two, several issues come to mind. RS data is pixel based, while spatial data can be vector or raster based. Having to convert one to the other in order to do analysis can compound issues of accuracy and uncertainty. We know RS is already well acquainted with its own issues related to scale, noise, and technological limitations, but these issues can quickly get amplified, and I can imagine that recognizing these sources of uncertainty will be difficult once the datasets are thoroughly entangled in one another. Also, what kind of data models are required for this integration? Spatial data is generally represented in 2D, while RS hyperspectral cubes have several dimensions. Researchers interested in integrating such technologies have to be well versed in the inherent issues that each type of data presents in order to provide a comprehensive analysis – definitely no small feat.
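
As one concrete example of the raster-to-vector hand-off worried about above (a sketch under my own assumptions, not Brivio et al.’s workflow; the file name and class codes are invented), a classified RS raster can be polygonized so it overlays with vector GIS layers — note how the vector “boundaries” are frozen to pixel edges, which is exactly where accuracy questions creep in:

```python
import rasterio
from rasterio import features
from shapely.geometry import shape
import geopandas as gpd

# Hypothetical classified raster (e.g., 1 = flooded, 0 = dry) produced from RS data.
with rasterio.open("classified.tif") as src:   # assumed input file
    classified = src.read(1)
    transform = src.transform
    crs = src.crs

# Polygonize: every contiguous run of identical pixel values becomes a polygon,
# so the resulting boundaries inherit the pixel grid and its resolution limits.
records = [
    {"geometry": shape(geom), "class_value": int(value)}
    for geom, value in features.shapes(classified.astype("int32"), transform=transform)
    if value == 1   # keep only the flooded class
]

flooded = gpd.GeoDataFrame(records, geometry="geometry", crs=crs)
flooded.to_file("flooded_areas.shp")   # now usable as an ordinary GIS layer
```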

-tranv

Visualizing uncertainty

Thursday, April 4th, 2013

MacEachren et al.’s article provides a thorough overview of the current status of uncertainty visualization, along with its future and its challenges. It seems to be established that uncertainty visualization is more useful at the planning stage of an application than at the user stage. This makes me think back to an earlier discussion on temporal GIS, where we talked about how the important aspect of temporal GIS lay in its analytical capabilities rather than its representational capabilities. While I do not deny the positive effect visualization might have on analysis, I question whether it should be the aspect of uncertainty that receives the most attention.

Two of the challenges proposed by the article are developing tools to interact with depictions of uncertainty and handling multiple kinds of coexisting uncertainty. Might representation in some instances prove more trouble than it is worth? Might representational practices at times obscure data that could be understood as just data? I want to note that I am asking these questions in earnest, not rhetorically. I guess this boils down to a question I have probably asked all semester: how do we evaluate what is important enough, or useful enough, to invest time in?

Wyatt

Are We Certain that Uncertainty is the Problem?

Thursday, April 4th, 2013

Unwin’s 1995 paper on uncertainty in GIS was a solid overview of some of the issues with data representation that might fly under the radar or be assumed without further comment in day-to-day analysis. He discussed vector (or object) and raster (or field) data representations, and the error inherent in the formats themselves rather than in the data per se.

While the paper itself is clear and fairly thorough, I can’t help but question whether error and uncertainty are worth fretting over. Of course there is error, and there will always be error in a digital representation of a real-world phenomenon. The people who rely on GIS outputs, such as scientists and policy makers, are not oblivious to these representational flaws. For instance, raster data is constrained by resolution: it is foolhardy to assume that the land cover in every inch of a 30-meter grid cell is exactly uniform. It is also wrong to assume that highly mobile data (like a flu outbreak) remain stationary over the interval between sensing and mapping. There are ways around this, such as spatial and temporal interpolation algorithms and other spatial statistics, and I feel that estimates are often sufficient. If they are not, then perhaps the problem is not with the GIS but with the data collection. Better data collection techniques, perhaps involving more remote sensing (physical geography) or closer fieldwork (social geography), would go far toward lessening error and uncertainty.
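
For readers unfamiliar with the interpolation “ways around this” mentioned above, here is a minimal inverse-distance-weighting sketch (my own toy example; the sample points, values, power parameter, and query locations are all assumed) showing how values at unsampled locations can be estimated from nearby observations:

```python
import numpy as np

def idw(sample_xy, sample_values, query_xy, power=2.0):
    """Inverse distance weighting: nearby samples count more than distant ones."""
    d = np.linalg.norm(sample_xy[None, :, :] - query_xy[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w * sample_values[None, :]).sum(axis=1) / w.sum(axis=1)

# Hypothetical field measurements (x, y) with observed values (e.g., temperature).
samples = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
values = np.array([12.0, 15.0, 11.0, 18.0])

# Estimate the value at locations that were never measured.
queries = np.array([[5.0, 5.0], [2.0, 8.0]])
print(idw(samples, values, queries))
```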

With all of that said, I am not about to suggest that GIS is perfect. There is always room for growth and improvement. But, after all, the ultimate purpose of visualizing data is to understand and gain a mental picture of what is happening in the real world. An error-free or completely “certain” data representation is not only impossible within human limitations, it is also not particularly necessary.

– JMonterey

Thursday, April 4th, 2013

No matter how good technology becomes, we will always face challenges with data uncertainty and error; the question is whether we can develop appropriate techniques to mitigate the effects of this noise and come away with the correct signal. As MacEachren et al. (2005) point out in their article “Visualizing Geospatial Information Uncertainty”, we base decisions on this information, and the uncertainty inherent in the data must be taken into account.

There are multiple dimensions of uncertainty, as the authors point out, ranging from the credibility of a source to the precision of a physical variable, and these compound, affecting the correctness of the end result. They operate across many scales as well, including the direct attributes of the information, the specific context or location of the information (which may not be the context you want to apply it to), and time. It all seems very complicated when examined through this framework… but it is important to take these dimensions into account in order to have confidence in your product.

Personally, I have experienced a lot of uncertainty while trying to create a global map of administrative subdivisions. Every country collects data at different resolutions and at different times, yet these countries are supposed to be contiguous, as we well know. The borders do not always align, but who is right? Furthermore, the issue is compounded when you consider the global land mass as a whole. We want an accurate total area of land surface, but if you trust each country to represent its land correctly and then end up with an incorrect total, who is wrong? Where do you remove land? Where do you add it? These are some of the challenges I have faced with uncertainty, and I was not qualified to make the right call.
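
A toy illustration of that border problem (entirely made-up polygons, not the dataset described above): when neighbouring “countries” overlap slightly, the sum of their individually reported areas no longer matches the area of their union, and nothing in the data says whose boundary should give way.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Two hypothetical neighbouring countries whose shared border was digitized
# independently, so country_b overlaps country_a by a sliver.
country_a = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
country_b = Polygon([(9.5, 0), (20, 0), (20, 10), (9.5, 10)])

sum_of_reported_areas = country_a.area + country_b.area
area_of_union = unary_union([country_a, country_b]).area

print(f"sum of individual areas: {sum_of_reported_areas}")   # 205.0
print(f"area of the union:       {area_of_union}")           # 200.0
print(f"disputed sliver:         {sum_of_reported_areas - area_of_union}")
```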

What I didn’t do at the time was try to quantify and visualize the uncertainty, which, as the authors say, is crucial to making sure the data are usable and that you are confident they are suitable for answering the questions you are trying to answer.

Pointy McPolygon

What’s the hard part now?

Thursday, April 4th, 2013

Remote sensing and GIS technology has changed significantly since Wilkinson (2007) wrote his review on how the two fields overlap. Hyperspectral imagery is now commonplace, and the software is well equipped to deal with it. We still struggle with handling error and uncertainty, but there are prescribed ways of dealing with each issue. Corrections for atmospheric conditions, topography, viewing angle, sensor characteristics, and georeferencing are now routinely applied to eliminate some of the error introduced during data collection. Techniques like fuzzy logic help deal with uncertainty, although it remains an issue. As data collection techniques improve further, the need to deal with this uncertainty will become less and less pressing.

Most of the current issues still lie in data models. The complementary nature of GIS and remote sensing is evident; however, the two technologies speak different languages in the very situations where we expect them to communicate and reinforce that complementary relationship. This becomes even more difficult when we try to represent more complex relationships that are no longer two-dimensional, or that involve hierarchical classifications. Personally, I find that the commercial software packages for the two technologies interact quite well when performing simple tasks, like making a supervised classification and turning it into a GIS layer (as sketched below). However, as the data become more complex, and the classifications with them, the ability of the packages to communicate with each other degrades quickly.
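
A minimal sketch of that classification-to-GIS-layer hand-off (the file names and the placeholder classified array are my own assumptions, standing in for what the commercial packages do internally): the classified result is written out as a georeferenced GeoTIFF that any GIS can read.

```python
import numpy as np
import rasterio

# Reuse the georeferencing of the source scene so the result lines up in the GIS.
with rasterio.open("scene.tif") as src:            # assumed source imagery
    profile = src.profile

# Placeholder for a classified image (integer class codes) produced elsewhere.
classes = np.zeros((profile["height"], profile["width"]), dtype="uint8")

profile.update(count=1, dtype="uint8", nodata=0)

with rasterio.open("classification.tif", "w", **profile) as dst:
    dst.write(classes, 1)
```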

 

Pointy McPolygon

Problems of classification

Thursday, April 4th, 2013

Since Wilkinson’s paper in 1996, many satellites have been put into orbit and many millions of gigabytes of satellite imagery have been collected. More importantly, with the arrival of the digital camera there has been an explosion in the number of digital images being captured. People were quick to spot the opportunity in leveraging the data in these images, and a lot of research has been conducted in the image processing domain (mainly in biometrics and security). That said, some of the most successful approaches from other domains have not performed as well when applied to satellite images, and the challenges outlined in the paper still hold true today.

As I understand it, this is mainly because of the great diversity in satellite images. Resolution is only one part of the equation; the main problem lies in the diversity of the things being imaged, which makes it very difficult to come up with training samples that are a good fit. Traditional machine learning techniques based on supervised learning therefore have a hard time. The problem is compounded by the fact that when we classify satellite images, we are generally interested in extracting not one but several classes simultaneously, each with high accuracy. The algorithms do perform well when classification is done one image at a time, but significant human involvement is needed to select good training samples for each image, and to the best of my knowledge no technique exists that can classify satellite images fully automatically.
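
To make the “one image at a time, with hand-picked training samples” workflow concrete, here is a minimal sketch (the band values, labels, and choice of a random forest are my own assumptions, not anything prescribed in the paper) of supervised per-pixel classification:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical 4-band image, 100 x 100 pixels (rows, cols, bands).
image = rng.random((100, 100, 4))

# Hand-digitized training samples for this one image: pixel spectra plus class
# labels (0 = water, 1 = vegetation, 2 = urban). In practice an analyst picks these.
train_pixels = rng.random((300, 4))
train_labels = rng.integers(0, 3, size=300)

# Fit on the labelled samples, then classify every pixel of this image.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_pixels, train_labels)

classified = clf.predict(image.reshape(-1, 4)).reshape(100, 100)
print(classified.shape, np.unique(classified))

# A new image with a different scene or illumination usually needs its own
# training samples, which is exactly the human bottleneck described above.
```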

-Dipto Sarkar

GIS&RS

Thursday, April 4th, 2013

Brivio et al.’s paper presents a case study integrating remote sensing and GIS to produce a flood map. After explaining the methodology and the results of other methods, the paper finds the integrated method to be 96% accurate.

This speaks to the value of interdisciplinary work. While RS applications on their own proved inadequate, a mixing of disciplines gave a fairly trustworthy result. While I understand the value of highly specialized knowledge, having a baseline of capability outside of one’s specific field is useful. I remember Korbin explaining in 407 that knowing even a bit of programming can help you when working with programmers, as understanding the way one builds statements, as well as the general limits of a given programming language, gives you an idea of what you can ask for. The same is true for GIS/RS: knowing how GIS works and what it might be able to do is useful for RS scholars seeking help and collaboration, and vice versa. I think McGill’s GIS program is good in this respect. I got to dip my toes into a lot of different aspects of GIS (including COMP) and figure out what I like about it. If I end up working with GIS after I graduate, I know that the interdisciplinary nature of the program will prove useful.

Wyatt

Time or Space

Thursday, April 4th, 2013

Geospatial analysis can be no better than its original inputs, much like a computer is only as smart as its user. In the field of remote sensing, this maxim may be on its way to becoming obsolete. Brivio et al. show, in a case study of a catastrophic inundation in Italy, that they can compensate for the temporal gap between the capture of the remotely sensed data and the peak of the flood, which occurred a few days earlier.

The analysis, however, was not completed with synthetic aperture radar images alone. Had it not been for the integration of topographic data, it is unlikely that similarly successful results could have been obtained.

With any data input, temporal and spatial resolution are limiting factors. Brivio et al. highlight this by noting the use of NOAA thermal infrared sensors, which offer a finer temporal resolution but lack spatial resolution. Conversely, the SAR images used in the case study have a relatively high spatial resolution but come with longer intervals between acquisitions.

Given Brivio et al.’s successful reconstruction of the flood extent, it may be advantageous, if need be, to choose an input with a finer spatial resolution in exchange for a coarser temporal resolution, and to compensate for the temporal delay with additional inputs.

Break remote sensing down into its two main functions: collection and output. One will inevitably lag behind the other, but eventually the leader will be overtaken by the follower, only for it to happen again some time down the road, much like two racers attached by a rubber band.

What all of this means for GIS: eventually the output from remote sensing applications will outstrip the computing power of geographic information systems, at which point the third racer, processing, will become relevant, if it isn’t already.

GIS and RS: how do we account for variability?

Wednesday, April 3rd, 2013

Brivio et al.’s article “Integration of remote sensing data and GIS… for mapping of flooded areas” presents the very common process of using RS data and GIS to map flooding and flood plains. Although the article shows how the integration of RS and GIS can accurately map a flood, with a reported method accuracy of 96%, it only looks at a single event and study site. In my experience, such results do not always hold, as integration methods, even when they are the same, often vary in accuracy from one location to another. Furthermore, event duration, intensity, and geologic substrates often interfere with flood area prediction from RS data and GIS, as variations can shift the location of water within minutes to hours. To clarify, an area may be flooded at certain points during the flood period and dry during others (i.e., it may transition from wet to dry to wet), which undermines the accuracy of the RS data and the GIS prediction. Fundamentally, water changes how the surrounding environment reacts, modifying where floods occur. As floods react to the environment, areas often become flooded for only minutes and, as such, are never recognized as flooded, in GIS predictions, RS data, or human reports (although they were flooded, if only for minutes).

To better predict flooded areas, TWIs (topographic wetness indices) and DEMs (digital elevation models), when compared to flow paths (a cost-distance matrix), may in fact predict flooded areas better when used in conjunction with RS data than the integration of RS data with cost-distance matrices alone. In addition, more datasets and studies would help to create a more general integration protocol and better predictive area estimates for floods. To elaborate, the techniques in the article work well on the study area but may not work on other floods; by adding data from more types of floods, the technique could be adapted to other situations. Multiple integrations with multiple datasets would also reduce error and produce greater accuracy. The “big” question that still remains unanswered after this article, however, is: how can we account for ecosystem and flood variability within GIS and RS datasets?
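
For readers unfamiliar with the index mentioned above, the topographic wetness index is usually written TWI = ln(a / tan β), where a is the specific upslope contributing area and β the local slope. A minimal sketch (the DEM-derived inputs are assumed to have been computed already, e.g. by a flow-accumulation routine) looks like this:

```python
import numpy as np

def topographic_wetness_index(specific_catchment_area, slope_rad):
    """TWI = ln(a / tan(beta)); wetter, flood-prone cells get higher values."""
    tan_beta = np.tan(np.maximum(slope_rad, 1e-6))   # avoid division by zero on flat cells
    a = np.maximum(specific_catchment_area, 1e-6)    # avoid log(0) on ridge cells
    return np.log(a / tan_beta)

# Hypothetical per-cell inputs derived from a DEM elsewhere:
# specific catchment area (m^2 per unit contour length) and slope (radians).
sca = np.array([[5.0, 20.0], [150.0, 900.0]])
slope = np.radians(np.array([[12.0, 8.0], [3.0, 0.5]]))

print(topographic_wetness_index(sca, slope))
```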

C_N_Cycles

Visualizing Uncertainty: mis-addressed?

Wednesday, April 3rd, 2013

“Visualizing Geospatial Information Uncertainty…” by MacEachren et al. presents a good overall view of geospatial information uncertainty and how to visualize it. That said, many parts seem to convey that all uncertainty must be defined in order to make correct decisions. In my realm of study, although it would be nice to eliminate uncertainty or place it in a category, simply recognizing that there is uncertainty is often definition enough to make informed decisions based on the observed trends. Furthermore, the authors seem to separate the different aspects of the environment, or factors, that lead to uncertainty from how each may be visualized. The sheer number of definitions and descriptions obscures what the factors producing uncertainty really are and what visualization issues follow from them. The clarity with which visualized uncertainty is presented contrasts sharply with the ambiguity of the authors’ definitions of uncertainty and its representation. Even so, the studies and the ways uncertainty can be visualized are a great help in decision making and in the recognition of further uncertainties.

One aspect that would have helped in addressing uncertainty and its visualization would have been to integrate ideas and knowledge from the emerging field of ecological stoichiometry, which looks at uncertainty, the flow of nutrients and energy, and the balance within ecosystems to answer and depict uncertainty. I believe that ecological stoichiometry would address many of the challenges in the identification, representation, and translation of uncertainty within GIS and help to clarify many problems. This stoichiometric approach falls within the multidisciplinary approach to uncertainty visualization described in the article. However, as the article is limited to more generally understood approaches, rather than more complex ones such as stoichiometry, do some of the proposed challenges in the recognition and visualization of uncertainty not exist? I would argue yes, but then again more challenges may arise in the depiction, understanding, and translation of uncertainty.

C_N_Cycles

Error prone GIS

Monday, April 1st, 2013

In any data-related field, great effort is put into ensuring the quality and integrity of the data being used. It has long been recognized that results can only be as good as the data itself; moreover, the quality of a dataset is no better than the worst apple in the lot. Hence, data-intensive fields invest heavily in pre-processing to understand and improve data quality. GIS is no exception when it comes to being cautious about data.

The various kinds of data handled in GIS make the problem of error more profound. Not only does GIS work with vector and raster data, it also needs to handle data in the form of tables. Moreover, the way the data is procured and converted is also a concern. Data is often obtained from external sources as tables of incidents with one or more fields containing the location of each event. Usually this data was not collected for the specific purpose of spatial analysis, so the locational accuracy of the events varies greatly. When these files are converted into shapefiles, the shapefiles inherit the inaccuracy built into the dataset.
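
A minimal sketch of that table-to-shapefile conversion (the file names, column names, and coordinate reference system here are assumed for illustration): the resulting point layer is only as accurate as whatever was typed into the location fields.

```python
import pandas as pd
import geopandas as gpd

# Hypothetical incident table from an external source with location fields.
incidents = pd.read_csv("incidents.csv")        # assumed columns: id, lon, lat, type

# Build point geometries straight from the coordinate fields; any imprecision
# in those fields (rounded coordinates, geocoded addresses, etc.) is carried over.
gdf = gpd.GeoDataFrame(
    incidents,
    geometry=gpd.points_from_xy(incidents["lon"], incidents["lat"]),
    crs="EPSG:4326",                            # assumed coordinate reference system
)

gdf.to_file("incidents.shp")
```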

One thing to remember, however, is that the aim of GIS is to abstract reality into a form that can be understood and analysed efficiently. It is therefore important not to place too much emphasis on how accurately the data fits the real world. The emphasis should instead be on finding the level of abstraction that is appropriate for the application scenario and then understanding the errors that can be accepted at that level of abstraction.

-Dipto Sarkar