Archive for the ‘geographic information systems’ Category

The Precautionary Principle

Friday, March 9th, 2012

In discussing uncertainty, particularly with respect to climate change and methane emissions as Foody does, the article never mentions the precautionary principle. I find this rather curious, as the principle is an important concept that exists largely because of uncertainty about what impact an action will have. Having discussed the precautionary principle recently in another class, Analyzing Sustainability, I also see it as key to Hunter and Goodchild’s treatment of decision making under recognized uncertainty, and it would have been good to see a little more on it.

In relation to sustainability issues, this principle comes up often as one school of thought: because uncertainty exists and the consequences of a decision could be detrimental, one should follow the approach whose outcome is not detrimental, or is estimated to be the least detrimental. We discussed fisheries in particular, since fish populations are unknown and only catch data is available, leaving biologists to guess at the actual population and to make recommendations on fishing policy and allowable catch based largely on their estimates of how viable, large, healthy, etc. the population actually is.
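To make the fisheries example concrete, here is a minimal sketch (my own illustration, not from either article) of what a precautionary catch rule might look like: the quota is set against a pessimistic lower bound on the population estimate rather than the best guess, so the more the estimates disagree, the more conservative the policy becomes. All numbers are invented.

```python
import statistics

def precautionary_quota(population_estimates, harvest_rate=0.1, z=1.64):
    """Set an allowable catch from uncertain population estimates.

    Rather than harvesting against the mean estimate, harvest against
    a lower confidence bound: the more the estimates disagree, the
    smaller the quota becomes.
    """
    mean = statistics.mean(population_estimates)
    sd = statistics.stdev(population_estimates)
    lower_bound = max(mean - z * sd, 0)  # pessimistic population size
    return harvest_rate * lower_bound

# Three hypothetical estimates of the same fish stock:
estimates = [120_000, 90_000, 150_000]
print(precautionary_quota(estimates))  # 7080.0, vs 12000.0 on the mean
```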

-Outdoor Addict

Scientific advising

Friday, March 9th, 2012

I don’t know why, but I didn’t feel like the Foody article was that useful to me. Of course, it was labelled as a progress report, but it had little depth and little to add.

The Hunter article also glosses over some details that would have been appreciated. The idea of ‘meta-uncertainty’ was very interesting, though, and I think it is one of the main reasons we need experts and education in geography and uncertainty. The development of methods to determine error is very important for future projects. However, the communication of error is the other key problem, especially when submitting results and conclusions to political bodies. This is the very reason that people in government appoint advisors from the scientific community to help interpret its findings. I don’t, however, think that developing a good way of communicating and displaying uncertainty is going to do away with scientific advisors. There must be more to knowing about uncertainty than just looking at a table or graph like the examples below. I think that part of truly being able to understand error in results and conclusions is knowing about the processes and methods used to reach them. How can we measure the potential loss of accuracy, or other problems, arising from a workflow? I don’t think that is going to be fully possible with just a graph or table. Decision makers still need to know, or be told about, the processes that were gone through.
The kinds of systems mentioned in the papers seem to be geared towards researchers. This doesn’t necessarily mean they will be translatable into ‘plain language’ for decision makers (the visualisations being developed here http://slvg.soe.ucsc.edu/unvis.html certainly seem complicated to understand, especially with multidimensional data), and so perhaps another set of systems and visualisations needs to be developed. Maybe we will need one set of visualisations and metrics for understanding, and another set for communicating results.
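As a toy illustration of the “table or graph” problem (my own sketch, not from the readings; all numbers invented), even the simplest uncertainty display, an error bar, presumes the viewer knows what the bar encodes:

```python
import matplotlib.pyplot as plt

# Hypothetical model outputs with their standard errors.
scenarios = ["Model A", "Model B", "Model C"]
estimates = [2.1, 3.4, 2.8]
std_errors = [0.3, 0.9, 0.5]

fig, ax = plt.subplots()
# Plot 95% intervals. The bar alone doesn't say whether it spans
# 1 SE, 2 SE, or the full range -- exactly the communication gap
# between researchers and decision makers discussed above.
ax.errorbar(scenarios, estimates, yerr=[2 * s for s in std_errors],
            fmt="o", capsize=5)
ax.set_ylabel("Estimate (95% interval)")
plt.show()
```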
Finally, would making decisions necessarily hinge upon “this conclusion is more accurate, so we’ll choose this one”? Would a decision maker, presented with a certain finding that was deemed to have the least errors always choose to support that conclusion?
Either way, having a better way of understanding and communicating error and uncertainty will surely be beneficial to the scientific community, by making conclusions a little more transparent and understandable to the public.
-?

Transparency is convincing

Thursday, March 8th, 2012

Climate NYC’s post raises some good points. The skeptics who question climate change are probably in the minority in today’s trendy world (although perhaps that is just my opinion because I’m a geography and environment major). Although these opinions are for the most part unhelpful exaggerations, the use and necessity of the skeptic is undeniable. Without the skeptics, politicians and others would be able to convince the public of anything. Al Gore exaggerates in the film An Inconvenient Truth, and it hurts his reputation. Skeptics call him out on this and bring his claims back down to earth.

The initial predictions that the IPCC reported were not very accurate. Its climate models did not do a successful job of predicting the rise in CO2 and its impacts; in fact, they underestimated these variables. That being said, the public does not want vague estimates. The public wants precise numbers. The IPCC therefore feels compelled to provide the public with quantitative information that may potentially break bad habits.

More recently, as shown by Climate NYC, the IPCC offers multiple climate change scenarios. Not only does it provide the graphs shown below, but it also shows future estimates of temperature rise under different levels of CO2 reduction. These methods of visualizing error and uncertainty have a positive impact on the public. The transparency of these reports counteracts the skeptics’ critiques and benefits the research of the IPCC more than if it chose to exaggerate as a scare tactic.

Although perhaps not necessary in all academic reports, it would be interesting to see more visualization of error and uncertainty. Granted, some error and uncertainty cannot be quantified and can only be listed qualitatively, but its inclusion gives me the impression of a more honest report.

Andrew


Climate Models, Uncertainty and Unmade Policy Decisions

Thursday, March 8th, 2012

I think we’d be remiss to cover the topic of uncertainty without thinking about the role it plays when scientific research or other forms of data are transmitted from the academic or research realms into the public world of policy debates. As Giles M. Foody notes, problems with uncertainty “can lead to a resistance against calls for changes because of the uncertainties involved” (114). I think he’s right, but this is a vast understatement.

In the climate change debate now swirling through most of the globe, uncertainty could be described as one of the main factors propelling so-called climate skeptics, naysayers, and those generally unwilling to acknowledge that human energy consumption might be influencing the global climate. Just take a look at this skeptic’s blog post. He names uncertainty in climate science as the number one reason not to move too fast on this vexing global issue. In fact, much of the opportunity for debate on the issue stems from varying points of view on just how certain the hundreds and thousands of climate models out there might be in predicting a warming world. Just try Googling “climate” and “uncertainty” and you’ll find an avalanche of information, some of it more scientific than the rest.

Foody does a nice job of summarizing this paradigm when he writes about how “end-users and decision-makers often fail to appreciate uncertainty fully” (115). I couldn’t agree more. What most climate scientists will tell you is that while their models contain a great deal of uncertainty, which varies depending on what type of model you’re discussing or how it’s been parameterized, the overall trends are pretty clear. Most of the work done in this field concludes that a relationship does, in fact, exist between CO2 emissions and a warming global climate. Yet the importance of uncertainty here lies not within the scientific community but with publicly debated policy decisions, where uncertainty and error can conveniently become a political football. Just look at some of the variation in predictions from climate models in the IPCC’s 2001 report:

Figure 1. A selection of climate models and their predictions of globally averaged surface air temperature change in response to emissions scenario A2 of the IPCC Special Report on Emissions Scenarios, under which CO2 roughly doubles from present concentrations by the year 2100. Figure reproduced from Cubasch et al. (2001).

Yes, there’s some definite variation between models, a degree of uncertainty. But how does this compare with the idea we discussed in class about scale? Can we ever expect complete accordance and certainty amongst climate models when the issue operates on such a vast, global scale? Should we expect it on smaller, regional scales with something as complex as the atmosphere’s inputs and outputs and the sun’s radiation?
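For a sense of how that spread is usually summarized (a sketch with invented numbers, not the actual Cubasch et al. data), an ensemble of model projections is typically reduced to a mean plus a range:

```python
import numpy as np

# Hypothetical end-of-century warming (deg C) from six models run
# under the same emissions scenario -- invented values for illustration.
projections = np.array([2.8, 3.5, 4.1, 3.0, 4.8, 3.6])

print(f"ensemble mean: {projections.mean():.1f} C")
print(f"model spread:  {projections.min():.1f} to {projections.max():.1f} C")
print(f"std deviation: {projections.std(ddof=1):.1f} C")
# The spread quantifies inter-model disagreement, but note what it
# hides: every model here still agrees on the sign of the change.
```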

–ClimateNYC

The hunt for knowledge and bear Foody

Thursday, March 8th, 2012

Foody mentions uncertainty alongside knowledge and discovery. I like how, although seemingly opposite, these elements work together. Before science and empirical reasoning became the norm (I’m talking wayyy back), uncertainty was often explained through legends, ghosts, or religious anecdotes. These were ways to calm people. Sea monsters used to live far out to sea, and if you sailed too far you would fall off the earth. Without the resources available to men (and women) today, these explanations were all that was available. There were, however, some people curious enough to explore the uncertain and unknown.

Today, although uncertainty still causes anxiety amongst many people (like when the next zombie attack will occur and where the best place to hide is), it seems that it now fuels the hunt for knowledge and discovery. In academia, the focus is on the acquisition of good, sound information. We as students often take for granted that the information given to us is without error. It has recently been revealed to me that some of the information provided to students can be quite false.

I was once told that if I ran into a grizzly bear, I should not climb a tree; instead I should run down a steep slope, because bears can’t run down hills. I was amazed by this fact and told many people about this escape technique; unfortunately, I was later informed by an expert that this is not the case. Unknowingly, I had spread what I thought to be valid information when in fact the escape technique was false. I hope that my gossip will not cause anyone harm in the future!

The uncertainty touched on in both articles (and in bear escape techniques) does not refer to this type of uncertainty/curiosity, but I cannot help but see how related these words are. When looking at uncertainty in a study, I am able to see how awareness of error and uncertainty could inspire further research. I think it is human nature to be curious and to strive for perfection; the knowledge of uncertainty and error pushes us to do better research in the future.

Andrew


Visualizing Uncertainty for the Layman

Thursday, March 8th, 2012

I want to respond to the ideas put forth by “SAH” and extrapolated on by “Madskiier_JWong” in relation to visualizing error. I agree with “SAH” that most people don’t think about the error in their geospatial data (although they certainly acknowledge its existence), nor even ask about it. I also think “Madskiier” is on point when he talks about built-in “structural guidelines” for programs to show people how to build error in visually. In fact, I think this point is something the authors Gary J. Hunter and Michael F. Goodchild talk a bit about in relation to Y. Bedard’s 1987 work on strengthening geodetic control networks and defining and standardizing technical procedures, all in an effort to cut down error. On the other side, I agree more with Giles M. Foody’s perspective on uncertainty for the layman. He covers this topic well when he writes that “it is, therefore, most unfortunate that the ease of use of many GI systems enable users with little knowledge or appreciation of uncertainty to derive polished, but flawed, outputs” (116).

But I want to take this discussion a step further and, perhaps, out of the cyber-realm of the practitioner or researcher, to think about what error, or uncertainty, might actually mean for the layman who uses the end product of these practices. Hunter and Goodchild talk a bit about this in their article under the heading of “Progress in Error Visualization.” In particular, they talk about adding a layer to a map to convey uncertainty, having given cells display multiple classes, or even varying data displays to convey a level of uncertainty (55-56). Yet these practices seem far from common, at least to me, in most of the products that average people rely on in their daily lives. The authors note that their colleagues “would like to take this testing further and establish laboratories in which practitioners would be invited to use spatial databases for real-life problems while under observation” (57). Well, not to be rude, but it’s about time.

I realize I may harp on my GPS a bit too much as part of our class (hey, it’s my only form of LBS), but what happens when the data programmed into it contains errors such as changed street names, landmarks or objects at the wrong grid points, or restaurants that no longer exist? I’ll tell you what happens: I get lost or I go hungry. I realize most people accept and know about the uncertainty inherent in their GPS’s navigation, and usually compensate for it with street wisdom. Furthermore, these days most people can simply plug their GPS into a computer program that updates its geographic data (and some of these programs even have ways for users to help correct errors). But, in attempting to move us away from the technical side of Hunter and Goodchild, I can’t say I’ve ever seen cells with multiple designations or overlaid error maps on my GPS. Heck, in talking to friends, I was trying to figure out if I’d seen any programs out for common consumption that do this. I mean, wouldn’t it be novel to have my GPS tell me that the restaurant might be at that location, give or take a 10% chance of error?
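For what it’s worth, here is a minimal sketch of what that might look like (entirely my own invention; the venue name and the 30 m figure are hypothetical): the device converts a stored positional uncertainty into a plain-language radius instead of hiding it.

```python
def describe_poi(name, lat, lon, sigma_m):
    """Render a point of interest with its positional uncertainty.

    sigma_m is the standard error (metres) of the stored coordinate;
    a real device would also fold in the accuracy of its own GPS fix.
    """
    radius = 2 * sigma_m  # roughly a 95% confidence circle
    return f"{name}: within {radius:.0f} m of ({lat:.5f}, {lon:.5f})"

# Hypothetical restaurant record with a 30 m coordinate uncertainty:
print(describe_poi("Chez Example", 45.50884, -73.58781, 30))
```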

Not to belabor the point, but I thought one more example might help. In the older days before GPS, I drove to Boston using Mapquest directions (yes, Mapquest, not the now ubiquitous Google Maps). Unfortunately for me, back then Boston always seemed to be under some kind of street or highway renovation program. Never, never did my directions get me to where I was going. I almost always got lost and, being a novice driver, very lost. Wouldn’t it have been nice to have been informed about this possible uncertainty in my directions before I left my computer behind at home? My contention is that most of these companies don’t want to admit fault. We instead have to rely on outside Web sites to point out physical, digital, and (if you see their post on Brazil) socially incorrect errors.

I ask you all: Do you know of any programs you use on your computer, phone, GPS, etc. that shows uncertainty/error? How do you think these kinds of companies might be able to work such a feature in? Is it feasible?

–ClimateNYC

Hunter and Goodchild and Visualizing Uncertainty

Thursday, March 8th, 2012

Hunter and Goodchild explore current and future ways of visualizing error in GIS. An important distinction is made in the varied audience of these visualization tools, and how different representations should be used to target GIS novices and experts.

I am a big fan of structural guidelines built into databases to encourage or discourage certain practices. These prompts are fairly unobtrusive and stem errors at the data collection stage, pre-empting any wasted time on faulty analysis. Acknowledging the variability of data from different sources is increasingly important in an Internet filled with geospatial data.

The image below shows uncertainty in contour lines by the size of gaps in a contour; larger gaps represent greater uncertainty. To me, this visualization provokes deeper reflection on the limitations of the vector data model in representing a continuous variable such as elevation. However, this thought process is unlikely to occur for GIS beginners. The eye is still capable of recognizing the general pattern and connecting the dots. Furthermore, it is unclear whether the gaps represent individual spots where it was impossible to obtain elevation data, or whether the entire contour line was measured with low resolution or precision.
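One way to mimic the technique (my own sketch, not Pang’s implementation; the contour and uncertainty values are invented) is to break the contour into tiny segments and drop segments more often where uncertainty is high, so low-certainty stretches show larger gaps:

```python
import numpy as np
import matplotlib.pyplot as plt

# A single hypothetical contour line with a per-vertex uncertainty.
t = np.linspace(0, 2 * np.pi, 400)
x, y = np.cos(t), np.sin(t)
uncertainty = 0.5 + 0.5 * np.sin(3 * t) ** 2  # invented, in [0.5, 1]

# Drop each small segment with probability proportional to its local
# uncertainty: uncertain stretches end up visibly gappier.
rng = np.random.default_rng(0)
keep = rng.random(len(t) - 1) > 0.6 * uncertainty[:-1]
for i in np.flatnonzero(keep):
    plt.plot(x[i:i + 2], y[i:i + 2], color="black")
plt.gca().set_aspect("equal")
plt.show()
```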

The probability surfaces suggested by the authors are more concrete representations of the various possible outcomes. I believe it is important to show that analyses can still be derived, so that users are not discouraged by the increasing visibility of uncertainty and do not abandon the tools. A balanced message must be achieved to expose the limitations of GIS in a constructive way.

– Madskiier_JWong

Image Source: Pang, A. (2001) “Visualizing Uncertainty in Geo-spatial Data” http://www.spatial.maine.edu/~worboys/SIE565/papers/pang%20viz%20uncert.pdf

Foody and Uncertainty

Thursday, March 8th, 2012

Foody’s article offers a fairly general overview of uncertainty and its impact on GIS analyses. I did not find it an enjoyable read, as it touched on various implications of uncertainty for decision-making but did not go in depth on ways to handle uncertainty in a GI system. The chief insight brought out by Foody concerned the paradigmatic shift from the absolute accuracy of data to its fitness or appropriateness for the research question.

sah mentions the importance of making uncertainty explicit at each step. To me, fuzzy sets and fuzzy logic (things can partially belong to different categories, or have a percentage likelihood of being a certain class) seem like one of the most intuitive tools for computer users to represent spatial uncertainty. “Stratified partitions” have also been used in other cases to track this uncertainty through different scales [1]. Additionally, fuzzy sets are most valuable for transitional zones (which analyses tend to be most interested in for emerging phenomena), where uncertainty is highest. Despite these kinds of parametric solutions, however, there remain ontological issues of deciding what categories are included in the fuzzy set. Given the increasing amount of data available through data mining, uncertainty needs to be handled in a more robust way that is more interoperable than fuzzy sets with predefined classes. Finally, to naïve users who aren’t familiar with the bits, bytes, and algorithms behind the scenes that drive classification and visualization in a GIS, is fuzzy logic an intuitive way of representing uncertainty?
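As a minimal sketch of the fuzzy-set idea (my own toy example; the classes and NDVI thresholds are invented), a pixel receives a degree of membership in each class instead of being forced into one:

```python
def fuzzy_landcover(ndvi):
    """Toy fuzzy memberships for a pixel based on its NDVI value."""
    def ramp(v, lo, hi):  # linear membership ramp clipped to [0, 1]
        return min(max((v - lo) / (hi - lo), 0.0), 1.0)

    return {
        "forest": ramp(ndvi, 0.4, 0.7),
        "grassland": ramp(ndvi, 0.15, 0.35) * (1 - ramp(ndvi, 0.45, 0.6)),
        "bare_soil": 1 - ramp(ndvi, 0.05, 0.25),
    }

# A transitional pixel belongs partly to more than one class,
# which is exactly where the fuzzy representation earns its keep:
print(fuzzy_landcover(0.42))
```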

– Madskiier_JWong

[1] Beaubouef, T., Petry, F. (2010) “Fuzzy and Rough Set Approaches for Uncertainty in Spatial Data” http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA530497&Location=U2&doc=GetTRDoc.pdf

Reviewing Geospatial Database Uncertainty with New Technologies

Wednesday, March 7th, 2012

The paper published by Hunter et al. in 1991 categorized the challenges of uncertainty in Geographic Information Science (GIS) into three types: definition, communication, and management. The authors began with a presentation of error visualization and discussed uncertainty in visualization. They then pointed out approaches for error management in geospatial databases, as well as future research directions.

The authors also mentioned that it is very helpful to look at system logs when determining uncertainty. That might have been a good solution at the time this paper was published, but system log analysis has since become a great challenge in pattern recognition research due to the high dimensionality of the logs. So the new question becomes: what is the tolerance for uncertainty in system log analysis?

Uncertainty reduction and absorption were proposed as two solutions for error management, and the authors used several good examples to demonstrate them. But with new challenges in GIS research (e.g., data intensity), these two solutions should be adapted accordingly.

The authors also mentioned that the main cause of poor utilization was a lack of confidence in the system, owing to the fact that users cannot obtain enough information about the quality of the databases and their unacceptable errors. This may have been true in the past, but nowadays Bayesian network techniques are used to handle this problem, as presented in Laskey, K.B., Wright, E.J. & da Costa, P.C.G., 2010. Envisioning uncertainty in geospatial information. International Journal of Approximate Reasoning, 51(2), pp. 209-223.
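The core of that idea can be shown with a single application of Bayes’ rule (a one-node toy version of what a full Bayesian network chains across many variables; all probabilities here are invented):

```python
def update_confidence(prior, p_pass_given_good, p_pass_given_bad):
    """Revise belief that a record is good after a passing quality check.

    A Bayesian network generalizes this by chaining such updates
    across many related quality variables; this is the one-node case.
    """
    evidence = (p_pass_given_good * prior
                + p_pass_given_bad * (1 - prior))
    return p_pass_given_good * prior / evidence

# 70% prior confidence; good records pass the check 95% of the
# time, bad records 30% of the time (hypothetical numbers):
print(update_confidence(0.70, 0.95, 0.30))  # -> ~0.88
```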

–cyberinfrastructure

Uncertainty and Decision-Making

Wednesday, March 7th, 2012

I really like the article by Hunter and Goodchild (1995) because it thoroughly addresses the different aspects of uncertainty in GIS. Further, the various diagrams are effective in conveying and summarizing the authors’ main points. I especially appreciate the discussion concerning the variation in the impact of spatial databases upon decision-making. The authors argue that in order to assess how much uncertainty will affect the decision-making process, one must consider the type of decisions to be made. Hunter and Goodchild conclude that uncertainty should be most critically assessed when decisions “carry political, high-risk, controversial or global implications” (58). Although this seems obvious, it is important to remember that the consequences of an inability to handle uncertainty are often confounded by persuasive discourse in these types of decisions. Because the stakes attached to such decisions are usually high, politicians or advocates are tempted to play up any uncertainty in the product in a way that supports their position. Foody (2003) also recognizes this danger and warns that “emotionally colored language is used in relation to issues associated with a large degree of risk” (114). Furthermore, I would be interested to know more about how and to what degree uncertainty impacts decisions. Is uncertainty most influential when presented early in the decision-making process or near the end of it? Finally, does the depiction of uncertainty actually lead actors to make better decisions? Under what circumstances does it lead to worse ones?

There are a few things in the article that I still need clarified. For example, Figure 4 puzzles me: why does knowledge about uncertainty decrease when a person transitions from the Learning Phase into the Knowledgeable Phase? Also, with respect to uncertainty absorption, the authors claim data producers are “preparing data quality statements from which users may make their own evaluation of the suitability of data for their purpose”. Is this just metadata about how the data was collected? Does this force the decision of uncertainty absorption onto the users? The way in which producers and vendors can absorb uncertainty is still vague to me.


Ally_Nash

Learning to Read Uncertainty

Tuesday, March 6th, 2012

Foody separates the concept of uncertainty into two types: ambiguity, defined as “the problem of making a choice between two or more alternatives”, and vagueness, the problem of “making sharp or precise distinctions” (114). Personally, I find this distinction confusing without more concrete examples, or without further explanation of how the other “terms” used for uncertainty relate to these two groups. For instance, is reliability an issue related to ambiguity? Since I have always thought of uncertainty in terms of a balance between accuracy and precision, am I right to assume accuracy falls under the ambiguity type of uncertainty and precision falls under the vagueness type? Also, it would have been helpful to explain the “sorites paradox” (114).

The article, however, did spark my interest in how naïve users perceive uncertainty, especially regarding topics addressed by public policy. As the author mentions, uncertainty is inherent in data. But even if uncertainty were conveyed through GIS, how would people interpret it? Does “reading” uncertainty require a steep learning curve? Does the representation used to convey uncertainty have an effect on the faith readers place in the conclusion of the study? The image above shows the different ways in which dots can be used to depict uncertainty (MacEachren et al., 2005). It would be very interesting to see what kinds of biases we have toward different methods of representation. Perhaps our biases will lead us to believe uncertainty is less “significant” if it is shown by different shades of one color rather than by two different colors. Empirical studies from human cognition and geovisualization will be valuable for answering these questions.
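As a quick way to probe that bias question (a sketch of my own using random data, not a figure from MacEachren et al.), one can render the same uncertainty values under two encodings and compare impressions side by side:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x, y = rng.random(200), rng.random(200)
uncertainty = rng.random(200)  # hypothetical per-point uncertainty

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
# Encoding 1: shades of a single hue (uncertain points fade out).
ax1.scatter(x, y, c=uncertainty, cmap="Greys_r")
ax1.set_title("one hue, varying lightness")
# Encoding 2: the same values on a two-hue diverging ramp.
ax2.scatter(x, y, c=uncertainty, cmap="coolwarm")
ax2.set_title("two hues")
plt.show()
```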

Ally_Nash

MacEachren et al. (2005). Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know. Cartography and Geographic Information Science, 32(3), 139-160.

More to Uncertainty

Tuesday, March 6th, 2012

In my previous post on uncertainty, I mentioned my interest in Foody’s point on zoning. In his article entitled “Uncertainty, knowledge discovery and data mining in GIS”, he discusses how uncertainty is inherent in geographical data and, furthermore, suggests it is compounded by the way we use our data. He says, “[c]hanges in administrative boundaries, for example, significantly handicap population studies, with incompatible zoning systems used over time”. I said in my last post that uncertainty appears to be another challenge that may be mediated by making assumptions explicit. I would like to examine this point further.

In saying that explicitness can mediate this problem, I don’t assume that merely laying out steps can “fix” error and uncertainty; rather, understanding, accepting, and most importantly, working with these issues at each step is incredibly important. Similarly to scale, uncertainty is something that (while not “applied”, as scale is) occurs at almost every stage of working with data, from observation to analysis. This can be problematic because “layers” of uncertainty can compound at various stages, resulting in even greater amounts of uncertainty by the end of the project (see the sketch below). So I believe that by recognizing uncertainty at each stage, it becomes much easier to work with. And as Foody notes, GIS can be thought of as “a means of communicating information”, which can extend to a means of communicating error. However, I also believe that understanding and communicating uncertainty and error can lead to better-formulated and deeper questions, which may make use of error and uncertainty in the data and factor them into the question itself, similarly to how zoning errors related to scale can be used to see beyond the surface of the data.
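Here is the compounding effect in miniature (my own sketch; the stages and error magnitudes are invented). If each stage contributes an independent positional error, the standard errors combine in quadrature and the total grows with every stage:

```python
import math

def propagate(step_errors):
    """Combine independent per-step standard errors in quadrature."""
    return math.sqrt(sum(e ** 2 for e in step_errors))

# Hypothetical positional errors (metres) added at four stages of a
# workflow: digitizing, reprojection, overlay, and generalization.
workflow = [2.0, 1.5, 3.0, 0.5]
print(f"{propagate(workflow):.2f} m")  # 3.94 m, worse than any one step
```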

sah

Becoming Comfortable with Uncertainty

Monday, March 5th, 2012

In Hunter and Goodchild’s essay entitled “Managing Uncertainty in Spatial Databases”, one statement in particular regarding uncertainty and error in data really hit home: the idea that people don’t understand the error in their data, and, I would add, don’t ask about it either. This also recalled for me something Ana presented in her talk about LBS: that people don’t always understand or appreciate the data and technologies they work with today. This may even be true of the experts who know the complexities of the data and technology that generate the information we work with, and potentially take for granted, today.

So for me, amidst all their discussion of identifying, working with, and explaining and understanding error in data, their goal for future research stood out: “Future error research cannot stay confined to the academic sector and should be conducted jointly with the user community to reflect the need for solving management aspects of the issue”. The emphasis they place on being able to manage error is also incredibly important, and should be at the forefront of this integration of research into the user domain, particularly as the user domain does not necessarily mean experts working in the field, but can be any layperson with access to the Internet.

This argument is particularly important as they express it in their semi-ironic Figure 1, demonstrating how the experts are the ones who generally know how to deal with data and what to ask, but don’t need to ask, whereas the layperson does not know how to deal with it or what to ask, but unfortunately is the one in need of those answers. Reading Foody’s take on uncertainty further highlighted why a lack of understanding in users can be negative beyond creating questionable results: people may choose not to act at all on information they are uncertain about. As Foody mentions with the global methane sink, without an understanding of how something works, we get uncertainty in models and results, and no action is taken. This also seems a largely important point when arguing for an increased understanding not only of what uncertainties are present, but also of what we can do with uncertainty in data.

This particularly reminds me of another class I am taking, in which we are discussing uncertainty in the media with regard to climate change. Media sources depict scientific knowledge and models as wrought with uncertainty, and as the public, policy makers, and other “end-users” don’t understand how scientists work with uncertainty in data and models, they are likely to be unreceptive to results and recommendations.

Of particular interest to me, Foody also discusses the uncertainty inherent in geographical data, and the issue of zoning and the placement of administrative boundaries that can influence analysis of population data. It appears that uncertainty is just another problem where assumptions must be made explicit.

sah

LBS and User-Centric Design

Friday, March 2nd, 2012

Location-based services (LBS) are already widely used in daily life. Through them, users become both geospatial data providers and information consumers. In their 2006 paper, Jiang et al. point out that LBS should be designed in a user-centric way. As LBS is a developing research field that draws on geospatial cyberinfrastructure, information technology, social theory, and data mining, we should take a careful look at user-centric design in order to improve LBS.

Mobile technologies have contributed a lot of geospatial data to LBS. With the development of wireless networks, geospatial data, including image data, text messages, voice data, and spectral information collected with different mobile sensors, can be easily shared over the Internet. But the large data volume becomes another challenge in LBS research, especially for user-centric design. First, not all the data contributed by users are equally useful for knowledge discovery and decision-making, so data mining techniques are necessary, and they should be supported by a geospatial cyberinfrastructure that is not directly visible to end users. Second, due to the scale of the data and their temporal attributes, real-time computing is usually employed to keep the performance of LBS satisfactory. Moreover, the limited resources of mobile devices require the geospatial cyberinfrastructure at the backend to provide functionality such as data storage, statistical analysis, and visualization, to name a few examples. All of this functionality should be kept transparent to the users, which further complicates user-centric design research in LBS.
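A rough sketch of that division of labour (entirely my own illustration; the endpoint and payload are hypothetical): the device ships only a position, and everything heavy happens behind the URL, invisible to the user.

```python
import json
from urllib import request

BACKEND = "https://lbs.example.org/api"  # hypothetical backend endpoint

def nearby_recommendations(lat, lon):
    """Thin client call: the phone sends a position; data storage,
    mining, and ranking all happen server-side -- the 'transparent'
    cyberinfrastructure described above."""
    body = json.dumps({"lat": lat, "lon": lon}).encode()
    req = request.Request(BACKEND + "/recommend", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # needs a real server to run
        return json.load(resp)
```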

Another point I want to make here is the lack of standard criteria for evaluating LBS. As new technologies bring pervasive computing concepts into LBS, how to measure and evaluate the performance of LBS systems will be a great challenge for future study.

–cyberinfrastructure

Scale – more complicated than we thought

Friday, March 2nd, 2012

I realised after reading the third page of the article that there actually had been no attempt to define ‘scale’. No matter; I’m guessing from the reading that it is the name of a class of objects from the ontological perspective (did I get that right?). What I disagree with, though (with my own limited experience in remote sensing), is the statement that “pixel size is commonly used as an approximation to the sampling unit size” (page 3). People doing remote sensing are very aware of the limitations of the ‘resolution’ of their data, and know that a pixel will often contain the sample plus data about whatever surrounds it. When trying to extract the signature of an object smaller than a pixel, it is unlikely that an analyst would come to the simple conclusion that all the data in the pixel it resides in represents that object.
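The usual formal treatment of that situation is linear spectral unmixing, where the mixed pixel is modelled as a weighted sum of pure material signatures rather than assigned to a single class. A minimal sketch (the endmember spectra and observed pixel are invented):

```python
import numpy as np

# Hypothetical endmember signatures across four spectral bands.
endmembers = np.array([
    [0.05, 0.04, 0.03, 0.02],  # water
    [0.04, 0.08, 0.05, 0.40],  # vegetation
    [0.15, 0.18, 0.22, 0.30],  # soil
])

mixed_pixel = np.array([0.08, 0.10, 0.10, 0.28])  # observed spectrum

# Least-squares solve of endmembers.T @ fractions = mixed_pixel:
# the pixel becomes sub-pixel abundances, not a single class label.
fractions, *_ = np.linalg.lstsq(endmembers.T, mixed_pixel, rcond=None)
print(fractions)  # unconstrained abundances (no sum-to-one imposed)
```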


The section on multivariate relationships and other problems with changing components of scale was interesting, but a little worrying. This is one of the reasons why geographers need to exist. The article says that “prior to a field study, one should check that n provides enough power for detecting the hypothesized pattern, given the anticipated size of the [spatial lag] effect”. What do we do when n is very limited in availability anyway? It’s certainly a good idea to maximise n, but more often than not, data collection is limited by budget, time, and the number of subjects itself. There are other problems with trying to account for all the guidelines in the considerations section.
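To see what that power check actually involves (my own sketch; the effect size and test are chosen for illustration, and real spatial data would shrink the effective n further), a crude Monte Carlo estimate looks like this:

```python
import numpy as np

def power_for_n(n, effect=0.5, trials=2000, seed=1):
    """Monte Carlo power of a one-sided z-test for a mean shift.

    Simulates `trials` experiments of size n with a true effect and
    counts how often the test rejects at alpha = 0.05. Spatial
    autocorrelation would reduce the effective n, so this is optimistic.
    """
    rng = np.random.default_rng(seed)
    data = rng.normal(effect, 1.0, size=(trials, n))
    z = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n))
    return np.mean(z > 1.645)

for n in (10, 30, 100):
    print(n, round(power_for_n(n), 2))  # power climbs with sample size
```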
What really resonated with me was the conclusion that “there is not one ‘problem of scale’, but many”. What does this mean for those in the general public wishing to do geographic analysis in things such as PPGIS with VGI? What we need is more explicit documentation, in methods sections, of all the components of scale. Without this, it will be difficult to comment on the accuracy of studies.


-Peck

Experimental Design and Scale

Friday, March 2nd, 2012

Dungan et al.’s paper on scale introduced me to one new concept in particular: that some general guidelines can be followed to determine an appropriate scale for a sample or experiment, and what conclusions about processes can be drawn from the chosen scale. This concept is particularly important to me as a physical geographer, as I have lately been reading many papers for a class on global biogeochemistry in which global systems are examined and processes are inferred from experiments of different magnitudes, whether local and small or combining datasets from many areas. In the remaining readings for this course, I will be more aware of the scale at which an experiment or study was conducted, and will have a better idea of the feasibility of extrapolating or interpolating its conclusions to other scales.


-Outdoor Addict


Privacy

Friday, March 2nd, 2012

I have heard much about location-based services lately, and prior to reading this paper I had thought of them merely as Google Maps on a smartphone. It was interesting to read more about the field and the issues it faces, particularly privacy and surveillance. As my pseudonym suggests, I enjoy being outside not merely because it is nicer than being inside, but partly because nobody knows where I am when I go hiking or canoeing. As mentioned in class, few people recognize location privacy and freedom, and fewer still realize they are being lost.

Before the advent of cell phones and the Internet, if someone wanted to find out where you were going, they needed to ask you. Today, they may not need to. They could just see if they have you on Foursquare, or whether you’ve updated your Twitter location or posted your plans on Facebook. Who says they even need to be good friends to do this? You may have met them once or twice at a school activity and thought you might want to stay in touch, but they now have access to a huge amount of information about you, particularly your location. This points to the importance of privacy not only in relation to strangers but perhaps also to acquaintances; you may have met them, but do you really want them aware of your every move, literally?

Privacy with respect to strangers, institutions, governments, etc. is even harder to obtain, as these bodies do not need to know you to obtain your information. Governments and institutions may obtain it by ordering companies to hand it over. With strangers, hacking is not uncommon. It seems to me that with one’s location being updated and distributed online, or even via a phone, it would be all too easy for someone to follow that person and learn their habits, which could endanger their personal security.

The Yao article mentions that efforts are in place to create frameworks for increasing security, but that these have so far been limited, with few studies performed. I feel that much more time and energy should be put into increasing security, and that users should be educated about privacy issues from a young age, when they begin using LBS technology, so that they are aware of exactly how accessible the information they post online may be to those looking for it.

-Outdoor Addict

Disappearing Buildings!

Thursday, March 1st, 2012

The two assigned topics for Friday’s class are relevant to one another. It can be so frustrating to search for a place, store, or location on a smartphone. Depending on where you are, how far you are zoomed in, and especially the spelling, the results can show a huge amount of variance.

The other day I was searching for the Bell Sports Complex in Brossard, but my phone was incapable of finding my query. I had successfully performed the search before, but for some unknown reason, the place no longer existed. Perhaps the disappearance of the Habs’ practice facility could explain their recent woes…

Jiang’s article on LBS provokes a question: does a feature in a landscape possess different coordinates if it is a point, or different extents (if a polygon), depending on the scale? For example, at a global scale, Montreal might appear as a point feature, but at a larger scale it may instead be a polygon. Does each map of a certain scale possess the address and location of each feature? This seems redundant, but perhaps necessary right now. I can imagine that one feature could potentially hold different types of representations; when a certain type of representation is required, it could simply be called upon, instead of having repetitions within the database.
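A minimal sketch of that idea (my own illustration; the coordinates and zoom thresholds are invented): one feature record holds several representations, and a lookup picks the one appropriate to the current scale.

```python
# One feature, several representations: the renderer picks whichever
# suits the current zoom level instead of storing duplicate features.
montreal = {
    "name": "Montreal",
    "representations": [
        {"max_zoom": 8, "type": "point", "coords": (45.51, -73.57)},
        {"max_zoom": 22, "type": "polygon", "coords": [
            (45.70, -73.47), (45.41, -73.97), (45.40, -73.48),
        ]},
    ],
}

def representation_for(feature, zoom):
    """Return the coarsest representation still valid at this zoom."""
    for rep in feature["representations"]:
        if zoom <= rep["max_zoom"]:
            return rep
    return feature["representations"][-1]

print(representation_for(montreal, 5)["type"])   # point
print(representation_for(montreal, 14)["type"])  # polygon
```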

I am sometimes concerned about privacy with regard to LBS, but more impressed with how Internet searches have become more efficient with the integration of LBS. There are positives and negatives, and at this point, I’m not so concerned with people knowing my exact location. I really enjoyed how, in GEOG 201, it was mentioned that Google’s goal was to integrate all searches into a map-like interface. Four years later, I can definitely see this as a possibility. It was a little foreign to me at the time, but now I can see that almost everything has a spatial component to it.

Andrew

Thinking About Scale

Thursday, March 1st, 2012

I agree with cyberinfrastructure and henry miller in their thinking about how scale is presented in the paper by Dungan et al. The authors primarily provide examples from ecology, although they discuss and provide context from other fields as well. I too think we must pay careful attention to the field we are working in when we think about the term scale.

My first introduction to the concept came from a political ecology class I took, where scale could be used outside of just its connotations in physical space and time. Scale, in this context, could be used to think about government, human communities, academic disciplines and more. Of course, political ecologists might often be more concerned with power relationships and how these relationships flow across different scales than we are in this course.

But, since we are looking at this in the context of GIS, I thought one interesting blog post that helps to make one of the same points as the authors of this article might be worth sharing (the pictures do it for me). Scale, just in a physical sense, matters enormously when investigating landscapes or thinking about maps. As a human geographer, I also find that the authors’ points about the sample size of scale hold a lot of implications for thinking about the appropriate scale on which to study human subjects or their communities. As cyberinfrastructure notes, we should be mindful of how scale might adjust our methodologies or observations by paying attention to scale itself. But I would argue that we also need to think about the discipline we are working in (and its definitions or varying usages of scale) when we consider scale shifts and how they might affect our research.

-ClimateNYC

Motivations and LBS

Thursday, March 1st, 2012

I really enjoyed reading the article by Jiang and Yao. It was incredibly informative and set up a great framework for me to use when thinking about LBS in the future. The authors mention that “[c]lustering the users in terms of interests, behaviors and personal profiles is an important step towards a better understanding of the users” (715) and discuss grouping users based on the amount of information they desire. I think it would also have been insightful to note the different motivations behind location sharing.

A recent New York Times article claims that LBS, despite enthusiasm from investors, have yet to become very popular among users. I think being responsive to users’ motivations for location sharing will be important for LBS to gain more popularity. For instance, the designers of an app for sharing locations on social networks should understand that one of the reasons people do so is reputation management. People will share locations that present them in a positive light (e.g., popular restaurants) and keep other locations (e.g., a casino) secret. Since individuals are selective about which locations they want their friends to see, this group will not be receptive to an LBS that constantly tracks their movements. With regard to the temporal resolution of the data, individuals may not want to share details about the duration of their stay at any one location. Other motivations for location sharing could include fun and gaming, earning “badges”, and discovering new places in town (e.g., Yelp). Further, Lindqvist et al. (2011) found that discounts and special offers were not “a strong motivator for checking in” for users of Foursquare (a social location-sharing service). However, the authors note that if more businesses used the service, this could change.
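A minimal sketch of reputation management in code (my own toy example; the categories and records are invented): the user whitelists venue categories to broadcast, and timestamps are coarsened so stay duration is never revealed.

```python
SHAREABLE = {"restaurant", "museum", "park", "gym"}  # user's whitelist

def visible_checkins(checkins):
    """Keep only check-ins the user would want friends to see, and
    coarsen timestamps to the date so durations stay private."""
    return [
        {"venue": c["venue"], "day": c["time"][:10]}
        for c in checkins
        if c["category"] in SHAREABLE
    ]

log = [
    {"venue": "Schwartz's", "category": "restaurant",
     "time": "2012-03-01T19:30"},
    {"venue": "Casino de Montreal", "category": "casino",
     "time": "2012-03-01T22:10"},
]
print(visible_checkins(log))  # the casino visit is never shared
```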

Thus, understanding the motivations of naïve users may be useful for developing LBS that are more specialized and responsive to specific needs. These considerations will in turn shed useful insight on the types of privacy settings that will be most appropriate.

Ally_Nash

Lindqvist et al. (2011). I’m the Mayor of My House: Examining Why People Use foursquare – a Social-Driven Location Sharing Application.