Archive for the ‘General’ Category

Visualizing uncertainty

Thursday, April 4th, 2013

MacEachren et al.'s article provides a thorough overview of the current state of uncertainty visualization, along with its future and its challenges. It seems to be established that uncertainty visualization is more useful at the planning stage of an application than at the user stage. This makes me think back to an earlier discussion on temporal GIS, where we talked about how the important aspect of temporal GIS lay in its analytical capabilities rather than its representational ones. While I do not deny the positive effect that visualization might have on analysis, I question whether it should be the aspect of uncertainty that is given the most attention.

Two of the challenges the article proposes are developing tools to interact with depictions of uncertainty and handling multiple kinds of coexisting uncertainty. Might representation in some instances prove more trouble than it is worth? Might representational practices at times obscure data that could otherwise be understood as just data? I want to note that I am asking these questions in earnest, not rhetorically. Which, I guess, boils down to a question I have probably asked all semester: how do we evaluate what is important enough, or useful enough, to invest time in?

Wyatt

GIS&RS

Thursday, April 4th, 2013

Brivio et al.'s paper presents a case study integrating remote sensing (RS) and GIS to produce a flood map. After reviewing the methodology and results of other approaches, the paper finds the integrated method to be 96% accurate.

This speaks to the value of interdisciplinary work. While RS applications on their own proved inadequate, a mixing of disciplines gave a fairly trustworthy result. While I understand the value of highly specialized knowledge, having a baseline of capability outside of one's specific field is useful. I remember Korbin explaining in 407 that knowing even a bit of programming can help you work with programmers, as understanding the way one builds statements, as well as the general limits of a given programming language, will give you an idea of what you can ask for. The same is true for GIS/RS. Knowing how GIS works and what it might be able to do is useful for RS scholars in seeking help and collaboration, and vice versa. I think McGill's GIS program is good in this respect. I got to dip my toes into a lot of different aspects of GIS (including COMP) and figure out what I like about it. If I end up working with GIS after I graduate, I know that the interdisciplinary nature of the program will prove useful.
Wyatt

Time or Space

Thursday, April 4th, 2013

Geospatial analysis can be no better than its original inputs, much as a computer is only as smart as its user. In the field of remote sensing, this maxim may be on its way to becoming obsolete. Brivio et al. show, through a case study of a catastrophic inundation in Italy, that they can compensate for the temporal gap between the acquisition of remotely sensed data and the peak of the flood a few days earlier.

The analysis, however, was not completed with synthetic aperture radar (SAR) images alone. Had it not been for the integration of topographic data, it is unlikely that similarly successful results could have been obtained.

With any data input, temporal and spatial resolution are limiting factors. Brivio et al. highlight this by acknowledging the use of NOAA thermal infrared sensors, which have a finer temporal resolution but lack spatial resolution. Conversely, the SAR images used in the case study have a relatively higher spatial resolution but come at longer temporal intervals.

Given Brivio et al.'s successful estimation of flooding extent, it may be advantageous, where need be, to choose an input with finer spatial resolution in exchange for coarser temporal resolution, compensating for the temporal delay with additional inputs.
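To make that compensation idea concrete, here is a minimal sketch of one way a post-peak SAR water mask could be grown over a DEM. This is my own elevation-threshold stand-in, not Brivio et al.'s actual cost-distance procedure, and the input arrays are assumed:

```python
import numpy as np
from scipy import ndimage

def extend_flood(water_mask: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """Grow an observed (post-peak) SAR water mask over a DEM.

    Assumes the peak water surface reached at least the highest
    elevation seen inside the observed mask, then keeps only the
    low-lying cells connected to observed water.
    """
    water_level = dem[water_mask].max()          # crude water-surface estimate
    candidates = dem <= water_level              # everywhere low enough
    labels, _ = ndimage.label(candidates)        # connected low-lying regions
    flooded_ids = np.unique(labels[water_mask])  # regions touching observed water
    return np.isin(labels, flooded_ids) & candidates
```

The paper's flow-path analysis is more defensible hydrologically; the point here is only that a fine-spatial, coarse-temporal input plus topography can stand in for the missing acquisition.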

Break remote sensing down into its two main functions: collection and output. One will inevitably lag behind the other, but eventually the leader will be surpassed by the follower, only for it to happen again some time down the road, much like two racers attached by a rubber band.

What all of this means for GIS: eventually the output from remote sensing applications will surpass the computing power of geographic information systems. At which point the third racer, processing, will become relevant, if it isn't already.

GIS and RS: how do we account for variability?

Wednesday, April 3rd, 2013

Brivio et al.'s article "Integration of remote sensing data and GIS… for mapping of flooded areas" presents the very common process of using RS data and GIS to map flooding and flood plains. Although the article shows how the integration of RS and GIS can accurately map a flood, with a concluded accuracy of 96%, it only looks at a single event and study site. From my experience, this is not always the case: integration methods, even when identical, often vary in accuracy from one location to another. Furthermore, event duration, intensity, and geologic substrates often interfere with flood-area prediction from RS data and GIS, as variations can modify water location within minutes to hours. To clarify, one area may be flooded at certain points during the flood period and dry during others (i.e., it may transition from wet to dry to wet), which interferes with the accuracy of the RS data and the GIS prediction. Fundamentally, water changes how the surrounding environment reacts, modifying where floods occur. As floods react to the environment, areas often become flooded for only minutes and, as such, are never recognized as flooded areas, in GIS predictions, in RS data, or in human reports (although they were flooded, if only for minutes).

To better predict flood area, TWIs (topographic wetness indices) and DEMs (digital elevation models), used in conjunction with RS data, may in fact predict flooded areas better than the integration of RS data with cost-distance matrices alone. In addition, more data sets and studies would help to create a more general integration protocol and better predictive area estimates for floods. To elaborate, the techniques in the article work well on the study area but may not work on other floods; by adding data from more types of floods, the technique could be adapted to other situations. Multiple integrations with multiple data sets would also reduce error and produce greater accuracy. The big question that remains unanswered after this article, however, is: how can we account for ecosystem and flood variability within GIS and RS data sets?
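For readers who haven't met the TWI, a minimal sketch of how it is computed, assuming a flow-accumulation grid has already been produced by a flow-routing tool (that step is the hard part and is left out here):

```python
import numpy as np

def twi(flow_acc: np.ndarray, dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Topographic wetness index, TWI = ln(a / tan(slope)).

    `flow_acc` is an upslope contributing-area grid in cell counts,
    assumed precomputed; higher TWI means more prone to saturation.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)          # elevation gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))           # local slope, radians
    a = (flow_acc + 1) * cell_size                      # specific catchment area
    return np.log(a / np.tan(np.maximum(slope, 1e-6)))  # guard against flat cells
```

Comparing a TWI surface against the SAR-derived flood mask is one cheap way to sanity-check an integration result across study sites.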

C_N_Cycles

geocode all the things

Friday, March 22nd, 2013

Goldberg, Wilson, and Knoblock (2007) note how geocoding match rates are much higher in urban areas than rural ones. The authors describe two routes for alleviating this problem: geocoding to a less precise level, or including additional detail from other sources. However, both routes result in a "cartographically confounded" dataset, where accuracy becomes a function of location. Matching this idea — where urban areas, and areas that have previously been geocoded with additional information, are more accurate than previously un-geocoded rural areas — with the idea that geocoding advances to the extent of technological advances and their use, we could state that eventually we'll be able to geocode everything on Earth with good accuracy. I think of it like digital exploration — there will come a time when everything has been geocoded! Nothing left to geocode! ("Oh, you're in geography? But the world's been mapped already").
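The "less precise level" route is essentially a fallback cascade. A hypothetical sketch (the resolver functions are stand-ins, not any real geocoding API) of why the resulting accuracy varies by place:

```python
from typing import Callable, List, Optional, Tuple

Coord = Tuple[float, float]
Resolver = Callable[[str], Optional[Coord]]

def geocode_with_fallback(address: str,
                          resolvers: List[Resolver]) -> Tuple[Optional[Coord], str]:
    """Try the most precise match first, then degrade gracefully.

    Returns the coordinate and the level it came from, so the
    cartographic confounding can at least be recorded per record.
    """
    levels = ["parcel", "street interpolation", "postal code", "city centroid"]
    for level, resolve in zip(levels, resolvers):
        coord = resolve(address)
        if coord is not None:
            return coord, level
    return None, "unmatched"
```

Urban addresses tend to exit at the first level; rural ones fall through to centroids, which is exactly how accuracy ends up a function of location.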

More interesting to think about, and what AMac has already touched on, is the cultural differences in wayfinding and address structures. How can we geocode the yellow building past the big tree? How can we geocode description-laden indigenous landscapes with layers of history? Geocoding historical landscapes: how do we quantify the different levels of error involved when we can’t even quantify positional accuracy? These nuanced definitions of the very entities that are being geocoded pose a whole different array of problems to be addressed in the future.

-sidewalkballet

There Should be an App for That

Thursday, March 14th, 2013

First of all, expectations are always going to fall either short or long of reality. Rarely, if ever, does anyone get it spot on. Consider the predictions published in 1899 of what the year 2000 would look like (http://gizmodo.com/5939765/what-people-in-1899-thought-the-year-2000-would-look-like). Aside from the fact that everyone is wearing shoes and heavier-than-air human flight has been developed (in a way), they were dead wrong. The same can be said of Steinfield's opening statement that "location-based services has fallen somewhat short of expectations." They have come a long way since their infancy, and are continuing to grow. Chances are, development will slow or cease because we run out of time, not because the perfect device has been created.

Location-based services and GIS do not share an evenly balanced relationship. One side takes, while the other side makes. In this case, GIS is responsible for "offer[ing] a range of mapping services and geographically oriented content." Location-based services then take the content and distribute it accordingly. That does not mean that GIS will eventually deplete its supply of data, but location-based services will become increasingly dependent on higher quality, more diverse, and more frequently updated data. If a location-based service asks the user for information, a GIS is told what the user is interested in, regardless of where the analysis is being performed. Furthermore, GIS users have far more control over the spatial data than location-based service users. That is, until GIS software is embedded with location-based service capabilities, allowing it to track the location of its users. Here's an idea: in the event that GIS platforms become sufficiently portable that the software can be taken mobile, a location-based service could suggest shapefiles for analysis given the user's previous habits and current location, allowing them to validate their results in real time. There should be an app for that.
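A back-of-envelope sketch of that app, with everything in it (names, weights, the scoring rule) invented for illustration:

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str
    lon: float
    lat: float
    past_uses: int          # how often this user has opened the layer

def score(layer: Layer, user_lon: float, user_lat: float) -> float:
    """Nearer and more familiar layers score higher."""
    dist = math.hypot(layer.lon - user_lon, layer.lat - user_lat)
    return layer.past_uses / (1.0 + dist)

def suggest(layers: List[Layer], lon: float, lat: float, k: int = 3) -> List[Layer]:
    """The k shapefiles the hypothetical app would offer first."""
    return sorted(layers, key=lambda l: -score(l, lon, lat))[:k]
```

The ranking itself is trivial; the obstacle, as the post says, is the platform being portable enough to run the GIS in the field at all.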

AMac

Temporal Topology

Thursday, March 14th, 2013

Location, size, and proximity are just three of the many characteristics that can be attributed to a feature. As complex as they are, the topology and relationships are absolute. Before reading this article, I thought it was just a matter of applying the concept of a temporal relationship in a similar manner, and I still believe this is possible. For instance, the questions that the authors answer in Figure 5 could be answered using the equivalent of "Clip" or the Raster Calculator. It would be laborious, time-consuming, and confined to a rigid framework, but one could still answer the question, "Which areas were fallow land during the last 20 years?"
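To see why the raster-calculator route is workable but rigid, here is a minimal sketch assuming one classified land-use raster per year, stacked into a single array (the class code is made up):

```python
import numpy as np

FALLOW = 7   # hypothetical land-use class code

def ever_fallow(stack: np.ndarray) -> np.ndarray:
    """Cells that were fallow in at least one of the stacked years.

    `stack` holds one classified raster per year along axis 0,
    e.g. shape (20, rows, cols) for a 20-year record.
    """
    return (stack == FALLOW).any(axis=0)

def always_fallow(stack: np.ndarray) -> np.ndarray:
    """Cells that stayed fallow through every year."""
    return (stack == FALLOW).all(axis=0)
```

One boolean reduction replaces twenty manual overlays, but the question has to be hard-coded in advance — exactly the rigidity described above.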

The framework that Marceau et al. develop is much more dynamic: all calculations can be completed before asking any questions, as opposed to posing a specific question and then answering it after numerous clips and overlays. Generating a user-friendly spatio-temporal model would be a big step forward in answering questions in the fourth dimension, especially now, considering the ever-increasing rate at which data are collected.

As with many problems in GIS, if the data were water and the processing the pipe through which the water must pass, there will always be a limiting factor. The authors are of the opinion that spatio-temporal data set availability is lacking, but they make progress in further widening the pipe. In the coming years, I believe the limiting factor will again become predominantly the processing of the data, as spatial data are collected at an ever-increasing rate.

In other news, did anyone else have trouble with the document where every instance of the letter pair "fi" was missing?

AMac

A Temporal MAUP

Thursday, March 14th, 2013

Marceau, Guindon, Bruel, and Marois outline two major problems with the temporal model in GIS: the lack of temporal topology, and the sampling interval. The temporal interval determines the scale at which the geographic phenomenon will be studied, and consequently “may affect the perception of the pattern dynamics of the phenomenon” (p. 4). Ultimately, this leads to Marceau et al.’s explanation that some geographical changes may go undetected.

With this in mind, we can refer to the MAUP in a temporal context. Some trends will be missed depending on the borders of the interval, and false conclusions can be made if a temporal interval is too small or if the full range of years is inappropriate overall. We see this sometimes with global warming — people using global cyclical temperatures since the beginning of time to say that global warming is just another natural temperature trend because there were massive temperature fluctuations way back when. We have to be careful where we put our boundaries.
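A toy numeric illustration of the interval effect (the series is invented; the shape is the point): the same monthly signal reads as a smooth rise or as an oscillation depending on the bin width you aggregate over.

```python
import numpy as np

t = np.arange(120)                                # 120 monthly observations
series = 0.02 * t + np.sin(2 * np.pi * t / 12)    # slow trend + annual cycle

yearly = series.reshape(10, 12).mean(axis=1)      # 12-month bins: cycle averages out
semi   = series.reshape(20, 6).mean(axis=1)       # 6-month bins: cycle dominates

print(np.ptp(yearly))   # spread driven almost entirely by the trend
print(np.ptp(semi))     # spread inflated by the half-cycle swings
```

Pick the yearly bins and you conclude "steady rise"; pick the semiannual ones and you conclude "wild fluctuation". Neither is false, which is exactly the temporal MAUP.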

-sidewalkballet

visualizing time

Thursday, March 14th, 2013

Marceau et al.'s article looks at the use of temporal GIS in a study of land use in St. Eustache. While the paper shows one way we might incorporate time into GIS, it is only one fairly limited use. The paper's twelve-year-old publication date is important to consider in a fair critique, and I commend the researchers' use of available software and interfaces to move forward on temporal projects. Further, their goal appeared to be focused on the ability to conduct spatio-temporal queries rather than on representation. While the former is probably the more essential part of temporal GIS, I'd like to talk about the latter.

The question of how to represent a temporal dimension in GIS is one that seemingly continues to stump geographers, and there doesn't appear to be strong consensus on best practices. Dipto talked a bit about this below, and I agree with him that a useful area of thought in GIS is how we might rethink the way we do temporal GIS. How then might we move forward? Can a static image accurately represent time? And what of the data between recordings? How can we employ interpolation that is accountable to the purpose of our GIS?
My main question is: is it important that there be a consensus on representation? And further, what does a consensus mean to us in terms of epistemological and ontological concerns?

Wyatt

LBS, consent

Thursday, March 14th, 2013

Steinfield's article on location-based services gives a useful overview of prominent technologies, applications, and issues related to the domain. Questions of privacy and ethics are raised in the article, but its date means that the most pressing aspects of LBS and privacy had yet to arrive; indeed, it seems that Steinfield did not forecast the ubiquity of smartphones we're experiencing at present. With current context in mind, I want to briefly revisit some of the questions of privacy raised by the author.

Steinfield cites a set of principles regarding privacy: notice, choice, consent, anonymity, access, and security. With these in mind, I started thinking about what kinds of options we have in terms of communication today. While it is certainly possible to live without a cell phone, it is pretty rare and largely inconvenient, especially amongst my generation. It is expected that we be reachable at all times, and I have heard employment counsellors telling clients that a cell phone is pretty much necessary to get a job. I don't have a smartphone myself, but most of my friends do, and smartphones have lately become a less expensive option than many more basic models. But when we opt in to a smartphone, does that mean we have to (to borrow lazily from Gramsci) consent to our own domination? Is it just that, in order to be successful and able to communicate, we have to give up a large part of our privacy? Does this model of consent really respect the needs of all parties involved? Does it matter?

Wyatt

Spatial cognition, ontology, epistemology

Friday, March 1st, 2013

Tversky, in her paper, divides spatial cognition into three "spaces": the space of navigation, the space around the body, and the space of the body. Reading this article brought to mind previous discussions in our classes with respect to ontology and epistemology. While the article gave a series of examples of each type of spatial cognition, they were mostly rooted within a Western academic framework. It would be interesting to extend this discussion of spatial cognition to the ways in which it is variable.
I think that the way we think about space is highly structured by our environment and culture. That is to say that the way we order the environment is culturally located. The space of navigation is an easy place to see this difference. I remember in an earlier GIS class talking about the house numbering system in Japan. Wikipedia explains:

“In Japan and South Korea, a city is divided into small numbered zones. The houses within each zone are then labelled in the order in which they were constructed, or clockwise around the block. This system is comparable to the system of sestieri (sixths) used in Venice.”

Even this small detail has bearing on the space of navigation. While I feel confident navigating the Canadian street system, I would be lost in this different one; it would require me to think about space and spatial relationships in a new way. My spatial cognition is rooted in local understandings. In terms of GIS work, I think it is important to keep in mind the ways we think about space in our work, and how that accords with the people we are producing GIS with and for.

Wyatt

Will You Volunteer?

Thursday, February 28th, 2013

Goodchild's article does a great job of giving an overview of the history, components, and some of the uses of Volunteered Geographic Information (VGI). While he highlights the many benefits of this huge source of data, he also acknowledges some of the issues that arise with dependency on this type of data.

There are several issues in particular that I believe affect the future of the field. First of all, standardization of data is an issue when dealing with volunteered information. Contributors may not know the correct way to upload and cite data, which in turn could affect results. This issue has been addressed somewhat by volunteers who monitor the data, as well as by agencies that have outlined how to standardize certain types of data. Another issue is the ability of certain users to undermine the collective effort, which becomes ever more relevant as larger and larger databases are compiled. Although it is generally accepted that contributors are working together for the collective good, there is a possibility that some people, with ulterior motives, could undermine the effort. One example is anonymous users tampering with Wikipedia pages: Wikipedia allows any user to edit the content of its pages, and while some volunteers monitor pages for legitimacy, there remains the possibility of people propagating false information.

Overall, VGI has the ability to be a very useful field for current and future collective projects. However, there are still some issues that need to be addressed before it can be relied upon for important policy decisions.

-Victor Manuel

Living in a Virtual World

Thursday, February 28th, 2013

As I was reading through Richardson's article, I kept thinking to myself, time and time again: why aren't virtual environments an effective tool for learning the layouts of real environments? It stands to reason that if the real environment is reproduced at a digital level, a test subject should be able to gain a similar amount of knowledge about the environment as a person who walked through it in real life.

Therefore, as the authors outlined some of the limitations of a VE, I started to brainstorm how an accurate and effective VE could be constructed and displayed. One of the main issues with using a VE as a learning tool was the alignment effect: users of the VE could become disoriented, especially when climbing sets of staircases. One potential solution to this conundrum could be the creation of a sort of "immersive" virtual environment that visually surrounds the user. This could be achieved on a relatively portable scale through some sort of "full experience" headset, which would make it appear as if the user were immersed in the real environment. Overall, the paper raises very thought-provoking questions about the limitations of virtual environments, especially how they are still not a viable substitute for experiencing an environment in real life.

-Victor Manuel

On Academia, Industry and Assumed Value Neutrality

Thursday, February 28th, 2013

Reading Coleman et al.'s paper, a useful piece examining VGI participants and their motivations, brought forth, for me, one of my bigger pet peeves: the idea of value-neutrality (and proficiency) within academia. Let me explain. In the list of motivations to contribute, the authors identify three negative motivations: mischief, agenda, and malice and/or criminal intent. While the article by no means classifies these motivations as specific to VGI, their placement sets them symbolically apart from knowledge produced by experts. By positing these negative uses as illustrative of VGI's non-neutrality, I read an assertion of value neutrality into the domain of experts.
I recognize that the rigorous demands of a publishing process cannot be ignored, and they unquestionably account for a higher quality of data production within academic and professional realms. This does not mean that these realms are perfect, nor that they are without agenda. Agenda is not always explicit, and, I argue, not always even conscious. Yet the lay reader of an academic paper believes it to be value-neutral, all the while VGI is seen as never trustworthy. Let us bring this to the domain of GIS.
We trust the professionals at Google Maps and the peer-reviewed GIS paper, but not OpenStreetMap. Both producers and produsers have to make decisions when they input data. We know that in spatial representations it is easy to lie and easy to produce hierarchies; in fact, it is difficult not to. The difference between VGI and professional GIS is that people expect the former to do so and the latter to abstain. However, Google has to make money and the academic has to be published, and they can mold their data to these ends, as can their editors and publishers. I guess what I'm asking, in the end, is: where can we make a useful critique of VGI that takes into account the unreliability of all data? How do we introduce accountability into academia, industry, or VGI?

On the question of mischief, well, that one happens too. See here

Wyatt

VGI and the POWER LAW!!

Thursday, February 28th, 2013

Coleman, Georgiadou, and Labonte (2009) state that VGI causes a “more influential role [to be] assumed by the community” (p. 2). That’s great! But — is this influence level across the playing field of the “produsers” they talk about? Ross Mayfield’s Power Law of Participation says no.

[Figure: Ross Mayfield's Power Law of Participation — where do you fit?]

As produsers, we each fall somewhere along this graph, which, according to Mayfield, indicates our respective influence in the application. This law affirms one of the fundamental characteristics of informational "produsage" outlined in the article: the environment allows for fluid movement of individuals between different roles in the community. You can move along the Power Law graph whenever you want. With this in mind, we must consider who is located in each part of the curve for different participatory applications, and whether the produsers comprising the high-engagement, collaborative-intelligence end are a good representation for the application's purpose. After CGIS, power comes hand-in-hand with thoughts of who is being left behind, of who is not being represented by the high-engagement community.
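A quick simulation of what a power-law-shaped community looks like in practice (the distribution and its parameter are made up; only the heavy-tailed shape matters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-produser contribution counts drawn from a heavy-tailed Pareto
# distribution, standing in for Mayfield-style participation.
contributions = rng.pareto(a=1.2, size=10_000) + 1
contributions.sort()

top_share = contributions[-100:].sum() / contributions.sum()
print(f"top 1% of produsers account for {top_share:.0%} of contributions")
```

Run it and a tiny fraction of users supplies a large share of the content, which is why asking who sits at the high-engagement end is not an idle question.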

The article provides a succinct overview of VGI, some of its applications, categories of users and their motivations, and potential data issues. Where does VGI fall short? In a world where collaboration and public participation see increasing popularity, will we be able to rely solely on VGI in the future? True, popularity != credibility — we still need to look at the holes in the maps.


-sidewalkballet

A model for your mental map

Thursday, February 28th, 2013

Tversky et al.'s explanation of mental spaces as "built around frameworks consisting of elements and the relations among them" (p. 516) reminds me of an entity-relationship model. The mental framework we have could consist of:

- Entities, in line with Lynch's city elements and touched on in the Space of Navigation

  • Paths
  • Edges
  • Districts
  • Nodes
  • Landmarks

- Relationships to associate meaning between entities

  • Paths leading to landmarks
  • Edges surrounding districts

- Attributes distinguishing the characteristics of an entity

  • Significance of a landmark
  • Width of a path (maybe depicting how frequently it is used for travel, as opposed to actual width)

I would have liked this article to have a greater theoretical grounding within GIS. I struggle to see what cognitive maps can be used for in a GIS framework, but with this simplified schema in mind, can we translate these cognitive maps into usable data in a GIS? Maybe, but I think we would have to be very meticulous to grasp the nuances in spatial perception and cognition, and therefore the relationships between entities.
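A minimal sketch of what that translation might look like, reading the schema above literally; the class names and attribute values are mine, not the authors':

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                          # path, edge, district, node, landmark
    attributes: dict = field(default_factory=dict)

@dataclass
class Relation:
    subject: Entity
    predicate: str                     # e.g. "leads_to", "surrounds"
    obj: Entity

# One fragment of a mental map: a path leading to a significant landmark.
tree = Entity("big tree", "landmark", {"significance": "high"})
path = Entity("main path", "path", {"width": "heavily travelled"})
mental_map = [Relation(path, "leads_to", tree)]
```

Structurally this is trivial; the hard part, as the next paragraph argues, is that the meaning of each relation only surfaces in debriefing.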

Cognitive mapping methodology stresses the importance of debriefing after the maps are made. Discussions must be held in order to establish why things are placed in certain locations, why some things are deemed more important, and so on. I don't think a simply digitized cognitive map will serve much purpose (as a pedagogical tool or otherwise) without knowing the meaning behind it. Each user will have different experiences leading them to perceive different things—things that I don't think we can make much sense of without dealing with the nitty-gritty relationships between entities.

-sidewalkballet

Explorations in the Use of Augmented Reality for Geographic Visualization

Thursday, February 21st, 2013

There is a small but significant difference that could make augmented reality boom or bust when it comes to GIS. It is the same problem that architects and engineers once faced: only with the advent of computers and monitors were they able to rest their necks and sit down in a chair instead of hunching over a drafting board all day. GIS, for the most part, was never subjected to such a fate.

Augmented reality could change that. Even now, similar displays are available to the public in shopping malls and showrooms, using the same tabletop, infrared-projector method outlined in the article. What sets those visitors apart from GIS users is that they only use the display for a couple of minutes at a time. As any GIS user knows, geospatial analysis rarely takes a short amount of time.

In light of that, augmented reality will need to make the jump from top-down to heads-up display before it makes significant inroads into the industry.

One part of the methodology that left something to be desired was the need for the user to place a flash card on each section of the table where they wanted to view supplementary information. Why not just display all the data at once? If it's a matter of computing power, that is a simple fix. If, however, it is intrinsic to the software framework, it would greatly benefit the project if, instead of viewing a small section of a large map, the exocentric viewpoint were zoomed in to a larger cartographic scale (i.e., a smaller area), so the data took up the full extent of the display. After all, when's the last time you squinted at a map of the whole island of Montreal while trying to figure out how far your house is from the nearest depanneur?

AMac

Critical GIS: Ethics, a Ghost of the Past

Thursday, February 21st, 2013

Robert Lake's article "Planning and applied geography…" takes the idea of ethics transcending fields to the extreme. I believe that the type, or extent, of ethics is unique to each field of study and should not be pushed into areas where grey zones outnumber the black and white. This article seems to force the idea of practitioners as unmindful of ethics, void of knowledge of technology's impact on society. Maybe it is my laissez-faire attitude, or my ideal of "I do not care what you believe in, just do not push it on me", that is speaking, but I do not believe practitioners have forgotten ethics and their applicability to structuring research in the digital realm. I would argue that it is how ethics are applied that has changed, and this is causing the misunderstanding. For instance, equal access to GIS data is not truly flawed, as Lake implies, since the data can be altered by a user and re-published as a modified version; multiple users can take the data and modify it for themselves, creating multiple ethical data sets that correspond to each user's ideals and background.

When Lake talks about a means to an end, this is a theoretically flawed assumption, because any good researcher or user of GIS knows that there is no end, only a variable set of conclusions that lead to further elaboration of data and refinement of GIS systems. I personally consider GIS a dynamic tool for representing geographical data in a changing world. Furthermore, is it not the idea to show a variety of data from differing backgrounds during analysis, creating a mosaic of geographic data that can lead to new discoveries?

The way this article is written, and the way GIS and the application of ethical thought are paired, seems disconnected from reality. To clarify: the ethical ideas that Lake speaks of are the old way, a ghost of past thought. Ethics, I believe, are now considered in a new way, one that was never available to older generations of researchers. The ethics of how GIS is used are looser today, as a global society with a million views cannot be held to archaic, Freudian structures of how research is done and how the tools are used.

C_N_Cycles

Hedley’s AR

Thursday, February 21st, 2013

**a quick post because wordpress ate my last one**

Hedley’s piece on AR provides a clear and pretty interesting, if dated, look at augmented reality, evaluating the merits of different interface designs. Eleven years on, it is interesting to look at how far AR has come.

A quick look at Wikipedia shows a lot of different applications. While most of them are emblematic of everything that is weird about the economy these days, some piqued my interest as actually pretty valuable. One such thing was workplace apps. Wikipedia explains: "AR can help facilitate collaboration among distributed team members in a work force via conferences with real and virtual participants. AR tasks can include brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms."

While I could certainly put on my Critical GIS hat and problematize this on a number of grounds, I find it pretty exciting. I think that especially in a field like geography, the use of AR could make collaboration over space a lot more effective. Maybe I am drawn to it because it brings to mind my favorite geography term “reducing the friction of distance”; and that it does!

Wyatt

Ontology in Augmented Reality

Thursday, February 21st, 2013

Reading through the paper by Azuma, I could not help but get a little excited about all the sorts of AR applications we will see within as little as 5-10 years. I envision video games that allow the gamer to feel like they are directly in, and interacting with, an environment by projecting it in their house. I also see travellers wearing glasses and getting a tour of a foreign city without the help of a guide. However, there are obviously a few limitations to overcome before augmented reality takes these jumps. The one I want to focus on is user interface limitations.

This essentially comes down to how to display, and allow interaction with, the massive amounts of data we have access to. The amount of information we could potentially display on a pair of glasses is astronomical in my mind. But how do we go about deciding what information to display, and how to display it? To me, this comes down to an individual's ontology of space. Take my earlier tour-guide example: one person may want to know where all the museums in a city are, while another would prefer the best bars in the area. This is a bit of a trivial example; however, it highlights how it may be difficult to take this amazing technology and make it equally useful for everyone. While this is an issue today, I agree with the paper that there will likely be "significant growth" in research on these problems. It is now a matter of putting the time, effort, and money into improving the ubiquitous use of these AR systems. With the great potential for business growth, I do not see this being a problem.
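A toy version of the tour-guide example, showing how the same point-of-interest data yields different overlays for different ontologies of space (the POIs and categories are invented):

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class POI:
    name: str
    category: str

pois = [POI("Musée des beaux-arts", "museum"),
        POI("Bar St-Laurent", "bar"),
        POI("Redpath Museum", "museum")]

def ar_overlay(pois: List[POI], interests: Set[str]) -> List[str]:
    """Labels a hypothetical pair of AR glasses would render for this user."""
    return [p.name for p in pois if p.category in interests]

print(ar_overlay(pois, {"museum"}))   # the museum-goer's city
print(ar_overlay(pois, {"bar"}))      # the bar-hopper's city
```

The filtering is the easy part; eliciting each user's interests without them having to configure anything is where the real interface problem lives.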

-Geogman15