Archive for February, 2012

Evolution and Emergence of LBS

Wednesday, February 29th, 2012

The challenges of LBS are incredibly interesting, and seem to me to encompass what many of the challenges within GIS are today: the limitations of the hardware, and the limitations of the user.  I particularly liked the term “naive user”, implying to me not just that the user doesn’t understand, but that they are adaptable to the technology available.  This coincides with the idea that context is important for LBS because of how the data is displayed and how people are taking it in.  The language, the visualizations—user interface seems like a highly evolving and necessarily important field.

Previously in class we were discussing how maps are evolving to meet the needs of users, as opposed to having users bend to the will of the map, so to speak.  I see LBS as a form of Maps of the 21st Century.  Constantly evolving with, contextualizing, re-contextualizing, adapting, and shaping the world and users around it, LBS takes the qualitative data and attempts to reinterpret it in a manner accessible and useful to many users.  However, I do agree with Madskiier_JWong when they suggest the user is in many cases passive—while technologies are working to evolve to needs of the user, it would appear that on-the-ground, the user is in many cases taking what is being provided.  It will be interesting to see how the technology evolves to incorporate real-time demands of multiple users and presents them uniquely to the variety of consumers.

Looking into LBS, it seems that while it is an up-and-coming field, in practical application for the everyday user it is still quite new, just beginning to catch on through programs such as Foursquare, the friend finder for mobile phones.  To me, this lack of immediate uptake of some forms of LBS points to another important limitation the authors spoke of: privacy.  And not even necessarily that people can see where you are going and collect information from you, but that tracking is being built into devices where the default is to record your movements; it is not necessarily something you must seek out yourself.  I think as LBS gains popularity, however, the privacy issue will come to the forefront, and, as with the internet, users will become more aware of their rights, how to properly protect their privacy, and where to draw the line.


Dungan et al. on Scale

Wednesday, February 29th, 2012

I thought Dungan et al. did an excellent job demystifying the concept by clearly defining the various terms that scale has been related to. However, the idea and significance of “support” is still very unclear to me. Further, although the authors highlight the critical issue arising from a poorly defined and poorly documented notion of scale and set out solid guidelines for future studies, they do not explore possibilities of consolidating extant research based on data collected at different scales. For instance, understanding how data at different scales may be combined (perhaps by developing thresholds or procedures to scale up) is crucial for neoecologists to be able to take advantage of findings in paleoecology. Bennington et al. (2009) write:

“The greatest barrier to communicating and collaborating with neoecologists is not that data collected from extant ecosystems are necessarily different or more complete than paleoecological data but, rather, that these two data sets commonly represent or are collected at different scales. If such differences of scale can be understood and quantified, then they can be reconciled and even exploited. This will allow neoecological studies to inform the interpretation of patterns and processes in the fossil record and will permit the use of paleoecological studies to test how ecological and environmental processes have structured the biosphere over extended time intervals (National Research Council, 2005)”

Would Dungan et al. believe such consolidation of data to be possible? What rules should be followed? I would have liked to see the authors discuss the level of interaction between two scales, or the “openness” of natural systems. A process at one scale may be highly sensitive to changes occurring at a higher scale but unaffected by processes at lower scales. How finely should higher/lower scales be identified: by magnitudes of 2 or magnitudes of 4? Should the proposed guidelines be the same for phenomena that are highly open? Are there critical points at which an open system stops interacting with higher or lower scales? If not, then do natural scales even exist? Or are they only a social construction to aid our own understanding?

Bennington et al. (2009). Critical Issues of Scale in Paleoecology. Palaios, 24(1), pp. 1-4.


Dungan et al. and Scale

Sunday, February 26th, 2012

Dungan et al. (2002) explicitly define various terms related to scale. They offer a statistical approach to demonstrate that changes in the size of sampling or analysis units can affect detection of a phenomenon.

The authors’ emphasis that the issue of scale is one of choosing the correct unit size is an important one.  As geographers, we may take this distinction for granted: through procedures like georeferencing we may know that an acceptable Root Mean Square Error depends on the map’s scale, rather than aiming for the lowest RMSE possible.  It can be difficult to convey to people from other walks of life that the best answer is not the most precise one, but that it depends.  The same issue arises from a statistical angle when trying to train classifiers for remote sensing.  A too finely tuned sample spectral signature can result in overfitting, with a ‘pure’ sample being non-representative of the heterogeneity of units of the same kind.  Such overfitting may look statistically sound, but produces results that are nonsensical in reality.
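To make the RMSE point concrete, here is a minimal sketch (the control-point coordinates are invented for illustration) of how georeferencing RMSE is computed, and why a value acceptable on a 1:50,000 map could be unacceptable on a 1:1,000 plan:

```python
import math

def rmse(control_points):
    """Root Mean Square Error over georeferencing control points.

    control_points: list of ((x_map, y_map), (x_ref, y_ref)) pairs,
    i.e. where each point landed vs. where it should have landed.
    """
    sq_errors = [
        (xm - xr) ** 2 + (ym - yr) ** 2
        for (xm, ym), (xr, yr) in control_points
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical control points, in metres
points = [((100.0, 200.0), (101.0, 199.0)),
          ((300.0, 400.0), (299.5, 401.5)),
          ((500.0, 100.0), (500.5, 99.0))]
print(rmse(points))  # roughly 1.4 m of positional error
```

An RMSE of ~1.4 m is well within the line-width tolerance of a small-scale map, but would be a gross error on a large-scale engineering plan; the “best” RMSE depends on the map’s scale, not on driving the number toward zero.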

The issue of the MAUP is discussed and geostatistical methods are considered.  It is assumed that a full count or census of the extent is the control method and captures all significant patterns.  If this is the case, I wonder whether increased computational speed, data availability, and the general realm of data mining (or technological advances from geospatial cyberinfrastructure, such as parallel computing) can avoid the MAUP.  This exploratory method need only consider a large extent to find all the patterns within it (arbitrarily large extents are easy to choose).  Does having all the data, and being able to compute it all, negate the need to consider appropriate sample unit sizes?
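A toy illustration of the MAUP (the grid values are invented, not from the paper): the same underlying counts, summarised with two different sampling-unit sizes, can show a strong pattern at one scale and none at the other. Having all the data does not help if the analysis units are chosen at the wrong size.

```python
# Counts of some phenomenon in a 4 x 4 grid of small cells;
# the fine grid shows a strong checkerboard pattern.
fine = [
    [8, 0, 7, 1],
    [0, 8, 1, 7],
    [7, 1, 8, 0],
    [1, 7, 0, 8],
]

def aggregate_2x2(grid):
    """Merge each 2 x 2 block of cells into one larger unit."""
    n = len(grid)
    return [[grid[i][j] + grid[i][j + 1] +
             grid[i + 1][j] + grid[i + 1][j + 1]
             for j in range(0, n, 2)]
            for i in range(0, n, 2)]

def variance(grid):
    vals = [v for row in grid for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

coarse = aggregate_2x2(fine)
# At the coarse scale the checkerboard vanishes entirely:
# every 2 x 2 unit sums to 16, so the variance drops to zero.
print(variance(fine), variance(coarse))
```

The total count is identical at both scales; only the unit size changed, yet the pattern detectable at the fine scale is invisible at the coarse one.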


Location Based Services (LBS) and Context

Sunday, February 26th, 2012

Jiang and Yao’s (2006) paper discusses LBS as geographic data and services offered through mobile networks and the Internet to handheld devices and traditional terminals. They bring up major issues in LBS including context-based modeling, and conclude with this interesting line:

“The boundary [between GIS and LBS] could be even more blurry in the future when conventional GIS advances to invisible GIS in which GIS functionalities are embedded in tiny sensors and microprocessors”. – Jiang and Yao, 2006

This line implies, in part, a passivity on the part of the general public in determining what LB-information is served to them. Granted, users have indirect input in the form of how often they visit or search specific websites (influencing how algorithms determine your preferences), but automating the decision of what to show can hinder geographic understanding. LBS has significant power in conditioning our spatial cognition (e.g. people viewing cities as gridded and ruled by roads in North America thanks to Google Maps). The authors describe context-based modeling as a hierarchical categorization of the environment that is updated on-the-fly. It would be interesting to allow the user to assign priority to specific features of a context to optimize the use of limited computing resources; a billboard advertiser may be more interested in up-to-date information on how often buses pass by his ad than in the amount of foot traffic on the same street. This also introduces active decision-making by users, and a blend of intelligent human input and the technical ability of computers is probably more practical for context-based modeling. My main concern is that sensors typically provide point-specific data for location, and struggle with describing the space around them. Such a view can lead to tunnel-visioning or reductionism into Start-Point, End-Point. Incorporating context is needed for understanding the spatial interrelationships of features.
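The user-weighted context idea could be sketched roughly as follows. Everything here (feature names, costs, the greedy budget scheme) is my own hypothetical illustration, not anything proposed by Jiang and Yao:

```python
from typing import NamedTuple

class ContextFeature(NamedTuple):
    name: str
    update_cost: int    # e.g. sensor polls per minute (hypothetical unit)
    user_priority: int  # higher = more important to this user

def plan_updates(features, budget):
    """Greedily spend a limited update budget on the features
    the user has ranked highest."""
    chosen = []
    for f in sorted(features, key=lambda f: f.user_priority, reverse=True):
        if f.update_cost <= budget:
            chosen.append(f.name)
            budget -= f.update_cost
    return chosen

# The billboard advertiser from the example above: bus frequency
# matters more to them than foot traffic on the same street.
features = [
    ContextFeature("bus_frequency", update_cost=4, user_priority=10),
    ContextFeature("foot_traffic",  update_cost=6, user_priority=3),
    ContextFeature("weather",       update_cost=1, user_priority=5),
]
print(plan_updates(features, budget=5))
```

With a budget of 5, the sketch updates bus frequency and weather but drops foot traffic, exactly the kind of user-driven prioritization of a context hierarchy suggested above.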

As a side note, the extent to which users can search for specific things is likely to increase exponentially as our world becomes increasingly sensor-filled. This brings up the debate of how to appropriately restrict and limit access to LBS.   

– Madskiier_JWong

Underground spatial orientation

Friday, February 17th, 2012

When reading the article, I contemplated the similarities and differences among the three spaces of spatial cognition discussed by Tversky et al. (1999).  Last week’s geovisualization lecture, particularly Harry Beck’s 1933 London Underground map, came to mind: how one’s navigation alters once you head down the stairs to the sub-terrestrial world.  It is perplexing to ponder how our spatial cognitive spaces are synced and utilized to shift from one environment to another.  It feels as though space ceases to exist once I enter a tunnel.  Yet, due to signs and landmarks, the destination is eventually reached.  In this case, the idea of North is discarded.  We rely solely on signs already made, or on past experience.

In contrast to Harry Beck’s, David Shrigley’s London Underground map is an interesting representation of our mind space once we enter the subway system.  It is an homage to Beck’s standard map, signifying the transition from confusion to clarity.  The space of navigation, the space surrounding the body, and the space of the body in an underground subway setting appear to be more restricted than in other environments.  This space restriction is produced by existing infrastructure, which limits one’s freedom of exploration.  Although I will note that a similar argument could be made about sidewalks.  The difference between the subway scenario and the sidewalk scenario, however, is that in the subway we are visually restricted: underground, Vancouver’s mountains are no longer visible and cannot act as a compass, leading us North.

-henry miller

GCI: Quality over quantity

Friday, February 17th, 2012

Yang et al. (2010) have helped clarify the complexity of Geospatial Cyberinfrastructure (GCI). However, the field covers so much ground that, at times, I found it difficult to grasp. The definition and scope of GCI is very ambitious: to utilize data resources, network protocols, computing platforms, and services that create communities. These communities comprise people, information, and technological tools, which are then brought together to “perform science or other data-rich applications in this information-driven world” (264). Furthermore, the objectives are vast, with responsibility placed on many variables: social heterogeneity, data analysis, the semantic web, geospatial middleware, citizen-based sciences, geospatial cloud computing, collaboration initiatives. With so much going on simultaneously, it should not come as a surprise that organization is one of the main challenges in GCI. Perhaps covering less ground may lead to higher-quality progress.

Despite the many obstacles that GCI must overcome, the advancement of Location-Based Services (LBS) (especially mobile technology) and digital earths has shown the potential for GCI. They are largely ubiquitous due to their user-friendly interfaces. Along with such developments, the attractive end-user interface component is significant. However, it should primarily be informative, not just pretty. “The geospatial Semantic Web is a vision in which locations and LBS are fully understood by machines” (268). I believe this vision should be extended to humans also understanding (as close to “fully” as possible) the meaning of the geospatial Semantic Web.

Qwiki is a platform that represents both the semantic web and information processing. A combination of intelligent agents (primary) and human participation (secondary), it is a dynamic, visually emphasized version of Wikipedia. Here we have a conglomeration of different areas, supporting the multi-disciplinary aspect that GCI aims to represent, and also the challenge of “how to best present data and information from multiple resources” (268). Qwiki has the potential to help organize enormous amounts of geospatial data from different domains, resources, applications, and cultural backgrounds. That is, if the data becomes digitized. Even though I advocate for quality, I believe quantity in terms of data organization is key, as it is the first step towards knowledge building: data to information to knowledge. Organized data, in turn, prepares for advances in other areas of GCI to meet the proposed objectives.

-henry miller

Mash-ups and Interoperability

Friday, February 17th, 2012

Under the section title Enabling Technologies: System Integration Architectures of the Yang et al. paper, mashups and plug-and-play techniques are presented in a very positive light, as fairly simple but wonderful things. Words such as “ideal”, “easily integrated”, “foundation” and “new” assist in creating this sentiment, which to me seems to downplay some of the serious issues of heterogeneity of components and interoperability. These issues are by no means solved, though it is true some advances have been made in increasing interoperability between platforms and systems. One need only look at the multitude of APIs that can be plugged into a webpage to know some interoperability is taking place. Nonetheless, as discussed last class in regard to geovisualization, there remains a large amount of heterogeneity in data, in software, and across different GCIs. It cannot be forgotten or emphasized enough that regardless of how far we have come in managing some of this data, there will always be more of it in many forms, and new forms of data will continue to emerge.


-Outdoor Addict


How does one create mental maps with Google Street View to Google Maps?

Friday, February 17th, 2012

Tversky et al. discuss the concept of making a mental map of space around the body whereby a person seems to have three essential axes of reference from their body. From these, the individual then identifies objects more quickly in relation to first the head to feet axis, then the back to front axis and finally the left to right axis. This is the Spatial Framework Theory and was the best theory to explain the observed response times of participants in the study.

What I would like to know is how this theory holds up when the study is performed not by reading about one’s surroundings and then answering questions but by using images to form the reference frame. I say this while thinking of Google Street View where you can see what surrounds you in all directions and build a mental map of what is located around your virtual location. However, if a researcher were to perform the same experiment as above, where you look in one direction first and create your frame of reference to that, how easy is it to then turn in a different direction and answer questions about what is around you using directional terms? Since the head/feet axis is not present, would the left/right axis have faster response times than the back/front axis or would it remain as in the first study?

Additionally, if you were to go to a random (urban) location with Google Street View, how would you mentally situate your understanding of your location in a larger context? How do you carry the image you made of the area around you from one place on a street to the next zoom level out, when you no longer see the visual features of the street, just the street name? This may be an issue with Google’s visualization (you lose a sense of the direction in which you were looking when you zoom out of Street View), but personally I find it confusing to try to make sense of what I was looking at in a particular direction and then to lose that directionality when zooming out. In relation to spatial cognition, I wonder how my brain (maybe just mine, if no one else has ever noticed this) loses the mental map it had of that streetscape when the viewpoint is changed from in-the-scene to top-down with straight lines and street names. Shouldn’t it be easier to visualize the street with straight lines and names if the brain already schematizes the street view into these things (“nodes and links, landmarks and the paths among them, elements and their spatial relations” (517))? If it is an issue of the different perspectives, how can Google work to reduce this confusion with knowledge of these spatial cognition findings?

-Outdoor Addict

GCI and blindness

Friday, February 17th, 2012

The kind of power in the type of GCI we can expect in the future is hard to grasp. Sensors automatically collect petabytes of real-time streaming data that gets sent to computers, which harness the computational power of grid computing. Despite the large amounts of data, access and retrieval of information would be easy because computers, equipped with proper semantics, would know what we wanted. We would be able to deal with the most complex issues through multidimensional analysis and achieve excellent interoperability between applications. On one hand, this future is exciting, but on the other, it is also a little daunting to think about how complex systems will become and how well people will be equipped to question machine outputs.

In part, this is an extension of ClimateNYC’s post. I too am concerned about the opacity of GCIs. Will continuous data feeds really make the world more understandable? The shift in the way science will be done with GCIs will have to be accompanied by an equally educated population, which should include end users as much as developers of the technologies involved. Otherwise, researchers using this technology will not even stand a chance to question the results they obtain. Users must be able to benefit from the amazing potential of GCI as well as be able to consistently negotiate its terms of development and the mechanics behind the technologies. The idea of GCI as a black box is a scary one; if we accept it without question, technology (sensors, applications, computational analysis) will be a veil between us and natural phenomena. If we lose the ability to question the outcomes we obtain from machines, we will be dominated by technology without even knowing it. Therefore, instead of trying to “relieve scientists and decision makers of the technical details of a GCI”, I believe the opposite must be true. Educating the greater population about the mechanisms behind such complex systems is necessary if we do not want to go blind.

– Ally_Nash

Spatial Cognition and Semantics

Friday, February 17th, 2012

To understand geography and turn geographical observations into knowledge and meaning, we need to grasp how we (and our bodies) form spatial relationships with the Earth. This is where spatial cognition can help us. I thought the three types of spaces the authors described in the article were very interesting and relevant to semantics and ontology building. I am especially intrigued by the fact that our body is our first compass. This makes sense because our body is what gives us physical form and thus allows us to interact spatially with other physical entities, which in turn is why we care about geography at all. If the way we understand our surroundings begins with our bodies, then the experiences our body has with physical entities must play a part in how we talk about them. For instance, maybe the reason different cultures use different prepositions to describe the same action (e.g. “across the lake” as “go over the lake” or “pass through the lake”) stems from different experiences, which subject the body to different positions with respect to the lake. We use different words because the way we understand the world differs depending on the type of space we are using. Or, in other words, “in each case, schematization reflects the typical kinds of interactions that human beings have with their surroundings” (522).

– Ally_Nash

What can’t GCIs do?

Friday, February 17th, 2012

Coming to this reading with no prior awareness of cyberinfrastructures, I found Yang et al.’s article on Geospatial Cyberinfrastructures pretty overwhelming. Yang et al. do such an incredible job of condensing huge amounts of information in a way that is fairly easy to follow (despite the multitude of acronyms) that, admittedly, I don’t even know where to start in tackling it. What I found most interesting was the outline of the development of the semantic web and the data life cycle.

Throughout all readings in this course so far there has been mention of semantic differences in data, and the need to “facilitate the automatic identification, utilization, and integration of datasets into operational systems,” (272). With GCIs encompassing data from a huge array of different sources and different users (the Virtual Organisations are also really neat), the development of Web 3.0 is incredibly pressing in order to make sense of all this data and ensure interoperability.

I also really liked the section on Supporting the life cycle from data to knowledge. It is important to note that data is not information is not knowledge—it must be processed and synthesised in order to achieve a greater understanding of what the data represents.

Readings like Yang et al. really send home the point that this field is overarching and is growing at an incredible rate, and it’s really exciting to watch.

-sidewalk ballet


Cyber-infrastructure and Uncertainty

Thursday, February 16th, 2012

Chaowei Yang et al. very ambitiously discuss the development of geospatial cyberinfrastructure, including some of the challenges confronting this process. One of the aspects I found most interesting was the potential for increasing amounts of error being introduced as more data are generated by an ever growing number of users. The facilitation of a system which can “collect, archive, share, analyze, visualize, and simulate data, information, and knowledge” increases the accessibility of data to a much wider array of people. While this is beneficial in terms of promoting research, it also allows for a great deal of uncertainty to be introduced, as there are no clear standards for communicating this inherent component of data. Users not familiar with this notion – who are likely also those increasingly gaining access to this infrastructure – may further this problem.

Since the quality of this data may be questionable, ClimateNYC equates the development of GCIs to black boxes, and I think this has severe implications for the future of GIS. Madskiier_JWong, conversely, argues that scientists have much to gain from being able to easily share data with people in other fields, but I would be cautious with this. I am not questioning the notion that sharing data facilitates the production of knowledge. I am, however, concerned that if error and uncertainty are significantly present and not well communicated, they can lead to severe divides and unnecessary arguments within fields of study. We all know how easily maps and data can be manipulated, for example, to convince a viewer of a point of view, so perhaps issues such as communicating error need to be better addressed as cyberinfrastructures are developed. From this, perhaps data will not only be more freely available, but it will also be more reliable.

– jeremy

Those silly mountains…

Thursday, February 16th, 2012

I can relate to Andrew’s comment about Montrealers accepting a false north, which I find very interesting. In Vancouver, everyone knows that the local mountains are north; however, they aren’t actually. Despite this, it’s close enough for the purposes of traveling around the city, and this is an important aspect of mental maps. By no means do I intend to flare up the ‘what is a mountain’ debate, but if a person’s mental map incorrectly associates a geographic feature with a compass reading in order to improve their navigational abilities, what does that say about the accuracy of their mental map?

Perhaps, as Tversky et al. illustrate, this notion reflects upon the nature of how we schematize, a process which accepts a loss of detail to “allow for efficient memory storage, rapid inference-making and integration from multiple media.” On the other hand, there may be more to this issue, as our ability to incorporate an individual’s cognitive map into a GIS is another problem that arises. How can we display and compare the landmarks, nodes, paths, etc. of cognitive maps when they are all, for example, represented at varying scales? To go back a step, how can we even be sure that the process of drawing a mental map isn’t completely fraught with error? These ideas relate to the varying ontologies that exist and trying to reconcile the differences between them, which – as we all know – is an extremely complicated task.

On a somewhat related note, everyone who is a map lover/artist (which I’m sure all of you are) should check out:

– jeremy

Tversky et al and Micro-Spatial Cognition

Thursday, February 16th, 2012

Tversky et al.’s article on spatial cognition categorizes our understanding of space into three inter-related categories: the outside, navigable world; the space around our bodies; and the space consisting of our bodies. At an individual level, it is axiomatically clear that we do not conceive of ourselves or our surroundings as a 2-dimensional space. It is also interesting to examine how detailed spatial cognition of our body can enable better physical movement.

There are clear examples of the disconnect between our cognition of our body and representing it in a two-dimensional form. Ask any person about the difficulty they had in trying to draw the hand of a person holding an object without a visual reference. This extends beyond the technical expertise of being able to draw the fingers in perspective, since it is difficult to simply imagine how the fingers position themselves in relation to each other in two dimensions while remaining ‘realistic’ to our minds. It can be inferred that we cognize the space of our bodies and our nearby surroundings in three dimensions (and thus rich in detail) because we have so much experience in these local matters. Does this raise an implication, then, that our 2-D conceptualizations of the ‘far outside world’ are relatively poor in detail? By extension, do our two-dimensional maps reinforce a poorly performing cognition of space?

On another point, I would argue that athletes have a heightened awareness of the space of their surroundings and of their body.  The important functional relevance that the authors identified is likely to be stronger and more varied with athletes, since they have a more frequent and wider range of motion.  Speaking from personal experience (I do martial arts), being able to mentally picture where my feet will land, where my arms must move, and how much space I require has helped me execute complex moves.  Being able to orient yourself mentally is key in sports such as figure skating, and allows people to move better.  Geography is in your body!

–  Madskiier_JWong

A Model for Mental Mapping?

Thursday, February 16th, 2012

Tversky et al.’s explanation of mental spaces as “built around frameworks consisting of elements and the relations among them” (516) reminds me of an entity-relationship model. The mental framework we have could consist of:

– Entities in line with Lynch’s city elements, and touched on in the Space of Navigation

  • Paths
  • Edges
  • Districts
  • Nodes
  • Landmarks

– Relationships to associate meaning between entities

  • Paths leading to landmarks
  • Edges surrounding districts

–  Attributes distinguishing the characteristics of an entity

  • Significance of a landmark
  • Width of a path (maybe depicting how frequently it is used for travel opposed to actual width)

I agree with other posts that this article needed a greater theoretical grounding within GIS. I struggle to see what cognitive maps can be used for, but with this simplified schema in mind, can we translate these cognitive maps into usable data in a GIS? Maybe, but I think we would have to be very meticulous to grasp the nuances in spatial perception and cognition, and therefore the relationships between entities.
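The schema above could be sketched as a toy data model. The specific entities, predicates, and attribute values below are my own illustrative inventions, not anything from Tversky et al. or Lynch:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str            # "path", "edge", "district", "node", or "landmark"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    subject: Entity
    predicate: str       # e.g. "leads_to", "surrounds"
    obj: Entity

# Entities, following Lynch's city elements
main_st = Entity("Main St", "path", {"perceived_width": "wide"})
library = Entity("Library", "landmark", {"significance": "high"})
old_town = Entity("Old Town", "district")
river = Entity("Riverbank", "edge")

# Relationships carrying the meaning between entities
mental_map = [
    Relationship(main_st, "leads_to", library),
    Relationship(river, "surrounds", old_town),
]

for r in mental_map:
    print(f"{r.subject.name} --{r.predicate}--> {r.obj.name}")
```

Note how much of the map's meaning lives in the relationships and the subjective attribute values (perceived width, significance), which is exactly the nuance that a simply digitized drawing, without debriefing, would fail to capture.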

Cognitive mapping methodology stresses the importance of debriefing after the maps are made. Discussions must be held in order to begin to establish reasoning regarding why what things are placed in certain locations, why things are deemed to have greater importance, etc. I don’t think that a simply digitized cognitive map will serve much purpose (as a pedagogical tool or otherwise) without knowing the meaning behind it. Each user will have different experiences leading them to perceive different things—things that I don’t think we can make much sense of without dealing with the nitty-gritty relationships of entities.

-sidewalk ballet

The Space of Navigation

Thursday, February 16th, 2012

The effects of the different elements, such as frame of reference (the hierarchical encoding example), were all very interesting. What would be interesting to know, though, is whether certain of these elements are more dominant at certain scales. If we could establish that everyone thought at the provincial scale in the way the hierarchical example suggests, then we could develop a framework for how to communicate and teach geographic information.

One problem I had with the article, though, was that it came very much from a cog. sci. (or similar) approach, rather than a geography/geospatial view. What effect do the statements on our mental representations of space have on how we should do things in the future? Are the different biases observed in the different studies necessarily a bad thing? Should we shape our data output, such as local maps, to meet these methods of storing information in the brain, or should we stick to something that is as ‘accurate’ a representation of the ground surface as possible? At the very least, more knowledge of how humans perceive their environment should help us determine what we are doing wrong when presenting geographic information, an area it would have been interesting to see addressed in this article.


GCI – a system of systems

Thursday, February 16th, 2012

Sometimes I get the feeling that people view a GCI as a single entity/unit. It isn’t. It is, to misquote Yang, a ‘system of systems’. This makes GCIs seem very flexible and able to do anything, as improvements in any of the component systems will advance the GCI forward. However, the challenges are still immense. GCIs may be used as a way of sharing, analysing, and storing data, but they are still limited by the rules we have, such as the semantic framework for sharing data. This may make a GCI start to look a little cumbersome, at least when you view it as a single entity. This is something I am not sure of: whether GCIs are adaptive to changing environments such as ontologies.

Future changes were also interesting. Virtual Organisations could start becoming more permanent as enabling technologies decrease physical limitations. GCIs, though, are still relatively closed environments, and may benefit from more open sharing. This is what is expected to happen with the shift to ‘geospatial cloud computing’. However, the article doesn’t really define geospatial cloud computing. What’s the difference? Aren’t we already partly there?



Someone Call INTERINTERPOL! I’m Being Boarded by Pirates!

Thursday, February 16th, 2012

Cyberinfrastructures are a little outside of my comfort zone, so I’m not sure I completely understand how they work, but I like how Yang places them in a real-world context.

It seems like the purpose of a cyberinfrastructure is to get more done, in less time, in more than one place. Great concept, right? I agree that this sounds great, but I also agree with some of the criticisms that Yang brings up. What if the type of research that you are performing is illegal in some countries, but not in others? The other day at work I heard some pretty well-known DJs talking about piracy. The first DJ was concerned that he would be fined for having pirated material on his computer. The other DJ brushed it off and stated that he would simply move his server to “somewhere like Thailand or Botswana” where it isn’t illegal. What happens if a computer stores some of that information in a country where it is illegal? Does the person responsible for the content get charged back home even if said law does not exist in that country, or does he have to be extradited first? Does the country where it isn’t illegal have to do anything at all? I personally don’t think so, but it brings up some valid concerns. Will an international cyber-law enforcement agency come to fruition at some point soon? INTERINTERPOL (International Internet Criminal Police Organization), maybe.

On another note, but still somewhat related to international frontiers, I like how Yang continues the debate regarding ontologies. What becomes an official ontological language? How are we supposed to accommodate foreign languages in the web 3.0? Will many key words and ontological definitions from several languages all be linked to one parcel of data labelled “10010101” for example?
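To make the linking idea concrete, here is a minimal sketch of what a multilingual keyword index might look like. The identifier “10010101”, the terms, and the `lookup` helper are all illustrative assumptions, not any actual ontology standard:

```python
# Hypothetical sketch: one parcel of data keyed by an opaque identifier,
# with keywords from several languages all resolving to the same record.
ontology_index = {
    "river": "10010101",
    "fleuve": "10010101",   # French
    "rio": "10010101",      # Spanish/Portuguese
    "fluss": "10010101",    # German
}

def lookup(term):
    """Resolve a keyword in any indexed language to its data parcel ID,
    or None if the term is not indexed."""
    return ontology_index.get(term.lower())
```

In this toy version, `lookup("fleuve")` and `lookup("river")` both return the same parcel ID; the hard part, of course, is agreeing on which terms in which languages actually mean the same thing.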

I don’t know much about programming or cyberinfrastructures but I get the impression that the issues that Yang brings to the surface are increasingly important. I don’t think that the solutions are impossible, I’m simply curious to know how the laws form around cloud servers and cyberinfrastructures.



Using GCI Without Thinking

Thursday, February 16th, 2012

Chaowei Yang and the other authors of “Geospatial Cyberinfrastructure: Past, Present and Future” believe that the evolution of GCI “will produce platforms for geospatial science domains and communities to better conduct research and development and to better collect data, access data, analyze data, model and simulate phenomena, visualize data and information, and produce knowledge” (264). However, to borrow from Bruno Latour in Science in Action, you can’t help but wonder how much of all of this might just end up being a black box for many disciplines that utilize geospatial data but don’t question how it’s presented and processed. Could this be the quietest revolution in GIS?

The idea of GCIs as black boxes should come as no surprise. Large numbers of people utilize technology that “brings people, information, and computational tools together to perform science” without questioning the underlying “data resources, network protocols, computing platforms and computational services” (267) that help them attain their goals. By using the term black box, I emphasize the meaning Latour intends: that it serves a concept or purpose that most people don’t investigate, beyond accepting that those in the GCI field have questioned it and made sure it functions.

While I agree with SAH about the exciting potential “a large infrastructure supporting free global geospatial data” holds, I wonder how many people utilizing this network will truly appreciate it. Many people working in academia, no doubt. However, users who don’t possess such an academic background or connections to this community might also interact with and contribute to this data source even as GCI remains a black box. While the democratic aspects of this are exciting, I also wonder how we might filter so much data and use it most productively (and ensure its accuracy) in light of the authors’ questions about how best to deal with real-time data.


Spatial Cognition and the Elastic Brain

Thursday, February 16th, 2012

Last week I stepped out of the St. Joseph metro station at a stop I don’t normally exit. I gathered my bearings by assessing what surrounded me and began to walk to my final destination. After a second, I stopped to check the map on my phone (a reaction I now have—my iPhone apparently rules my life) and realized I was walking in the wrong direction. “Thank iPhone!” I thought to myself, and plugged my headphones in and began the trek towards Saint Laurent. After 15 minutes I stopped and realized I had been walking in the wrong direction. I was right (or left??) all along!

I checked my phone and realized that with the new update, my phone now orients the Google maps in relation to where I’m pointing. Assuming that the map on my phone pointed north, when I was actually facing south, I ended up guiding myself in the wrong direction.
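The mix-up above boils down to one small calculation. Here is a minimal sketch of the correction, assuming bearings measured in degrees clockwise; the function name and interface are my own invention, not anything from the Google Maps app:

```python
def screen_to_true_bearing(screen_bearing, device_heading):
    """Convert a direction read off a heading-oriented map (degrees
    clockwise from the top of the screen) into a true compass bearing.

    On a north-up map, device_heading is 0 and the two are identical;
    on a heading-up map, the top of the screen points wherever the
    phone points, so the heading must be added back in."""
    return (screen_bearing + device_heading) % 360
```

Walking “up” the screen (0°) while actually facing south (a device heading of 180°) gives a true bearing of 180°, not north—which is exactly how I ended up 15 minutes in the wrong direction.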

I really enjoyed the article about spatial cognition. It’s fascinating how we instinctively orient our world “North.” I believe, though, that this instinct is not actually instinct; I believe it’s a product of our upbringing. Montrealer North is never actually north. Collectively, Montrealers accept this false north; we are, however, aware that our conception of North is in reality more north-west. The human brain is extremely elastic! It has the power to re-orient itself after wearing “inverted goggles” (perceptual adaptation) and has the power to re-wire language and thought after a stroke. I imagine that, no matter the convention, the brain has the ability to adapt to such changes.

If only it could turn off the compass on my iPhone…