Archive for the ‘geographic information systems’ Category

The development potential of LBS

Thursday, March 1st, 2012

I liked the article’s overview of LBS – what it consists of, how it differs from a regular GIS, and what kinds of data analysis can be done. There is also a good overview of the issues with using LBS, such as interoperability. Interoperability, I think, is even more important than the paper emphasizes. Although location-based services aren’t restricted to expensive high-end devices like smartphones (the article doesn’t even explicitly mention LBS in the cellular phone market), it is still a fact that certain kinds of phones (i.e. smartphones) can benefit more from LBS than others (feature phones and ‘dumb’ phones). This brings to mind a video I watched a couple of days ago in which Eric Schmidt of Google gave his views on future developments in the internet and telecoms (not directly relevant to LBS). He made it clear that, while there are many users with fancy smartphones out there, there are still 5 billion people without them, or running on older-generation hardware (and networks). I think this is another factor holding back the development of LBS. The user base may be large, but it is not homogeneous, as there is a whole range of devices out there (2G to 4G/LTE). Eric Schmidt gave his opinion that the divide between those who have cutting-edge devices and those who don’t will persist for quite a while longer. If this is the case, LBS will have a tough time being deployed globally, as developers will have to design their systems for many different devices with different operating systems, different processing power, and different capabilities. It may be that LBSes for mobile phones will have to be split into hardware-specific categories, but since hardware availability varies with geographic location, there will be a large portion of the world where a certain service is unavailable.
In the final part of his talk, Eric Schmidt answered questions, and stated something along the lines of ‘the smartphone of today is the feature phone of tomorrow’. It certainly seems to be the case in the mobile phone market, where certain features are becoming more commonplace and processing power and memory are constantly increasing. If this holds throughout the world, then we should be optimistic about the spread of LBS globally.

-Peck

Scales in spatial statistical analysis: other definitions, other fields

Thursday, March 1st, 2012

Dungan et al. (2002) are detailed and clear in presenting scales in the field of ecology. Observation scales, scales of ecological phenomena, and scales used in spatial statistical analysis are thoroughly explained, along with their limitations. The three categories that can utilize spatial scale are the studied phenomenon, “the spatial units or sampling units used to acquire information about the phenomenon, and the analysis of the data” (627). When addressing the definitions of phenomena, observations and analysis, we should note that “some of these definitions overlap one another or are ambiguous” (629). In particular, how would we go about determining explicit definitions? Given one of the examples in the article, what would be an explicit definition of grain? The article could have mentioned ways to reach a consensus on the aforementioned definitions. However, the authors do raise awareness of issues regarding the role of scale in spatial statistical analysis that have been ignored by the literature, and note that “resolution involves more than observation grain alone” (630). They further state that ecologists wrongly utilize scale terminology when applying ‘large scale’ to large phenomena and ‘small scale’ to small phenomena, observations or analysis. Dungan et al.’s solution is to replace the word ‘scale’ with ‘extent’. Will such changes affect ecologists’ “arbitrary decisions” in their selection of sampling and analysis units? (638)

While the authors do indeed provide a balanced view of scale in spatial statistical analysis by delineating its advantages and limitations, I am curious about scale’s effect on other fields, beyond ecology. Dungan et al. mention that “many ecological attributes can be expected to average linearly…” (631). Although the linear outcome may work for ecologists, how will other fields fare when their attributes result in non-linear outcomes? How will the data be analyzed? What will be the impact of the modifiable areal unit problem (MAUP)? Outside the field of ecology, research on complex networks is moving in the direction of escaping the limitations of scale, where the generative models created aim to produce scale-free networks.

-henry miller

LBS, compatibility, and user-friendliness

Thursday, March 1st, 2012

One of the aspects of the article that I found most interesting (and that relates to my GIScience topic of error and uncertainty) is the mismatching of geospatial data collected by various individuals or agencies. This also relates to the lecture on spatial cognition, as the data being generated by native and non-native users is greatly influenced by the ways in which spatial knowledge has been gained, whether consciously or sub-consciously. In order to foster LBS activities such as predicting locations, this information will likely need to be compatible, which seems just as challenging a task as creating universal ontologies.

Catering LBS to the needs of various users is also an interesting and challenging subject, especially as applications and platforms are constrained by features such as small cell phone screens. For various applications, for example, the article notes that a wide array of layers and sources is needed to provide the required information. Also challenging is deciding how to model this information in a user-friendly manner. The article notes that including landmarks, for instance, may be more beneficial than information such as street names. As has been noted in previous posts, the notion of differing needs with regard to presenting information on a screen is also pertinent when designing systems for disabled individuals. Since even using a map-based application may be difficult, text-based descriptions may be required instead.

As a final note, Jiang et al. discussed combining the functionality of geometric and symbolic models to include the advantages of both in an LBS. Perhaps this idea is similar to designing road signs, for example, where efforts are made to allow those who may not speak the native language, or who are illiterate, to navigate their way. As the article notes, no assumptions can be made about a user’s prior knowledge of GIS or spatial environments, which may include very basic notions such as literacy. As GIS students, it is easy for us to overlook or take for granted the knowledge we have gained through our education, so being able to understand the needs of others will certainly be a challenge.

– jeremy

LBS and Naive Users (A.K.A. Me)

Thursday, March 1st, 2012

I must say I appreciated Bin Jiang and Xiaobai Yao’s article “Location-based services and GIS in perspective” a great deal for the myriad ways it helped to explain LBS technology in light of GIScience’s research agenda, particularly given how ubiquitous these technologies are in our everyday lives right now. The key section, to me at least, is where the authors argue that these technologies tend to be “generally oriented to naive users” (719) because potentially everyone might be a user some day. In a nutshell, that naive user is me, but with one important caveat: I do not own an iPhone, tablet, iPad or any other generally accepted form of LBS technology. While I’d like to think I’m relatively sophisticated in using modern, online technology, I simply can’t bring myself to buy any kind of tablet because I’m not able to distinguish how using it would be different from using my computer. Generally, as cell phones go, I’m that guy who walks into the store and demands the cheapest, most unbreakable phone I can get. Perhaps I’m old, but a phone should be a phone and nothing more, by my way of thinking.

So I found this paradigm of the naive user engaging with LBS technology particularly interesting when the authors got into discussing how research into “spatial ontologies”  and “geographic representation” could be closely tied into work on LBS platforms. The authors approach it from the perspective that such research can help to “set up a common ontology for LBS for knowledge sharing among diverse users” (718). This might be one direction such a flow could be viewed: previously developed ontologies of geographic space shaping the manner in which LBS networks/devices display such information. But, I would think such a flow might move in the opposite direction too, in that many LBS users might influence definitions of geographic space according to how they use their devices. As the authors note, aspects of spatial cognition will be very important to LBS device design (719). Or, put simply, naive folks like me will want simple ontological definitions so they can understand/use these devices better.

But, let’s remember to put this in perspective. Not everyone uses these devices the same way, and people like me have taken themselves out of the game entirely. So, how do designers define ontologies that fit all of the diverse users around the globe? I know interoperability remains an important idea, as we discussed with Renee’s talk about ontologies, but at what cost? Take this example: a little while ago, a friend took me on a kayaking trip around the Boston, MA harbor islands. He did not bring a map. After a long day, we found ourselves still on the water in the dark, searching for the island where we could camp. We knew we were close, but his iPhone was on the blink – at least as far as its star charts, GIS, and map technologies were concerned. Needless to say, he was not pleased. For my part, I found it amusing he thought such devices would work on the ocean (albeit still within 5 miles of shore).

Perhaps just a technological infrastructure issue – but the point is still the same. If we’re thinking about defining standards for the information these devices display, what happens if our standards disenfranchise kayakers? More to the point, what about users in Africa who find landmarks such as a neighbor’s field more useful than street grids with names? The authors touch on this idea, but how do we allow naive users to generate data and give input on the ways these devices work as they become ever more commonplace across the globe?

-ClimateNYC

DISCLAIMER: My parents do both own complicated, new-fangled cell phones that allow many of these LBS functions. And, yes, I have used them many times and helped my parents figure out how to use them – since I somehow am a bit more adept than they.


Clarify “Scale” in Different Research Domains

Thursday, March 1st, 2012

In Dungan et al. (2002), the definition of the terminology “scale” is examined across spatial research domains. The authors explore “scale” with respect to the phenomenon being studied, the spatial or sampling unit, and data analysis. Within different research domains, they find different synonyms for “scale”, including extent, grain, resolution, lag, support and cartographic ratio. Case studies are provided to illustrate the different definitions of “scale” in different research topics. The Modifiable Areal Unit Problem (MAUP) is identified, and the authors present several suggestions to avoid it.

Most of the examples in this paper come from ecology studies, so the diversity of “scale” is not fully explored. The authors mention “scale” in remote sensing, and refer to it as a synonym of “resolution”. But “resolution” in remote sensing involves spatial resolution, spectral resolution and temporal resolution. In image data analysis, the word “scale” is more often used as statistical scale, which is related to the analysis unit rather than the observational or sampling unit. For geospatial database design and implementation, the words “scale” or “large-scale” have significantly different meanings: large-scale data do not only imply huge volume, but also heterogeneity (e.g., different spectral and spatiotemporal resolutions) and complexity (e.g., data with different formats, noise rates, and distributed storage). Therefore, I agree with the authors of this paper that “scale” should be specified with respect to the context in which it is used.

Different scales give us different approaches to study our targets. By changing the scale, we actually change our methodology and observation methods. Therefore, more attention should be given to “scale” itself, not just the definition.

–cyberinfrastructure

Evolution and Emergence of LBS

Wednesday, February 29th, 2012

The challenges of LBS are incredibly interesting, and seem to me to encompass what many of the challenges within GIS are today: the limitations of the hardware, and the limitations of the user.  I particularly liked the term “naive user”, implying to me not just that the user doesn’t understand, but that they are adaptable to the technology available.  This coincides with the idea that context is important for LBS because of how the data is displayed and how people are taking it in.  The language, the visualizations—user interface seems like a highly evolving and necessarily important field.

Previously in class we were discussing how maps are evolving to meet the needs of users, as opposed to having users bend to the will of the map, so to speak.  I see LBS as a form of Maps of the 21st Century.  Constantly evolving with, contextualizing, re-contextualizing, adapting, and shaping the world and users around it, LBS takes the qualitative data and attempts to reinterpret it in a manner accessible and useful to many users.  However, I do agree with Madskiier_JWong when they suggest the user is in many cases passive—while technologies are working to evolve to needs of the user, it would appear that on-the-ground, the user is in many cases taking what is being provided.  It will be interesting to see how the technology evolves to incorporate real-time demands of multiple users and presents them uniquely to the variety of consumers.

Looking into LBS, it seems that while it is an upcoming field, in practical application for the everyday user it is still quite new, and just beginning to catch on with people through programs such as foursquare, the friend finder for mobile phones.  To me, this lack of immediate uptake of some forms of LBS points to another important limitation the authors spoke of: privacy.  And not even necessarily that people can see where you are going and collect information from you, but that tracking is being built into devices where the default is to record your movements; it is not necessarily something you must seek out yourself.  I think as LBS gains popularity, however, the privacy issue will come to the forefront, and as with the internet, users will become more aware of their rights, how to properly protect their privacy, and where to draw the line.

sah

Dungan et al. on Scale

Wednesday, February 29th, 2012

I thought Dungan et al. did an excellent job demystifying the concept of scale by clearly defining the various terms that scale has been related to. However, the idea and significance of “support” is still very unclear to me. Further, although the authors highlight the critical issues arising from a poorly defined and poorly documented notion of scale and set out solid guidelines for future studies, they do not explore possibilities of consolidating extant research based on data collected at different scales. For instance, understanding how data at different scales may be combined (perhaps by developing thresholds or procedures to scale up) is crucial for ecologists/neoecologists to be able to take advantage of findings in paleoecology. Bennington et al. (2009) write:

“The greatest barrier to communicating and collaborating with neoecologists is not that data collected from extant ecosystems are necessarily different or more complete than paleoecological data but, rather, that these two data sets commonly represent or are collected at different scales. If such differences of scale can be understood and quantified, then they can be reconciled and even exploited. This will allow neoecological studies to inform the interpretation of patterns and processes in the fossil record and will permit the use of paleoecological studies to test how ecological and environmental processes have structured the biosphere over extended time intervals (National Research Council, 2005)”

Would Dungan et al. believe such consolidation of data to be possible? What rules should be followed? I would have liked to see the authors discuss the level of interaction between two scales, or the “openness” of natural systems. A process at one scale may be highly sensitive to changes occurring at a higher scale but unaffected by processes at lower scales. How finely should higher/lower scales be identified; by magnitudes of 2 or magnitudes of 4? Should the proposed guidelines be the same for phenomena that are highly open? Are there critical points at which an open system stops interacting with higher or lower scales? If not, then do natural scales even exist? Or are they only a social construction to aid our own understanding?

Bennington et al. (2009). Critical Issues of Scale in Paleoecology. Palaios, 24(1), pp. 1-4.

Ally_Nash

Dungan et al. and Scale

Sunday, February 26th, 2012

Dungan et al. (2002) explicitly define various terms related to scale. They offer a statistical approach to demonstrate that changes in the size of sampling or analysis units can affect detection of a phenomenon.

The authors’ emphasis that the issue of scale is one of choosing the correct unit size is an important one. As geographers, we may take this distinction for granted, as we may know through procedures like georeferencing that an acceptable Root Mean Square Error depends on the map’s scale, rather than aiming for the lowest RMSE. It can be difficult to convey to people from other walks of life that the best answer is not the most precise one, but that it depends. Coming from a statistical approach, this is often the case when trying to train classifiers for remote sensing. Having a finely tuned sample spectral signature can result in overfitting, with a ‘pure’ sample being non-representative of the heterogeneity of units of the same kind. This overfitting may look statistically accurate, but it produces results that are nonsensical in reality.
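
The overfitting risk described above can be sketched with a toy example: suppose the ‘forest’ class is really a mixture of two sub-signatures (say conifer-like and deciduous-like stands), and a classifier is given either a single ‘pure’ conifer signature or the class mean. All band values, thresholds, and class names below are invented for illustration; this is a minimal sketch, not a real remote sensing workflow.

```python
import random

random.seed(0)

# Hypothetical two-band reflectances: "forest" is heterogeneous,
# a 50/50 mixture of conifer-like and deciduous-like sub-signatures.
def forest_pixel():
    if random.random() < 0.5:
        return (random.gauss(0.40, 0.03), random.gauss(0.25, 0.03))  # conifer-like
    return (random.gauss(0.60, 0.03), random.gauss(0.45, 0.03))      # deciduous-like

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# "Pure" signature: finely tuned to one conifer stand (the overfit case).
pure_sig = (0.40, 0.25)
# Representative signature: the mean of many mixed training samples.
train = [forest_pixel() for _ in range(400)]
mean_sig = tuple(sum(band) / len(train) for band in zip(*train))

def is_forest(pixel, signature, threshold=0.20):
    # Accept a pixel as forest only if it lies near the signature.
    return dist(pixel, signature) < threshold

test = [forest_pixel() for _ in range(400)]
recall_pure = sum(is_forest(p, pure_sig) for p in test) / len(test)
recall_mean = sum(is_forest(p, mean_sig) for p in test) / len(test)
```

With these invented numbers, the ‘pure’ signature recognizes only about half of the forest pixels (it rejects the deciduous-like half), while the mean signature recognizes nearly all of them: statistically ‘clean’ training data, nonsensical map.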

The issue of the MAUP is discussed and geostatistical methods are considered. It is assumed that a full count or census of the extent is the control method and captures all significant patterns. If this is the case, I wonder if increased computational speed, data, and the general realm of data mining (or technological advances from geospatial cyberinfrastructure, such as parallel computing) can avoid the MAUP. This exploratory method need only consider a large extent to find all the patterns within it (arbitrarily large extents are easy to choose). Does having all the data and being able to compute it all negate the need to consider appropriate sample unit sizes?
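
Whether brute-force computation escapes the MAUP can be probed with a toy experiment: aggregate the same synthetic point data into zones of different sizes and watch a simple statistic (here, a correlation) change with the zoning. All of the data below is invented, and this is a sketch of the effect, not a geostatistical analysis.

```python
import random

random.seed(1)

# 2000 synthetic points on a 100 x 100 study area. Variables a and b share
# a weak west-to-east trend plus independent noise.
points = []
for _ in range(2000):
    x, y = random.uniform(0, 100), random.uniform(0, 100)
    a = x / 100 + random.gauss(0, 0.5)
    b = x / 100 + random.gauss(0, 0.5)
    points.append((x, y, a, b))

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((p - mu) * (q - mv) for p, q in zip(u, v))
    su = sum((p - mu) ** 2 for p in u) ** 0.5
    sv = sum((q - mv) ** 2 for q in v) ** 0.5
    return cov / (su * sv)

def zoned_corr(cell_size):
    # Average a and b within square zones, then correlate the zone means.
    zones = {}
    for x, y, a, b in points:
        key = (int(x // cell_size), int(y // cell_size))
        sa, sb, n = zones.get(key, (0.0, 0.0, 0))
        zones[key] = (sa + a, sb + b, n + 1)
    means_a = [sa / n for sa, sb, n in zones.values()]
    means_b = [sb / n for sa, sb, n in zones.values()]
    return corr(means_a, means_b)

r_points = corr([p[2] for p in points], [p[3] for p in points])
r_fine = zoned_corr(10)    # 100 small zones
r_coarse = zoned_corr(50)  # 4 large zones
```

The point-level correlation is weak, but it strengthens steadily as the zones grow, because aggregation averages away the noise: the same census of points gives three different answers. Having all the data does not help once the analysis itself is carried out on aggregated units.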

-Madskiier_JWong

Location Based Services (LBS) and Context

Sunday, February 26th, 2012

Jiang and Yao’s (2006) paper discusses LBS as geographic data and services offered through mobile networks and the Internet to handheld devices and traditional terminals. They bring up major issues in LBS including context-based modeling, and conclude with this interesting line:

“The boundary [between GIS and LBS] could be even more blurry in the future when conventional GIS advances to invisible GIS in which GIS functionalities are embedded in tiny sensors and microprocessors”. – Jiang and Yao, 2006

This line implies in part a passivity by the general public in determining what LB-information is served to them. Granted, users have indirect input in the form of how often they visit or search specific websites (influencing how algorithms determine their preferences), but automating the decision of what to show can hinder geographic understanding. LBS has significant power in conditioning our spatial cognition (e.g. people viewing North American cities as gridded and ruled by roads thanks to Google Maps). The authors describe context-based modeling as a hierarchical categorization of the environment that is updated on the fly. It would be interesting to allow the user to assign priority to specific features of a context to optimize the use of limited computing resources; a billboard advertiser may be more interested in up-to-date information on how often buses pass by his ad than in the amount of foot traffic on the same street. This also introduces active decision-making by users, and context-based modeling is probably more practical as a blend of intelligent human input and the technical ability of computers. My main concern is that sensors typically provide point-specific data for location and struggle to describe the space around them. Such a view can lead to tunnel vision, or reductionism into start-point and end-point. Incorporating context is needed for understanding the spatial interrelationships of features.
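
The user-assigned priorities suggested above could look something like the following sketch, where a device refreshes only the context layers that fit within a limited update budget, ranked by the user’s priority per unit of cost. All layer names, priorities, and costs are invented for illustration.

```python
# Hypothetical context layers: (name, user-assigned priority, update cost).
layers = [
    ("bus_passes_per_hour", 9, 3),   # the billboard advertiser's favourite
    ("foot_traffic",        2, 4),
    ("weather",             5, 1),
    ("nearby_landmarks",    7, 2),
]

def plan_updates(layers, budget):
    # Greedily refresh the layers with the best priority-to-cost ratio
    # until the computing budget for this update cycle is spent.
    ranked = sorted(layers, key=lambda layer: layer[1] / layer[2], reverse=True)
    chosen, spent = [], 0
    for name, priority, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen
```

With a budget of 6, this plan refreshes the weather, landmark, and bus-count layers and defers foot traffic; lowering the budget to 3 drops the bus counts as well, so the user’s stated priorities directly shape what the device spends its resources on.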

As a side note, the extent to which users can search for specific things is likely to increase exponentially as our world becomes increasingly sensor-filled. This brings up the debate of how to appropriately restrict and limit access to LBS.   

– Madskiier_JWong

Underground spatial orientation

Friday, February 17th, 2012

When reading the article, I contemplated the similarities and differences among the three spaces of spatial cognition discussed by Tversky et al. (1999). Last week’s geovisualization lecture, particularly Harry Beck’s 1933 London Underground map, came to mind: how one’s navigation alters once you head down the stairs to the sub-terrestrial world. It is perplexing to ponder how our spatial cognitive spaces are synced and utilized to shift from one environment to another. It feels as though space ceases to exist once I enter a tunnel. Yet, due to signs and landmarks, the destination is eventually reached. In this case, the idea of North is discarded. We rely solely on signs already made, or on past experience.

In contrast to Harry Beck’s, David Shrigley’s London Underground map is an interesting representation of our mind’s space once we enter the subway system. It is an homage to Beck’s standard map, signifying the transition from confusion to clarity. The space of navigation, the space surrounding the body, and the space of the body in an underground subway setting appear to be more restricted than in other environments. This restriction is produced by existing infrastructure, which limits one’s freedom of exploration, although a similar argument could be made about sidewalks. The difference between the subway scenario and the sidewalk scenario, however, is that underground we are visually restricted: Vancouver’s mountains, for example, are no longer visible and cannot act as a compass, leading us North.

-henry miller

GCI: Quality over quantity

Friday, February 17th, 2012

Yang et al. (2010) have helped clarify the complexity of Geospatial Cyberinfrastructure (GCI). However, the field covers so much ground that, at times, I found it difficult to grasp. The definition and scope of GCI is very, very ambitious: to utilize data resources, network protocols, computing platforms, and services that create communities. These communities comprise people, information, and technological tools, which are brought together to “perform science or other data-rich applications in this information-driven world” (264). Furthermore, the objectives are vast, with responsibility placed on many variables: social heterogeneity, data analysis, the semantic web, geospatial middleware, citizen-based sciences, geospatial cloud computing, and collaboration initiatives. With so much going on simultaneously, it should not come as a surprise that organization is one of the main challenges in GCI. Perhaps covering less ground may lead to higher-quality progress.

Despite the many obstacles that GCI must overcome, the advancement of Location-Based Services (LBS) (especially mobile technology) and digital earths has shown the potential of GCI. They are largely ubiquitous due to their user-friendly interfaces. Along with such developments, the attractive end-user interface component is significant; however, it should primarily be informative, not just pretty. “The geospatial Semantic Web is a vision in which locations and LBS are fully understood by machines” (268). I believe this vision should be extended to humans also understanding (as close to “fully” as possible) the meaning of the geospatial Semantic Web.

Qwiki is a platform that represents both the semantic web and information processing. A combination of intelligent agents (primary) and human participation (secondary), it is a dynamic, visually emphasized version of Wikipedia. Here we have a conglomeration of different areas, supporting the multi-disciplinary aspect that GCI aims to represent, as well as the challenge of “how to best present data and information from multiple resources” (268). Qwiki has the potential to help organize enormous amounts of geospatial data from different domains, resources, applications, and cultural backgrounds; that is, if the data becomes digitized. Even though I advocate for quality, I believe quantity in terms of data organization is key, as it is the first step towards knowledge building: data to information to knowledge. Organized data, in turn, prepares for advances in other areas of GCI to meet the proposed objectives.

-henry miller

Mash-ups and Interoperability

Friday, February 17th, 2012

Under the section titled Enabling Technologies: System Integration Architectures of the Yang et al. paper, mashups and plug-and-play techniques are presented as fairly simple but wonderful things, in a very positive light. Words such as “ideal”, “easily integrated”, “foundation” and “new” help create this sentiment, which to me seems to downplay some of the serious issues of heterogeneity of components and interoperability. These issues are by no means solved, though it is true some advances have been made in increasing interoperability between platforms and systems. One need only look at the multitude of APIs that can be plugged into a webpage to know some interoperability is taking place. Nonetheless, as discussed last class in regard to geovisualization, there remains a large amount of heterogeneity in data, in software and across different GCIs. It cannot be forgotten or emphasized enough that regardless of how far we have come in managing some of this data, there will always be more of it in many forms, and new forms of data will continue to emerge.

-Outdoor Addict

How does one create mental maps with Google Street View to Google Maps?

Friday, February 17th, 2012

Tversky et al. discuss the concept of making a mental map of space around the body whereby a person seems to have three essential axes of reference from their body. From these, the individual then identifies objects more quickly in relation to first the head to feet axis, then the back to front axis and finally the left to right axis. This is the Spatial Framework Theory and was the best theory to explain the observed response times of participants in the study.

What I would like to know is how this theory holds up when the study is performed not by reading about one’s surroundings and then answering questions but by using images to form the reference frame. I say this while thinking of Google Street View where you can see what surrounds you in all directions and build a mental map of what is located around your virtual location. However, if a researcher were to perform the same experiment as above, where you look in one direction first and create your frame of reference to that, how easy is it to then turn in a different direction and answer questions about what is around you using directional terms? Since the head/feet axis is not present, would the left/right axis have faster response times than the back/front axis or would it remain as in the first study?

Additionally, if you were to go to a random (urban) location with Google Street View, how would you mentally situate your understanding of your location in a larger context? How do you take the image you made of the area around you from one place on a street to the next zoom level out, when you no longer see the visual features on the street, just the street name? It may be an issue with Google’s visualization that you lose a sense of the direction in which you were looking when you zoom out of Street View, but personally I find it confusing to try to make sense of what I was looking at in a particular direction and then to lose that directionality when zooming out. In relation to spatial cognition, I wonder how my brain (maybe just mine, if no one else has ever noticed this) loses the mental map it had of that streetscape when the viewpoint is changed from within the scene to top-down with straight lines and street names. Shouldn’t it be easier to visualize the street with straight lines and names if the brain already schematizes the street view into these things (“nodes and links, landmarks and the paths among them, elements and their spatial relations” (517))? If it is an issue of the different perspectives, how can Google work to reduce this confusion with knowledge of these spatial cognition findings?

-Outdoor Addict

GCI and blindness

Friday, February 17th, 2012

The kind of power in the type of GCI we can expect in the future is hard to grasp. Sensors automatically collect petabytes of real-time streaming data that get sent to computers, which harness the computational power of grid computing. Despite the large amounts of data, access and retrieval of information would be easy because computers, equipped with proper semantics, would know what we wanted. We would be able to deal with the most complex issues through multidimensional analysis and achieve excellent interoperability between applications. On one hand, this future is exciting; on the other, it is also a little daunting to think about how complex these systems will become and how well people will be equipped to question machine outputs.

In part, this is an extension of ClimateNYC’s post. I too am concerned about the opacity of GCIs. Will continuous data feeds really make the world more understandable? The shift in the way science will be done with GCIs will have to be accompanied by an equally educated population, which should include end users as much as developers of the technologies involved. Otherwise, researchers using this technology will not even stand a chance of questioning the results they obtain. Users must be able to benefit from the amazing potential of GCI as well as be able to consistently negotiate its terms of development and the mechanics behind the technologies. The idea of GCI as a black box is a scary one; if we accept it without question, technology (sensors, applications, computational analysis) will be a veil between us and natural phenomena. If we lose the ability to question the outcomes we obtain from machines, we will be dominated by technology without even knowing it. Therefore, instead of trying to “relieve scientists and decision makers of the technical details of a GCI”, I believe the opposite must be true. Educating the greater population about the mechanisms behind such complex systems is necessary if we do not want to go blind.

– Ally_Nash

Spatial Cognition and Semantics

Friday, February 17th, 2012

To understand geography and turn geographical observations into knowledge and meaning, we need to grasp how we (and our bodies) form spatial relationships with the Earth. This is where spatial cognition can help us. I thought the three types of spaces the authors described in the article were very interesting and relevant to semantics and ontology building. I am especially intrigued by the idea that our body is our first compass. This makes sense because our body is what gives us physical form and thus allows us to interact spatially with other physical entities, which, in turn, is why we care about geography at all. If the way we understand our surroundings begins with our bodies, then the experiences our body has with physical entities must play a part in how we talk about them. For instance, maybe the reason different cultures use different prepositions to describe the same action (e.g. “across the lake” as “go over the lake” or “pass through the lake”) stems from different experiences that subject the body to different positions with respect to the lake. We use different words because the way we understand the world differs depending on the type of space we are using. Or, in other words, “in each case, schematization reflects the typical kinds of interactions that human beings have with their surroundings” (522).

– Ally_Nash

What can’t GCIs do?

Friday, February 17th, 2012

Coming to the reading with no prior awareness of cyberinfrastructures, I found Yang et al.’s article on geospatial cyberinfrastructures pretty overwhelming. Yang et al. do such an incredible job of condensing huge amounts of information in a way that is fairly easy to follow (despite the multitude of acronyms) that, admittedly, I don’t even know where to start in tackling it. What I found most interesting was the outline of the development of the semantic web and the data life cycle.

Throughout all readings in this course so far there has been mention of semantic differences in data, and the need to “facilitate the automatic identification, utilization, and integration of datasets into operational systems,” (272). With GCIs encompassing data from a huge array of different sources and different users (the Virtual Organisations are also really neat), the development of Web 3.0 is incredibly pressing in order to make sense of all this data and ensure interoperability.
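The semantic mismatch that the quoted line is getting at can be illustrated with a toy sketch (this is my own illustration, not anything from Yang et al., and every dataset and field name in it is made up): two sources describe the same phenomenon under different attribute names, and a small vocabulary crosswalk, the kind of agreement a shared ontology would formalize, lets a program merge them automatically.

```python
# Toy illustration of semantic data integration: two hypothetical
# datasets use different field names for the same concepts.
dataset_a = [{"stn_id": "A1", "temp_c": 21.5}]
dataset_b = [{"station": "B7", "temperature": 19.0}]

# A crosswalk maps each source's vocabulary onto one shared schema.
crosswalk = {
    "stn_id": "station_id", "station": "station_id",
    "temp_c": "temperature_c", "temperature": "temperature_c",
}

def harmonize(record):
    """Rename every field of a record into the shared vocabulary."""
    return {crosswalk[key]: value for key, value in record.items()}

merged = [harmonize(r) for r in dataset_a + dataset_b]
print(merged[0])  # {'station_id': 'A1', 'temperature_c': 21.5}
```

A real GCI would of course need far richer machinery (formal ontologies, unit conversion, provenance), but the sketch shows why machine-readable semantics are a precondition for the “automatic identification, utilization, and integration of datasets” the article calls for.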

I also really liked the section on Supporting the life cycle from data to knowledge. It is important to note that data is not information is not knowledge—it must be processed and synthesised in order to achieve a greater understanding of what the data represents.

Readings like Yang et al. really drive home the point that this field is overarching and growing at an incredible rate, and it’s really exciting to watch.

-sidewalk ballet


Cyber-infrastructure and Uncertainty

Thursday, February 16th, 2012

Chaowei Yang et al. very ambitiously discuss the development of geospatial cyberinfrastructure, including some of the challenges confronting this process. One of the aspects I found most interesting was the potential for increasing amounts of error being introduced as more data are generated by an ever-growing number of users. The facilitation of a system that can “collect, archive, share, analyze, visualize, and simulate data, information, and knowledge” increases the accessibility of data to a much wider array of people. While this is beneficial in terms of promoting research, it also allows a great deal of uncertainty to be introduced, as there are no clear standards for communicating this inherent component of data. Users not familiar with this notion – who are likely also those increasingly gaining access to this infrastructure – may compound the problem.

Since the quality of this data may be questionable, ClimateNYC equates the development of GCIs to black boxes, and I think this has severe implications for the future of GIS. Madskiier_JWong, conversely, argues that scientists have much to gain from being able to easily share data with people in other fields, but I would be cautious with this. I am not questioning the notion that sharing data facilitates the production of knowledge. I am, however, concerned that if error and uncertainty are significantly present and not well communicated, they can lead to severe divides and unnecessary arguments within fields of study. We all know how easily maps and data can be manipulated, for example, to convince a viewer of a point of view, so perhaps issues such as communicating error need to be better addressed as cyberinfrastructures are developed. From this, perhaps data will not only be more freely available, but also more reliable.

– jeremy

Those silly mountains…

Thursday, February 16th, 2012

I can relate to Andrew’s comment about Montrealers accepting a false north, which I find very interesting. In Vancouver, everyone knows that the local mountains are north; however, they actually aren’t. Despite this, it’s close enough for the purposes of travelling around the city, and this is an important aspect of mental maps. By no means do I intend to flare up the ‘what is a mountain’ debate, but if a person’s mental map incorrectly associates a geographic feature with a compass reading in order to improve their navigational abilities, what does that say about the accuracy of their mental map?

Perhaps, as Tversky et al. illustrate, this notion reflects upon the nature of how we schematize, a process which accepts a loss of detail to “allow for efficient memory storage, rapid inference-making and integration from multiple media.” On the other hand, there may be more to this issue, as our ability to incorporate an individual’s cognitive map into a GIS is another problem that arises. How can we display and compare the landmarks, nodes, paths, etc. of cognitive maps when they are all, for example, represented at varying scales? To go back a step, how can we even be sure that the process of drawing a mental map isn’t completely fraught with error? These ideas relate to the varying ontologies that exist and trying to reconcile the differences between them, which – as we all know – is an extremely complicated task.

On a somewhat related note, everyone who is a map lover/artist (which I’m sure all of you are) should check out:

http://spacingmontreal.ca/2012/02/14/attention-all-map-lovers-spacings-creative-mapping-contest/

– jeremy

Tversky et al and Micro-Spatial Cognition

Thursday, February 16th, 2012

Tversky et al.’s article on spatial cognition categorizes our understanding of space into three inter-related categories: the outside, navigable world, the space around our bodies, and the space consisting of our bodies. At an individual level, it is axiomatically clear that we do not conceive of ourselves or our surroundings as a two-dimensional space. It is also interesting to examine how detailed spatial cognition of our bodies can enable better physical movement.

There are clear examples of the disconnect between our cognition of our body and its representation in two-dimensional form. Ask anyone about the difficulty of drawing a hand holding an object without a visual reference. This extends beyond the technical expertise of drawing fingers in perspective: it is difficult simply to imagine how the fingers position themselves in relation to each other in two dimensions while remaining ‘realistic’ to our minds. It can be inferred that we cognize the space of our bodies and our nearby surroundings in three dimensions (and thus in rich detail) because we have so much experience in these local matters. Does this imply that our 2-D conceptualizations of the ‘far outside world’ are relatively poor in detail? By extension, do our two-dimensional maps reinforce a poorly performing cognition of space?

On another point, I would argue that athletes have a heightened awareness of the space of their surroundings and of their bodies. The important functional relevance that the authors identified is likely to be stronger and more varied with athletes, since they have a more frequent and wider range of motion. Speaking from personal experience (I do martial arts), being able to mentally picture where my feet will land, where my arms must move, and how much space I require has helped me execute complex moves. Being able to orient yourself mentally is key in sports such as figure skating, and allows people to move better. Geography is in your body!

–  Madskiier_JWong

A Model for Mental Mapping?

Thursday, February 16th, 2012

Tversky et al.’s explanation of mental spaces as “built around frameworks consisting of elements and the relations among them” (516) reminds me of an entity-relationship model. The mental framework we have could consist of:

– Entities in line with Lynch’s city elements, and touched on in the Space of Navigation

  • Paths
  • Edges
  • Districts
  • Nodes
  • Landmarks

– Relationships to associate meaning between entities

  • Paths leading to landmarks
  • Edges surrounding districts

–  Attributes distinguishing the characteristics of an entity

  • Significance of a landmark
  • Width of a path (maybe depicting how frequently it is used for travel opposed to actual width)
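To make the entity-relationship idea concrete, here is a minimal sketch of the schema above in Python (the class and instance names are my own hypothetical choices, not anything from Tversky et al. or Lynch): Lynch-style elements become entities, attributes become fields, and relationships become references between objects.

```python
from dataclasses import dataclass, field

@dataclass
class Landmark:
    """Entity: a Lynch-style landmark with a significance attribute."""
    name: str
    significance: int  # subjective importance, e.g. 1 (low) to 5 (high)

@dataclass
class Path:
    """Entity: a path whose width stands in for travel frequency."""
    name: str
    width: int
    # Relationship: "paths lead to landmarks"
    leads_to: list = field(default_factory=list)

# A tiny cognitive map built from these entities (hypothetical example)
oratory = Landmark("St. Joseph's Oratory", significance=5)
main_street = Path("Queen Mary Rd", width=3, leads_to=[oratory])

# Querying the relationship: which landmarks does this path reach?
reachable = [lm.name for lm in main_street.leads_to]
print(reachable)  # ["St. Joseph's Oratory"]
```

Even this toy version shows where the hard part lies: the attributes (significance, perceived width) encode each individual’s perception, so two people’s maps of the same city would populate the same schema with very different values.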

I agree with other posts that this article needed a greater theoretical grounding within GIS. I struggle to see what cognitive maps can be used for, but with this simplified schema in mind, can we translate these cognitive maps into usable data in a GIS? Maybe, but I think we would have to be very meticulous to grasp the nuances in spatial perception and cognition, and therefore the relationships between entities.

Cognitive mapping methodology stresses the importance of debriefing after the maps are made. Discussions must be held in order to begin to establish reasoning regarding why things are placed in certain locations, why some things are deemed to have greater importance, etc. I don’t think that a merely digitized cognitive map will serve much purpose (as a pedagogical tool or otherwise) without knowing the meaning behind it. Each user will have different experiences leading them to perceive different things—things that I don’t think we can make much sense of without dealing with the nitty-gritty relationships between entities.

-sidewalk ballet