Archive for the ‘506’ Category

Spatial Cognition and Semantics

Friday, February 17th, 2012

To understand geography and turn geographical observations into knowledge and meaning, we need to grasp how we (and our bodies) form spatial relationships with the Earth. This is where spatial cognition can help us. I thought the three types of spaces the authors described in the article were very interesting and relevant to semantics and ontology building. I am especially intrigued by the fact that our body is our first compass. This makes sense because our body is what gives us physical form and thus allows us to interact spatially with other physical entities, which, in turn, is why we care about geography at all. If the way we understand our surroundings begins with our bodies, then the experiences our body has with physical entities must play a part in how we talk about them. For instance, maybe the reason different cultures use different prepositions to describe the same action (e.g. “across the lake” as “go over the lake” or “pass through the lake”) stems from different experiences that subject the body to different positions with respect to the lake. We use different words because the way we understand the world differs depending on the type of space we are using. Or, in other words, “in each case, schematization reflects the typical kinds of interactions that human beings have with their surroundings” (522).

– Ally_Nash

What can’t GCIs do?

Friday, February 17th, 2012

Coming to the reading with no prior awareness of cyberinfrastructures, I found Yang et al.’s article on geospatial cyberinfrastructures pretty overwhelming. Yang et al. do such an incredible job of condensing huge amounts of information in a way that is fairly easy to follow (despite the multitude of acronyms) that, admittedly, I don’t even know where to start in tackling it. What I found most interesting was the outline of the development of the semantic web and the data life cycle.

Throughout all readings in this course so far there has been mention of semantic differences in data, and the need to “facilitate the automatic identification, utilization, and integration of datasets into operational systems,” (272). With GCIs encompassing data from a huge array of different sources and different users (the Virtual Organisations are also really neat), the development of Web 3.0 is incredibly pressing in order to make sense of all this data and ensure interoperability.

I also really liked the section on Supporting the life cycle from data to knowledge. It is important to note that data is not information is not knowledge—it must be processed and synthesised in order to achieve a greater understanding of what the data represents.

Readings like Yang et al. really drive home the point that this field is overarching and growing at an incredible rate, and it’s really exciting to watch.

-sidewalk ballet

 

Cyber-infrastructure and Uncertainty

Thursday, February 16th, 2012

Chaowei Yang et al. very ambitiously discuss the development of geospatial cyberinfrastructure, including some of the challenges confronting this process. One of the aspects I found most interesting was the potential for increasing amounts of error being introduced as more data are generated by an ever-growing number of users. The facilitation of a system that can “collect, archive, share, analyze, visualize, and simulate data, information, and knowledge” increases the accessibility of data to a much wider array of people. While this is beneficial in terms of promoting research, it also allows for a great deal of uncertainty to be introduced, as there are no clear standards for communicating this inherent component of data. Users not familiar with this notion – who are likely also those increasingly gaining access to this infrastructure – may further this problem.

Since the quality of this data may be questionable, ClimateNYC equates the development of GCIs to black boxes, and I think this has severe implications for the future of GIS. Madskiier_JWong, conversely, argues that scientists have much to gain from being able to easily share data with people in other fields, but I would be cautious with this. I am not questioning the notion that sharing data facilitates the production of knowledge. I am, however, concerned that if error and uncertainty are significantly present and not well communicated, it can lead to severe divides and unnecessary arguments within fields of study. We all know how easily maps and data can be manipulated, for example, to convince a viewer of a point of view, so perhaps issues such as communicating error need to be better addressed as cyberinfrastructures are developed. From this, perhaps data will not only be more freely available, but also more reliable.

– jeremy

Those silly mountains…

Thursday, February 16th, 2012

I can relate to Andrew’s comment about Montrealers accepting a false north, which I find very interesting. In Vancouver, everyone knows that the local mountains are north; however, they aren’t actually. Despite this, it’s close enough for the purposes of travelling around the city, and this is an important aspect of mental maps. By no means do I intend to flare up the ‘what is a mountain’ debate, but if a person’s mental map incorrectly associates a geographic feature with a compass reading in order to improve their navigational abilities, what does that say about the accuracy of their mental map?

Perhaps, as Tversky et al. illustrate, this notion reflects upon the nature of how we schematize, a process which accepts a loss of detail to “allow for efficient memory storage, rapid inference-making and integration from multiple media.” On the other hand, there may be more to this issue, as our ability to incorporate an individual’s cognitive map into a GIS is another problem that arises. How can we display and compare the landmarks, nodes, paths, etc. of cognitive maps when they are all, for example, represented at varying scales? To go back a step, how can we even be sure that the process of drawing a mental map isn’t completely fraught with error? These ideas relate to the varying ontologies that exist and trying to reconcile the differences between them, which – as we all know – is an extremely complicated task.

On a somewhat related note, everyone who is a map lover/artist (which I’m sure all of you are) should check out:

http://spacingmontreal.ca/2012/02/14/attention-all-map-lovers-spacings-creative-mapping-contest/

– jeremy

Tversky et al and Micro-Spatial Cognition

Thursday, February 16th, 2012

Tversky et al.’s article on spatial cognition categorizes our understanding of space into three inter-related categories: the outside, navigable world; the space around our bodies; and the space consisting of our bodies. At an individual level, it is axiomatically clear that we do not conceive of ourselves or our surroundings as a two-dimensional space. It is also interesting to examine how detailed spatial cognition of our body can enable better physical movement.

There are clear examples of the disconnect between our cognition of our body and representing it in a two-dimensional form. Ask any person about the difficulty they had in trying to draw the hand of a person holding an object without a visual reference. This extends beyond the technical expertise of being able to draw the fingers in perspective, since it is difficult to simply imagine how the fingers position themselves in relation to each other in two dimensions while remaining ‘realistic’ to our minds. It can be inferred that we cognize the space of our bodies and our nearby surroundings in three dimensions (and thus rich in detail) because we have so much experience in these local matters. Does this raise an implication, then, that our 2-D conceptualizations of the ‘far outside world’ are relatively poor in detail? By extension, do our two-dimensional maps reinforce a poorly performing cognition of space?

On another point, I would argue that athletes have a heightened awareness of the space of their surroundings and of their body. The important functional relevance that the authors identified is likely to be stronger and more varied with athletes, since they have a more frequent and wider range of motion. Speaking from personal experience (I do martial arts), being able to mentally picture where my feet will land, where my arms must move, and how much space I require has helped me execute complex moves. Being able to orient yourself mentally is key in sports such as figure skating, and allows people to move better. Geography is in your body!

–  Madskiier_JWong

A Model for Mental Mapping?

Thursday, February 16th, 2012

Tversky et al.’s explanation of mental spaces as “built around frameworks consisting of elements and the relations among them” (516) reminds me of an entity-relationship model. The mental framework we have could consist of:

– Entities in line with Lynch’s city elements, and touched on in the Space of Navigation

  • Paths
  • Edges
  • Districts
  • Nodes
  • Landmarks

– Relationships to associate meaning between entities

  • Paths leading to landmarks
  • Edges surrounding districts

–  Attributes distinguishing the characteristics of an entity

  • Significance of a landmark
  • Width of a path (maybe depicting how frequently it is used for travel opposed to actual width)
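To make this framing concrete, here is a minimal sketch of how the entity-relationship schema above might be encoded. This is my own illustration, not anything from Tversky et al. or Lynch; all class names, relationship kinds, and attribute values are invented for the example.

```python
from dataclasses import dataclass, field

# Lynch's five city elements serve as the allowed entity types
ELEMENT_TYPES = {"path", "edge", "district", "node", "landmark"}

@dataclass
class Entity:
    name: str
    element_type: str  # one of ELEMENT_TYPES
    # attributes distinguishing the entity, e.g. perceived significance,
    # or a width reflecting how often a path is travelled rather than
    # its physical width
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    kind: str        # e.g. "leads_to", "surrounds"
    source: Entity
    target: Entity

# A fragment of one (hypothetical) person's mental map
main_street = Entity("Main Street", "path", {"perceived_width": "heavily travelled"})
clock_tower = Entity("Clock Tower", "landmark", {"significance": "high"})
relations = [Relationship("leads_to", main_street, clock_tower)]

for r in relations:
    print(f"{r.source.name} {r.kind} {r.target.name}")  # prints "Main Street leads_to Clock Tower"
```

Even a toy schema like this makes the translation problem visible: the entities digitize easily, but attributes like “significance” only mean something once the debriefing described below fills them in.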

I agree with other posts that this article needed a greater theoretical grounding within GIS. I struggle to see what cognitive maps can be used for, but with this simplified schema in mind, can we translate these cognitive maps into usable data in a GIS? Maybe, but I think we would have to be very meticulous to grasp the nuances in spatial perception and cognition, and therefore the relationships between entities.

Cognitive mapping methodology stresses the importance of debriefing after the maps are made. Discussions must be held in order to establish the reasoning behind why things are placed in certain locations, why some things are deemed to have greater importance, etc. I don’t think that a simply digitized cognitive map will serve much purpose (as a pedagogical tool or otherwise) without knowing the meaning behind it. Each user will have different experiences leading them to perceive different things—things that I don’t think we can make much sense of without dealing with the nitty-gritty relationships between entities.

-sidewalk ballet

The Space of Navigation

Thursday, February 16th, 2012

The effects of the different elements, such as frame of reference (the hierarchical encoding example), were all very interesting. What would be interesting to know, though, is whether certain of these elements are more dominant at certain scales. If we could identify that everyone thought at the provincial scale in the way of the hierarchical example, then we could develop a framework for how to communicate and teach people geographic information.

One problem I had with the article, though, was that it came very much from a cog. sci. (or similar) approach, rather than a geography/geospatial view. What effect do the statements about our mental representations of space have on how we should do things in the future? Are the different biases observed in the studies necessarily a bad thing? Should we shape our data output, such as local maps, to match these methods of storing information in the brain, or should we stick to something that is as ‘accurate’ a representation of the ground surface as possible? At the very least, more knowledge of how humans perceive their environment should help us determine what we are doing wrong when presenting geographic information, an area that would have been interesting to see addressed in this article.

-Peck

GCI – a system of systems

Thursday, February 16th, 2012

Sometimes I get the feeling that people view GCI as a single entity/unit. It isn’t. It is, to misquote Yang, a ‘system of systems’. This makes GCIs seem very flexible and able to do anything – as improvements in any of the component systems will advance the GCI forward. However, the challenges are still immense. GCIs may be used as a way of sharing, analysing, and storing data, but they are still limited by the rules we have, such as the semantic framework for sharing data. This may make a GCI start to look a little cumbersome, at least when you view it as a single entity. This is something I am not sure of – whether GCIs are adaptive to changing environments such as ontologies.

Future changes were also interesting. Virtual Organisations could become more permanent as enabling technologies decrease physical limitations. GCIs, though, are still relatively closed environments and may benefit from more open sharing. This is what is expected to happen with the shift to ‘geospatial cloud computing’. However, the article doesn’t really define geospatial cloud computing – what’s the difference? Aren’t we already partly there?

 

-Peck

Someone Call INTERINTERPOL! I’m Being Boarded by Pirates!

Thursday, February 16th, 2012

Cyberinfrastructures are a little outside of my comfort zone so I’m not sure I completely understand how they work, but I like how Yang places it in real world context.

It seems like the purpose of a cyberinfrastructure is to get more done, in less time, in more than one place. Great concept, right? I agree that this sounds great, but I also agree with some of the criticisms that Yang brings up. What if the type of research that you are performing is illegal in some countries, but not in others? The other day at work I heard some pretty well-known DJs talking about piracy. The first DJ was concerned that he would be fined for having pirated material on his computer. The other DJ brushed it off and stated that he would simply move his server to “somewhere like Thailand or Botswana” where it isn’t illegal. What happens if a computer stores some of that information somewhere it is illegal? Does the person responsible for the content get charged back home even if said law does not exist in that country, or does he have to be extradited first? Does the country where it isn’t illegal have to do anything at all? I personally don’t think so, but it brings up some valid concerns. Will an international cyber-law enforcement agency come to fruition at some point soon? INTERINTERPOL (International Internet Criminal Police Organization), maybe.

On another note, but still somewhat related to international frontiers, I like how Yang continues the debate regarding ontologies. What becomes an official ontological language? How are we supposed to accommodate foreign languages in Web 3.0? Will many key words and ontological definitions from several languages all be linked to one parcel of data labelled “10010101”, for example?

I don’t know much about programming or cyberinfrastructures but I get the impression that the issues that Yang brings to the surface are increasingly important. I don’t think that the solutions are impossible, I’m simply curious to know how the laws form around cloud servers and cyberinfrastructures.

Andrew

 

Using GCI Without Thinking

Thursday, February 16th, 2012

Chaowei Yang and the other authors of “Geospatial Cyberinfrastructure: Past, Present and Future” believe that the evolution of GCI “will produce platforms for geospatial science domains and communities to better conduct research and development and to better collect data, access data, analyze data, model and simulate phenomena, visualize data and information, and produce knowledge” (264). However, to borrow from Bruno Latour in Science in Action, you can’t help but wonder how much of all of this might just end up being a black box for many disciplines that utilize geospatial data but don’t question how it’s presented and processed. Could this be the quietest revolution in GIS?

The idea of GCIs as black boxes should come as no surprise. Large numbers of people utilize technology that “brings people, information, and computational tools together to perform science” without questioning the underlying “data resources, network protocols, computing platforms and computational services” (267) that help them attain their goals. By using the term black box, I emphasize the meaning Latour intends: a concept or tool that most people don’t investigate beyond accepting that those in the GCI field have questioned it and made sure it functions.

While I agree with SAH about the exciting potential “a large infrastructure supporting free global geospatial data” holds, I wonder how many people utilizing this network will truly appreciate it. A great many people working in academia, no doubt. However, users who don’t possess such an academic background or connections to this community might also interact with and contribute to this data source even as GCI remains a black box. While the democratic aspects of this are exciting, I also wonder how we might filter so much data and use it most productively (and ensure its accuracy) in light of the authors’ questions about how best to deal with real-time data.

-ClimateNYC

Spatial Cognition and the Elastic Brain

Thursday, February 16th, 2012

Last week I stepped out of the St. Joseph metro station at a stop I don’t normally exit. I gathered my bearings by assessing what surrounded me and began to walk to my final destination. After a second, I stopped to check the map on my phone (a reaction I now have—my iPhone apparently rules my life) and realized I was walking in the wrong direction. “Thank iPhone!” I thought to myself, plugged my headphones in, and began the trek towards Saint Laurent. After 15 minutes I stopped and realized I had been walking in the wrong direction. I was right (or left??) all along!

I checked my phone and realized that with the new update, my phone now orients the Google maps in relation to where I’m pointing. Assuming that the map on my phone pointed north, when I was actually facing south, I ended up guiding myself in the wrong direction.

I really enjoyed the article about spatial cognition. It’s fascinating how we instinctively orient our world “North.” I believe, though, that this instinct is not actually instinct; I believe it’s a product of our upbringing. Montrealer North is never actually north. Collectively, Montrealers accept this false north; we are, however, aware that our conception of North is in reality more north-west. The human brain is extremely elastic! It has the power to re-orient itself after wearing “inverted goggles” (perceptual adaptation) and the power to re-wire language and thought after a stroke (http://www.radiolab.org/2010/aug/09/). I imagine that no matter the convention, the brain has the ability to adapt to such changes.

If only it could turn off the compass on my iPhone…

Andrew

What do we Do with what we Know? Using Spatial Cognition

Wednesday, February 15th, 2012

The “Three Spaces of Spatial Cognition” article on three types of spaces was perhaps an interesting introduction to this way of thinking, but I felt it was lacking in its ability to situate this knowledge within the larger domain of geography.  It seemed evident that there was some agreement on how people perceive themselves with relation to space, and how they perceive space itself, but I would have liked a more in depth discussion of what we have been doing with that knowledge, or how it could be applied.  Perhaps a comprehensive overview would be too much for this one paper, but it would have been useful with regards to conceptualizing how this knowledge is used and useful.

I think there are a few possibilities that would have been pertinent to mention.  For example, maps as we traditionally know them are generally situated in a northward manner, and have common landmarks: roads, rivers, large place names, important topography, and so on.  Is this format useful for humans when thinking of the way we conceptualize space?  If we all orient ourselves based on various prior exposures and development, is it possible for a singular map to suit the needs of many?  Stemming from this would be an interesting question about the future of geovisualization and more dynamic “maps”, such as in-car navigation systems.  How might these be adapted to best suit the needs of the user?  In-car navigation systems often tilt the map based on the direction the car is going, so the next move can be conceptualized with regards to where the driver is facing–is this effective?  Does it make decisions happen faster?

These are the kinds of questions I would have liked to see addressed, or at least mentioned, in this introduction, to communicate the importance of understanding WHY this knowledge of ourselves in space is “essential to our very survival”.

sah

The Appeal of GCIs

Tuesday, February 14th, 2012

The concept of geospatial cyberinfrastructures seems to draw from all aspects of GIS: where it came from and where it is going.  The Yang et al article was a very thorough introduction to GCIs and their uses and limitations.  It also seemed to incorporate many of the advancements in GIS that we have read about over the last few weeks, and presented an opportunity to visualize how all these technologies may work together, their strengths, and their weaknesses.

This topic seems very current, as you hear more and more today of cloud computing, information being held and hosted on the world wide web, and so on.  It emphasized even more the need for shared knowledge and languages, good metadata, and fast processing.  The wealth of possibilities for GCI, as well as the inclusion of domains where it is already useful, was an interesting aspect of this article as well.

What I found to be the most important limitation, that seemed to run through not only most domains and uses mentioned in this article, but recalled as well many of the other tools we have discussed, was the difficulty in dealing with immense amounts of constantly flowing, real-time data.  This issue in itself seems to incorporate many of the needs mentioned above, and is really the crux of what, in my reading, GCIs are about: the ability to successfully, quickly, and knowledgeably share information, questions, and expertise, analyze and upload data, and more.  However, I agree with Madskiier in their suggestion that GCIs are very global by nature, and thus would presume that, through adequate cooperation, this could be a task undertaken by many, as opposed to just a few.

As a student, I found this prospect incredibly interesting, and it drew my mind to the countless hours spent searching for geospatial data for simple research projects.  While students perhaps have fewer connections than established scientists, we also have the power of McGill behind us–and yet finding (good) data is still tremendously time-consuming and challenging in many cases.  The idea of a large infrastructure supporting free global geospatial data is quite appealing, and something I hope to see come to fruition.

sah

Yang et al and the Politics of Geospatial Cyberinfrastructure

Tuesday, February 14th, 2012

This article gives a comprehensive summary of the functions geospatial cyberinfrastructure (GCI) provides to the public. Yang et al. detail the interlocking/interdependent nature of GCI components that allow the storage, processing, and sharing of vast amounts of data.

I found that Yang et al. impressed upon me the near-physicality of building and constructing GCI to keep up with our data demands, much like building new roads to handle increased traffic. From the article, it is clear that GCI is the fledgling structure that must support the burden of terabytes of data. The major difference, in my view, is that GCI is a global, common property, unlike roads that only benefit domestic drivers.

The upshot of the global necessity of GCIs is their inevitable politicization. While the authors stress the scientific and technological benefits of improved GCI, the article understresses the political tensions that oppose standardized CIs. Two such examples are science domains eager to stake claim to their own turf and uniqueness (mentioned by the authors), and everyday citizens who have privacy concerns about being monitored and having their information integrated into a large database (see the outrage following every update of Facebook’s policies). These issues pose as significant a challenge as the technological problems of cross-integration.

I truly believe that the politics of turf-staking will fade with the advent of more data sharing made possible by improved GCI. Authoritative scientists simply have too much to gain from being able to easily access other fields’ data and advance their own understandings. The general public is even more malleable than purist scientists in this regard and is unlikely to care about what their work is labelled as; their entry into the ‘sciences’ is possible due to the flexibility and ease of access of open-source online software. The second challenge, privacy concerns, is more complicated to me, particularly given the migration of data’s lifecycle onto the Internet (recall that Yang defines the lifecycle as getting, validating, documenting, analyzing, and supporting decisions). In the past, data was often only offered online as raw acquired data or as finished products. As more controversial analyses become visible online through data-discovery GCIs, this will most likely touch off a firestorm of public debate over the pros and cons of a well-integrated and pervasive GCI.
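That lifecycle – getting, validating, documenting, analyzing, and supporting decisions – can be pictured as a staged pipeline. The sketch below is only my own illustration of the idea, with invented function names and made-up sample data, not anything specified by Yang et al.:

```python
# Hypothetical pipeline mirroring the data lifecycle:
# get -> validate -> document -> analyze -> support decisions.

def get_data():
    # stand-in for acquisition from sensors or a GCI data portal
    return [{"station": "A", "temp_c": 21.5}, {"station": "B", "temp_c": None}]

def validate(records):
    # drop records with missing measurements (uncertainty is introduced here
    # if this filtering is never communicated downstream)
    return [r for r in records if r["temp_c"] is not None]

def document(records):
    # attach minimal metadata so later users can assess the data's provenance
    return {"source": "example sensors", "count": len(records), "records": records}

def analyze(dataset):
    temps = [r["temp_c"] for r in dataset["records"]]
    return sum(temps) / len(temps)

def support_decision(mean_temp):
    # toy decision rule at the end of the lifecycle
    return "heat advisory" if mean_temp > 30 else "no action"

mean_temp = analyze(document(validate(get_data())))
print(support_decision(mean_temp))  # prints "no action"
```

The point of writing it out is to show where things can silently go wrong: one record was quietly discarded during validation, and unless the documentation stage records that fact, every later stage inherits the gap.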

– Madskiier_JWong

Marginalized communities and qualitative data

Friday, February 10th, 2012

Throughout my reading of Elwood’s article, marginalized communities came to mind, mostly because of a certain level of rigidity in her review of emerging geoviz technologies. I found particularly interesting the comparison made between ‘public’ and ‘expert’ technologies, in which the status quo of GIS, comprising the ‘expert’ (standardization of data) realm, is threatened by the ‘public’ (wiki, geo-tagging, Web 2.0, VGI) realm. I agree with Andrew “GIS” Funa’s point on standardization. What is our inherent need to do this with all of our data? And what happens when standardization cannot be applied? More specifically, how relevant is an expert technology to marginalized communities if no one is willing to apply that technology?

There is a mention of the ‘excitement’ and high hopes that authors have for new geoviz technologies to represent urban environments; however, the article does not expand any further. The article does, however, note the term ‘naive geography’ and its “qualitative forms of spatial reasoning” (259). Presuming one can safely state that representing marginalized populations is a qualitative problem, ‘expert’ technologies tend not to focus on these issues. According to Elwood, qualitative problems are more difficult than quantitative problems, “where exact measurements or consistent mathematical techniques are more easily handled” (259). So what do we do about unstructured, shifting, context-dependent human thought? Should we not try to digitally represent these data because they may be too difficult to decipher? To draw linkages and discover patterns? Will qualitative data always be at a loss because they will not fit an exact algorithm? I think we should take the spark of hope that MacEachren and Kraak gave us and strive beyond some of the limitations outlined by Elwood.

-henry miller

So many challenges, so many opportunities

Friday, February 10th, 2012

MacEachren and Kraak address the notion of visualizing the world and what exactly this entails. The article was written over a decade ago and is still as relevant today as it was then. “…80 percent of all digital data generated today include geospatial referencing” (1). A powerful sentence that altered my perspective on geographic visualization (geoviz) when I first read this article a few years ago. There is so much to explore, to reveal; the sky is the limit.  Geoviz is about transformations and dichotomies: the unknown versus the known, public versus private, and high versus low map interaction (MacEachren, 1994). It aims to determine how data can be translated into information that can further be transformed into knowledge. MacEachren and Kraak provide a critical perspective into the world of geoviz and its vexing problems. They do a good job of convincing us that a map is more than a map. Maps have evolved such that “maps [are] no longer conceived of as simply graphic representations of geographic space, but as dynamic portals to interconnected, distributed, geospatial data resources” (3). “Maps and graphics…do more than ‘make data visible’, they are active instruments in the users’ thinking process” (3).

Out of the many challenges that we still face (also noted by Elwood), there are some that have been tackled successfully. The one I will focus on is ‘interfaces’ in relation to digital earths. Arguably, no one would have imagined the progress made with digital earths, especially Google Earth (GE), back in 2001. GE remains untouchable in its user-friendly display; mash-ups are made possible through Volunteered Geographic Information (VGI), including programmers who contribute free software interoperable with GE (GE Graph, Sgrillo). However, the abstraction-versus-realism issue is as relevant as ever. The quality and accuracy of the data may be low, yet the information visualized will look pristine and vibrant, thus deceiving the user into believing otherwise. How do we then address levels of accuracy? Abstraction? Realism? Thus, we have challenges, but we also have progress. MacEachren and Kraak’s article refocuses our attention on the pertinent obstacles that we should be mindful of when exploring, discovering, creating or communicating geoviz. To move away from the “one tool fits all mentality” (8). To unleash the creativity from within.

MacEachren’s simple yet powerful geovisualization cube.

 

-henry miller

Heterogeneity in Geovisualization Research

Friday, February 10th, 2012

In Sarah Elwood’s 2008 paper, one of the most important features of current geovisualization research is identified as “heterogeneity”. First, the sources from which geographic information is collected for visualization are heterogeneous. Nowadays, users can publish their geospatial information through GeoWeb applications, mobile technologies, and social network media. Moreover, remote sensing technologies continuously provide earth observation data with fine spatiotemporal and spectral resolution. Different geospatial databases open another portal for geographic information science research.

Secondly, the geospatial information visualized becomes heterogeneous. Geovisualization is no longer limited to the professional community; users can customize it with well-designed geovisualization tools. Due to different user interests, the geospatial information that they choose to visualize is heterogeneous. For example, Google Maps can display information about Chinese restaurants in Montreal, but users still need to access a restaurant discussion board to determine which one they will go to for dinner. All of this geospatial information is displayed to users via different geovisualization tools.

Thirdly, the usages of these heterogeneous geovisualization tools are themselves heterogeneous. Some GeoWeb applications are developed for government management, so the geospatial information is carefully analyzed for decision-making support. For an emergency system, we require that geospatial data be collected and updated in real time and that geographic location information be provided with high accuracy. Although these two systems might both be developed on top of Google Maps, their architectures are quite different due to their heterogeneous usages.

Finally, the users of geovisualization systems are also heterogeneous. They can be travel agencies, business analysts, research scientists and so on. The heterogeneity of geovisualization has greatly increased the complexity of GIS research, which requires correspondingly heterogeneous research methodologies.

–cyberinfrastructure

Cartography 2.0: Mapping a Web of Information

Friday, February 10th, 2012

Mapping as cartographer James Cook knew it is no more, and yet it is still fully present. Confused? Let me explain. In their paper entitled "Research Challenges in Geovisualization", MacEachren and Kraak state that maps of the past were designed to be not only a visual aid to navigation, but also a database of spatial information (pg 3): place names, bays, coves and cities, along with related information such as their position (absolute and relative) and the distance between them and neighbouring features, to name a few.

Today, mapping is still very much a graphical aid to data visualization, but unlike in the past, maps are not just a static database of places and locations.  Today's GeoWeb 2.0 and data visualization platforms like Google Earth can do so much more than display local data; they have the whole internet as a database (pg 3) and can draw on information located in servers, and on subjects, all over the world with a single URL or script.
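To make the "internet as a database" idea concrete: geospatial data on the web is typically just structured text, such as GeoJSON, that any map client can fetch from a URL and render. The sketch below shows only the parsing step; the feature, its name, and its coordinates are invented for illustration.

```python
import json

# A tiny GeoJSON document of the kind a web map might fetch from any URL.
# In practice the same text could live on a server anywhere on the internet;
# this sample and its place name are made up for illustration.
geojson_text = """
{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"name": "Sample Cove"},
     "geometry": {"type": "Point", "coordinates": [-73.57, 45.50]}}
  ]
}
"""

data = json.loads(geojson_text)

# Pull out each feature's name and position, as a map client would
# before drawing markers on the base map.
places = {f["properties"]["name"]: tuple(f["geometry"]["coordinates"])
          for f in data["features"]}
print(places)  # {'Sample Cove': (-73.57, 45.5)}
```

Because the format is an open, self-describing text standard, the same few lines work whether the data comes from a government server, a hobbyist's site, or a social platform, which is what lets platforms like Google Earth treat the web itself as their database.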

This means that the possibilities of today's cartography are endless: we are not even limited to two, or even three, dimensions any more.  Virtual Earths (3D globe surfaces) can display 2D map data, 3D details such as buildings and topography and, most importantly, changes over time through time-sequenced raster playback.  In fact, the display of change over time in Virtual Earths, rudimentary as it is, is still as good as, if not better than, many of the solutions proposed by GIScientists for use in traditional GIS analysis.

In conclusion, today's mapping is just as useful as traditional mapping, but more so.  We may not all be Cook, but we have access to a very powerful set of geovisualization and analysis tools today that can only spell great things for our future and the future of GIS.

-rsmithlal

Is enriching data feasible?

Friday, February 10th, 2012

One strategy suggested by Elwood is that "enriching data with information will help the user assess heterogeneity", although to me this does not seem to help solve or manage the problem of data heterogeneity. It has been mentioned in class that data is not typically well documented in GIS, and that one way to provide information about it is to create metadata. In the world of the internet, where massive amounts of data now have spatial references and in many cases change rapidly, it is not practical to try to provide more information about every piece of data in order to reduce heterogeneity and standardize it. Since additional data about data would create even more information to sift through, this also seems rather counterintuitive. While I recognize there is heterogeneity in data, I do not see the use of merely assessing heterogeneity; I see much more use in actually working with heterogeneous data, and in focusing more time and effort on promoting methods to do so, such as working within particular contexts, as Elwood mentions.
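For context, a metadata record is simply structured "data about data". The sketch below shows what a minimal record and one crude check against it might look like; the field names and the dataset described are hypothetical, loosely echoing common geospatial metadata elements rather than any particular standard.

```python
# A minimal, illustrative metadata record for a spatial dataset.
# Field names are invented for this sketch, not taken from a real standard.
metadata = {
    "title": "Montreal restaurant locations",
    "source": "user-contributed GeoWeb points",
    "crs": "EPSG:4326",           # coordinate reference system
    "last_updated": "2012-02-10",  # ISO date, so string comparison works
    "accuracy_m": 50,              # stated positional accuracy, in metres
}

def is_fresh(record, on_date):
    """Crude heterogeneity check: was the data updated on or after a date?"""
    return record["last_updated"] >= on_date

print(is_fresh(metadata, "2012-01-01"))  # True
```

Even this tiny record illustrates the post's worry: every such field must be written and kept current by hand, which scales poorly once rapidly changing, spatially referenced web data is involved.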

-Outdoor Addict

 

Evil 2.0: Surveillance, Tracking and Privacy with the “New GIS on the Block”

Friday, February 10th, 2012

Geospatial technology, and GIS in particular, has long been associated with the war effort.  To label GIS as part of the war machine is not my intention in this post, but rather to highlight the similarities between this new generation of the geospatial web and the old GIS standard that we've all come to love and hate.  What is referred to as the new geospatial web includes geovisualization applications such as Google Maps, Google Earth and OpenStreetMap.

In her paper entitled "Geographic Information Science", Elwood states that certain scholars view this new generation of "not-quite GIS" as a continuation and proliferation of old military ideas of GIS: namely, new ways of tracking individuals, exclusion from events and other situations, and, what I feel to be most important, steadily decreasing privacy protection. Starting with older social networks such as Hi5, Xanga and MySpace, and then most noticeably with Facebook, we have been steadily sharing more and more information about ourselves on the web.

With the recent widespread use of Google Maps and other geovisualization technologies such as Foursquare, we are now publicizing our very position down to the (x, y) coordinates, at a rate which is alarming at best and disturbing at worst.  This geospatial information could be used to find you, stalk you and even abduct you, should some government agency ever desire to.  On a less serious note, it can also be used to learn your daily patterns and when you are not at home, giving a would-be burglar a generally good idea of whether or not you'll be in.

In her paper, Elwood gives the example of a website called www.rottenneighbours.com, where users are encouraged to submit information about their neighbours' bad habits and unkind activities, to be published in an application built on the Google Maps API.  Posting info on your neighbours online could also be damaging to the poster's own reputation if the comments were traced back to their origin.

I personally feel that this over-zealous sharing of spatial information is alarming, as users seem unaware of the dangers inherent in publicizing their location information, especially when it is combined with geovisualization technologies and applications such as Google Maps, and particularly Foursquare and Google Latitude, whose whole purpose is to let others know where you are at any given time.

The link below contains a satirical video created by the Onion News Network (a parody outlet known for portraying fake news in a matter-of-fact way). The video makes reference to Facebook being an application developed by the CIA to harvest personal information about users, saving the CIA money and man-hours in the field. It is a comical look at how crazy it is that we continually post personal information on the ever-public interwebz.

CIA’s ‘Facebook’ Program Dramatically Cut Agency’s Costs

-rsmithlal