Archive for February, 2012

What do we Do with what we Know? Using Spatial Cognition

Wednesday, February 15th, 2012

The “Three Spaces of Spatial Cognition” article was perhaps an interesting introduction to this way of thinking, but I felt it failed to situate this knowledge within the larger domain of geography.  It seemed evident that there is some agreement on how people perceive themselves in relation to space, and how they perceive space itself, but I would have liked a more in-depth discussion of what we have been doing with that knowledge, or how it could be applied.  Perhaps a comprehensive overview would be too much for this one paper, but it would have helped in conceptualizing how this knowledge is actually used, and useful.

I think there are a few possibilities that would have been pertinent to mention.  For example, maps as we traditionally know them are generally oriented northward and share common elements: roads, rivers, large place names, important topography, and so on.  Is this format useful given the way we conceptualize space?  If we all orient ourselves based on different prior exposures and development, can a single map suit the needs of many?  Stemming from this is an interesting question about the future of geovisualization and more dynamic “maps”, such as in-car navigation systems.  How might these be adapted to best suit the needs of the user?  In-car navigation systems often tilt the map based on the direction the car is travelling, so the next move can be conceptualized relative to where the driver is facing. Is this effective?  Does it make decisions happen faster?
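
As an aside on how that tilt works: a “track-up” display is just a rotation of the map around the vehicle by its compass heading. A minimal Python sketch of the transform (my own illustration, not from the article):

```python
import math

def to_track_up(east, north, heading_deg):
    """Rotate a map offset (east, north), measured from the vehicle,
    so that the car's direction of travel points up the screen.
    heading_deg is a compass heading: 0 = north, 90 = east."""
    theta = math.radians(heading_deg)
    screen_x = east * math.cos(theta) - north * math.sin(theta)
    screen_y = east * math.sin(theta) + north * math.cos(theta)
    return screen_x, screen_y

# A car heading due east: a landmark 100 m ahead of it (100 m to the
# east) is drawn straight up the screen rather than off to the right.
print(to_track_up(100, 0, 90))  # -> roughly (0.0, 100.0)
```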

These are the kinds of questions I would have liked to see addressed, or at least mentioned, in this introduction, to communicate the importance of understanding WHY this knowledge of ourselves in space is “essential to our very survival”.

sah

The Appeal of GCIs

Tuesday, February 14th, 2012

The concept of geospatial cyberinfrastructures seems to draw from all aspects of GIS: where it came from and where it is going.  The Yang et al. article was a very thorough introduction to GCIs and their uses and limitations.  It also incorporated many of the advancements in GIS that we have read about over the last few weeks, and presented an opportunity to visualize how all these technologies might work together, along with their strengths and weaknesses.

This topic seems very current, as you hear more and more today about cloud computing, information being held and hosted on the world wide web, and so on.  It emphasized even further the need for shared knowledge and languages, good metadata, and fast processing.  The wealth of possibilities for GCI, as well as the inclusion of domains where it is already useful, was an interesting aspect of this article as well.

The most important limitation, in my view, one that runs through most of the domains and uses mentioned in this article and recalls many of the other tools we have discussed, is the difficulty of dealing with immense amounts of constantly flowing, real-time data.  This issue in itself incorporates many of the needs mentioned above, and is really the crux of what, in my reading, GCIs are about: the ability to successfully, quickly, and knowledgeably share information, questions, and expertise, to analyze and upload data, and more.  However, I agree with Madskiier’s suggestion that GCIs are global by nature, and would thus presume that, with adequate cooperation, this could be a task undertaken by many, as opposed to just a few.

As a student, I found this prospect incredibly interesting, and it drew my mind to the countless hours spent searching for geospatial data for simple research projects.  While students perhaps have fewer connections than established scientists, we also have the power of McGill behind us, and yet finding (good) data is still tremendously time-consuming and challenging in many cases.  The idea of a large infrastructure supporting free global geospatial data is quite appealing, and something I hope to see come to fruition.

sah

Yang et al and the Politics of Geospatial Cyberinfrastructure

Tuesday, February 14th, 2012

This article gives a comprehensive summary of the functions geospatial cyberinfrastructure (GCI) provides to the public. Yang et al. detail the interlocking, interdependent nature of GCI components that allow the storage, processing, and sharing of vast amounts of data.

I found that Yang et al. impressed upon me the near-physicality of building and constructing GCI to keep up with our data demands, much like building new roads to handle increased traffic. From the article, it is clear that GCI is the fledgling structure that must support the burden of terabytes of data. The major difference, in my view, is that GCI is a global, common property, unlike roads that only benefit domestic drivers.

The upshot of the global necessity of GCIs is their inevitable politicization. While the authors stress the scientific and technological benefits of improved GCI, they understate the political tensions that oppose standardized CIs. Two such examples are science domains eager to stake a claim to their own turf and uniqueness (mentioned by the authors), and everyday citizens who have privacy concerns about being monitored and having their information integrated into a large database (see the outrage following every update of Facebook’s policies). These issues pose as significant a challenge as the technological problems of cross-integration.

I truly believe that the politics of turf-staking will fade with the advent of more data sharing made possible by improved GCI. Authoritative scientists simply have too much to gain from being able to easily access other fields’ data and advance their own understandings. The general public is even more malleable than purist scientists in this regard and is unlikely to care about how their work is labelled; their entry into the ‘sciences’ is possible thanks to the flexibility and ease of access of open-source online software. The second challenge, privacy concerns, is more complicated to me, particularly given the migration of data’s lifecycle onto the Internet (recall that Yang defines the lifecycle as getting, validating, documenting, analyzing, and supporting decisions). In the past, data was often only offered online as raw acquired data or as finished products. As more controversial analyses become visible online through data-discovery GCIs, this will most likely touch off a firestorm of public debate over the pros and cons of a well-integrated and pervasive GCI.

– Madskiier_JWong

Marginalized communities and qualitative data

Friday, February 10th, 2012

Throughout my reading of Elwood’s article, marginalized communities came to mind, mostly because of a certain rigidity in her review of emerging geoviz technologies. I found particularly interesting the comparison made between ‘public’ and ‘expert’ technologies, where the status quo of GIS, comprising the ‘expert’ realm (standardized data), is threatened by the ‘public’ realm (wikis, geo-tagging, Web 2.0, VGI). I agree with Andrew “GIS” Funa’s point on standardization. What is our inherent need to do this with all of our data? And what happens when standardization cannot be applied? More specifically, how relevant is an expert technology to marginalized communities if no one is willing to apply that technology?

There is a mention of the ‘excitement’ and high hopes that authors have for new geoviz technologies to represent urban environments; however, the article does not expand any further. The article does, however, note the term ‘naive geography’ and its “qualitative forms of spatial reasoning” (259). Presuming one can safely state that representing marginalized populations is a qualitative problem, ‘expert’ technologies tend not to focus on these issues. According to Elwood, qualitative problems are more difficult than quantitative ones, “where exact measurements or consistent mathematical techniques are more easily handled” (259). So what do we do about unstructured, shifting, context-dependent human thought? Should we not try to digitally represent these data because they may be too difficult to decipher? To draw linkages and discover patterns? Will qualitative data always be at a loss because they will not fit an exact algorithm? I think we should take the spark of hope that MacEachren and Kraak gave us and strive beyond some of the limitations outlined by Elwood.

-henry miller

So many challenges, so many opportunities

Friday, February 10th, 2012

MacEachren and Kraak address the notion of visualizing the world and what this exactly entails. The article was written over a decade ago and is still as relevant today as it was then. “…80 percent of all digital data generated today include geospatial referencing” (1): a powerful sentence that altered my perspective on geographic visualization (geoviz) when I first read this article a few years ago. There is so much to explore, to reveal; the sky is the limit.  Geoviz is about transformations and dichotomies: the unknown versus the known, public versus private, and high versus low map interaction (MacEachren, 1994). It aims to determine how data can be translated into information that can further be transformed into knowledge. MacEachren and Kraak provide a critical perspective on the world of geoviz and its vexing problems. They do a good job of convincing us that a map is more than a map. Maps have evolved such that they are “no longer conceived of as simply graphic representations of geographic space, but as dynamic portals to interconnected, distributed, geospatial data resources” (3). “Maps and graphics…do more than ‘make data visible’, they are active instruments in the users’ thinking process” (3).

Out of the many challenges that we still face (also noted by Elwood), some have been tackled successfully. The one I will focus on is ‘interfaces’ in relation to digital earths. Arguably, no one back in 2001 would have imagined the progress made with digital earths, especially Google Earth (GE). GE remains untouchable in its user-friendly display; mash-ups are made possible through the help of Volunteered Geographic Information (VGI), including programmers who contribute free software interoperable with GE (GE Graph, Sgrillo). However, the abstraction-versus-realism issue is as relevant as ever. The quality and accuracy of the data may be low, yet the information visualized will look pristine and vibrant, thus deceiving the user into believing otherwise. How do we then address levels of accuracy? Abstraction? Realism? We have challenges, but we also have progress. MacEachren and Kraak’s article refocuses our attention on the pertinent obstacles that we should be mindful of when exploring, discovering, creating, or communicating through geoviz: to move away from the “one tool fits all mentality” (8), and to unleash the creativity from within.

MacEachren’s simple yet powerful geovisualization cube.


-henry miller

Heterogeneity in Geovisualization Research

Friday, February 10th, 2012

In Sarah Elwood’s 2008 paper, one of the most important features of current geovisualization research is identified as “heterogeneity”. First, the sources from which geographic information is collected for visualization are heterogeneous. Nowadays, users can publish their geospatial information through GeoWeb applications, mobile technologies, and social network media. Moreover, remote sensing technologies continuously provide earth observation data with fine spatiotemporal and spectral resolution, and different geospatial databases open yet another portal for geographic information science research.

Secondly, the geospatial information handled by geovisualization becomes heterogeneous. Geovisualization is no longer limited to the professional community; users can customize it with well-designed geovisualization tools. Because of their different interests, the geospatial information that users choose to visualize is heterogeneous. For example, Google Maps can display information about Chinese restaurants in Montreal, but users still need to access a restaurant discussion board to decide which one to go to for dinner. All this geospatial information is displayed to users via different geovisualization tools.

Thirdly, the usages of these heterogeneous geovisualization tools are themselves heterogeneous. Some GeoWeb applications are developed for government management, so the geospatial information is carefully analyzed for decision-making support. For an emergency system, we require that geospatial data be collected and updated in real time and that geographic location information be provided with high accuracy. Although these two systems might both be developed on top of Google Maps, their architectures are quite different due to their heterogeneous usages.

Finally, the users of geovisualization systems are also heterogeneous. They can be travel agencies, business analysts, research scientists, and so on. The heterogeneity of geovisualization has greatly increased the complexity of GIS research, which in turn requires correspondingly heterogeneous research methodologies.

–cyberinfrastructure

Cartography 2.0: Mapping a Web of Information

Friday, February 10th, 2012

Mapping as cartographer James Cook knew it is no more, and yet it is still fully present. Confused? Let me explain. In their paper entitled “Research Challenges in Geovisualization”, MacEachren and Kraak state that maps of the past were designed to be not only a visual aid to navigation, but also a database of spatial information (pg 3): place names, bays, coves, cities, and related information such as their positions (absolute and relative) and the distances between them and neighbouring features, to name a few.

Today, mapping is still very much a graphical aid to data visualization, but unlike in the past, maps are not just a static database of places and locations.  Today’s Geo-Web 2.0 and data visualization platforms like Google Earth can do so much more than display local data; they have the whole internet as a database (pg 3) and can draw on information located in servers, and on subjects, all over the world with a single URL or script.
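
To make the “single URL” point concrete, here is a minimal Python sketch of the idea. The endpoint is hypothetical (any server publishing GeoJSON would do), and it assumes point features with a name property, so treat it as an illustration rather than anything from the paper:

```python
import requests  # widely used third-party HTTP library

# Hypothetical URL: stands in for any server on the internet
# that publishes geospatial features as GeoJSON.
URL = "https://example.org/data/montreal_parks.geojson"

# One request turns a remote server into a local data source.
features = requests.get(URL, timeout=10).json()["features"]
for f in features:
    name = f["properties"].get("name", "unnamed")  # assumes properties exist
    lon, lat = f["geometry"]["coordinates"][:2]    # assumes Point geometries
    print(f"{name}: ({lat:.4f}, {lon:.4f})")
```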

This means that the possibilities of today’s cartography are endless; we are not even limited to two, or even three, dimensions any more.  Virtual Earths (3D surfaces) can display 2D map data, 3D details such as buildings and topography, and most importantly, changes over time with time-sequenced raster playback.  In fact, the display of change over time in Virtual Earths, rudimentary as it is, is still as good as, if not better than, many of the solutions proposed by GIScientists for use in traditional GIS analysis.

In conclusion, mapping today does everything traditional maps did, and more.  We may not all be Cook, but we have access to a very powerful set of geovisualization and analysis tools that can only spell great things for our future and the future of GIS.

-rsmithlal

Is enriching data feasible?

Friday, February 10th, 2012

One strategy suggested by Elwood is that “enriching data with information will help the user assess heterogeneity”, although to me this does not seem to assist with solving or managing the problem of data heterogeneity. It has been mentioned in class that data are not typically well documented in GIS, and that one way to provide information about them is to create metadata. In a world where massive amounts of internet data now have spatial references, and in many cases change rapidly, it is not practical to provide more information about every piece of data in an attempt to reduce heterogeneity and standardize it. Since additional data about data would provide even more information to sift through, this also seems rather counterintuitive. While I recognize there is heterogeneity in data, I do not see the use of merely assessing heterogeneity; I see much more use in actually working with heterogeneous data, and in focusing time and effort on promoting methods to do this, such as working within particular contexts, as mentioned by Elwood.
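
For a sense of what “data about data” entails, here is a minimal, hypothetical metadata record; the field names are my own illustration, not from Elwood or any formal standard:

```python
# One dataset's metadata: every field below is something a person must
# write and keep current, and that a user must then sift through.
metadata = {
    "title": "Bike paths, Montreal",
    "source": "volunteered (Web 2.0) contributions",
    "crs": "EPSG:4326",            # coordinate reference system
    "last_updated": "2012-02-10",
    "positional_accuracy_m": 15,   # rough positional accuracy
    "attribution": "multiple anonymous contributors",
}
# Multiply this by millions of rapidly changing datasets and the
# documentation burden described above becomes apparent.
```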

-Outdoor Addict


Evil 2.0: Surveillance, Tracking and Privacy with the “New GIS on the Block”

Friday, February 10th, 2012

Geospatial technology, and GIS in particular, have long been associated with the war effort.  To label GIS as part of the war machine is not my intention in this post; rather, I want to highlight the similarities between this new generation of the geospatial web and the old GIS standard that we’ve all come to love and hate.  What is referred to as the new geospatial web includes geovisualization applications such as Google Maps, Google Earth and OpenStreetMap.

In her paper entitled “Geographic Information Science”, Elwood states that certain scholars view this new generation of “not-quite GIS” as a continuation and proliferation of old military ideas of GIS: namely, new ways of tracking individuals, exclusion from events and other situations, and, what I feel to be most important, steadily decreasing privacy protection. Starting with older social networks such as Hi5, Xanga and MySpace, and then most noticeably with Facebook, we have been steadily sharing more and more information about ourselves on the web.

With the recent widespread use of Google Maps and other geovisualization technologies such as Foursquare, we are now publicizing our very position down to its (x,y) coordinates, at a rate that is alarming at best and disturbing at worst.  This geospatial information can be used to find you, stalk you and even abduct you, if some government agency ever desired to do so.  On a less serious note, it can be used to determine your daily patterns, such that someone planning to break into your home would have a generally good idea as to whether or not you’ll be there.
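
How easily do such patterns fall out of location data? A toy Python sketch, with entirely made-up check-ins and thresholds of my own choosing: bin night-time positions into roughly 100 m cells, and the most frequent cell is a decent guess at “home”.

```python
from collections import Counter

# Hypothetical check-ins: (hour of day, latitude, longitude),
# the kind of records location-sharing services accumulate.
checkins = [
    (23, 45.5048, -73.5772), (2, 45.5049, -73.5771),
    (14, 45.5040, -73.5690), (23, 45.5047, -73.5773),
]

# Round coordinates to ~100 m cells and count night-time visits.
night_cells = [(round(lat, 3), round(lon, 3))
               for hour, lat, lon in checkins if hour >= 22 or hour <= 5]
home_cell, nights = Counter(night_cells).most_common(1)[0]
print("probable home cell:", home_cell, "seen on", nights, "occasions")
```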

In her paper, Elwood gives the example of a website called www.rottenneighbours.com, where users are encouraged to submit information about their neighbours’ bad habits and unkind activities, to be published on an application built on the Google Maps API.  Posting information about your neighbours online could also be damaging to the poster’s reputation if the comments could be traced back to their origin.

I personally feel that this over-zealous sharing of spatial information is alarming, as users seem unaware of the dangers inherent in publicizing their location information, especially when it is combined with geovisualization technologies and applications such as Google Maps, and particularly Foursquare and Google Latitude (whose whole purpose is to let others know where you are at any given time).

The link below is a satirical video created by the Onion News Network, known for portraying fake news in a matter-of-fact way. The video makes reference to Facebook being an application developed by the CIA to harvest personal information about users and save the CIA money and man-hours in the field. It is a comical look at how crazy it is that we continually post personal information on the ever-public interwebz.

CIA’s ‘Facebook’ Program Dramatically Cut Agency’s Costs

-rsmithlal


Redefining the Map

Thursday, February 9th, 2012

The article by MacEachren and Kraak was excellent in its introduction to the challenges of geovisualization, while simultaneously fuelling the imagination as to the possibilities of these technologies. As a geographer, I too admittedly love maps, as Andrew GIS also stated. One of the things that fascinated me in this article is that the map has been redefined, a fact that is well advertised by these authors. I have chosen to extract the various phrases used to define what maps are today in order to emphasize this point. Maps are now “inexpensive”, they are “dynamic portals”, they are “interfaces”, they are “realistic” yet “abstract”, they are “forms of representation”, “active instruments in the user’s thinking process” and “metaphors in design of non-geospatial visualization tools” (although I admit I am not exactly sure what this last one means). A picture may be worth a thousand words, and although a paper map is more than a picture and worth many words, maps today cannot be quantified in terms of a mere thousand or even a million words.  I do not mean to say maps were not some of these things in the past, but today they are more than they ever have been. This makes their understanding and analysis more pressing than ever before, and provides the field of geography with yet more reason to expand into the digital realm through more than our largely static, structure-based GIS.

-Outdoor Addict

Cognition’s Role in Geovisualization Research Programs

Thursday, February 9th, 2012

In their article outlining the research challenges faced by the field of geovisualization, Alan MacEachren and Menno-Jan Kraak pose the problem of cognition as a direct relationship between external, dynamic visual representations and how they can “serve as prompts for creation and use of mental representations” (7). They note the existing lack of paradigms for how to conduct research into the cognitive processes at work in geovisualization projects, or into their usability, as a major problem in this field. However, I wonder if this doesn’t put the cart before the horse.

Much of the existing research into geospatial cognition seeks to understand how the human mind works in processing spatial data, particularly how such data are acquired, processed and translated into knowledge. Before we can hope to create user interfaces utilizing geovisualization techniques, shouldn’t we follow this approach and attempt to understand how these digital interfaces might impact cognition of spatial data? The authors set out the goals of establishing a cognitive theory that supports and assesses the usability of “methods for geovisualization”, including those that take advantage of dynamic, animated displays (7). Yet this feels like we are trying to support the cognition of a new field without trying to understand how it actually impacts cognition.

The danger of such an approach is that we are simply writing theory to support pre-articulated goals. Shouldn’t we instead start from a blank slate and then ask what types of cognitive impacts geovisualization might have on how the public processes geospatial data? For example, one researcher in geospatial cognition found that people who learn geographic data from maps, as opposed to experiential data (as in navigating an environment), often had better recall of the data and more accurate perceptions of spatial relationships. Shouldn’t we first try to figure out how cognition of geovisualized data fits into this paradigm before drafting a research agenda for it?

–ClimateNYC

Standardize, Standardize, Standardize

Thursday, February 9th, 2012

The Elwood piece is less focused on geovisualization than the MacEachren and Kraak article. It suggests that in order to make data more global, we should standardize the data. When discussing Kuhn’s piece, Professor Sieber noted that, much like German culture in general, the data was extremely structured. Apparently the models designed by Kuhn run very well because the data is well structured.

That being said, with the amount of data streaming in every day, it seems unfathomable to standardize everything. Elwood suggests that automated standardization is a possibility, but this idea scares me. Imagine a world where you cannot control your own data. It seems that this reality is approaching every day (with the recent blackout protest suggesting imminence). Schuurman also adds that automated data standardization may not be adequate for dynamic data sets; individuals constantly modifying data may be difficult to anticipate. What happens when data is changed and standardized? What happens when one parcel of data under a certain label is relabelled? Will a user be notified of the change? Or will the data be transported to another location?

Andrew GIS

Geovisualization: room for collaboration and virtual environments

Thursday, February 9th, 2012

The article by MacEachren and Kraak is a great read because it not only highlights important challenges in geovisualization but also attends to overarching issues and the kinds of actions needed to address them (which I particularly enjoyed). I strongly agree with the authors when they point out that if we are to meet these challenges, there needs to be an increased emphasis on collaboration between disciplines and countries. Further, researchers themselves must appreciate other perspectives and make a real effort to understand how other disciplines understand the issue, by keeping up with “complementary research” and getting involved with collaborative work.

One area of research related to geovisualization that sparked my interest is the potential of virtual environments. The tension between the need for abstraction or realism in visualization intrigues me, and is something I would be interested in exploring in more depth. Although abstraction is appropriate and useful for certain problems, the experiential qualities VEs offer could be very beneficial for geographic decision-making and alternative thinking, especially since the scales of some geographic problems (climate change) are very large and thus difficult to envision. Further, a realistic geovisualization of our environment, with dynamic access to information on the Internet, could prove extremely valuable for educating students.

-Ally_Nash

The urgency of geo-visualization technologies?

Thursday, February 9th, 2012

Elwood offers us an introduction to the challenges of geovisualization and the integration of data. One of the major issues facing geovisualization is the sheer amount of data being generated, which is also very heterogeneous. Considering these vastly increasing amounts of data, I can’t help but be under the impression that there is a sense of urgency to the cause of more effectively “incorporat[ing] spatial knowledge into digital environments.” Perhaps this urgency comes from the picture of a group of researchers labouring over a universal ontology, as seen in last week’s readings. As the data pile up, the time required to unite all of them also increases.

But is there really a sense of urgency? As stated by Elwood, data are being created in an increasingly dynamic manner and can be used in very diverse ways. She also discusses the use of metadata, so perhaps providing this will be key to ensuring that future use of all these data is not hindered. This will likely be a difficult task, as it is hard to imagine what type of information will be needed for future ontologies. As MacEachren and Kraak posit, however, creativity as well as efficiency is spurred by these connections. Therefore, it is not necessarily urgent for us to figure this all out now, but we’re probably missing out on some interesting connections and revelations.

– jeremy

Spatial cognition and geovisualization

Thursday, February 9th, 2012

The topic of spatial cognition (and, closely related, naïve geography) is relevant to the issues discussed by both Elwood and MacEachren & Kraak. Understanding the ways humans learn geographic concepts and reason about space is required for geovisualization to “handle qualitative forms of spatial knowledge” (Elwood, 259) and for building a “human-centered approach to geovisualization” (MacEachren & Kraak). I believe developments in this field are urgently needed and have far-reaching implications not only for geovisualization but also for building ontologies. In fact, Smith and Mark also touch on the lack of research by stating “We know of no data on the ages at which young children acquire or master the basic concepts of naïve geography and the associated kinds of objects…” (10).

With a growing amount of geo-located SMS, pictures and videos, how can we process this qualitative information without grasping how its contributors comprehend their surroundings? Since users are also contributors in the Web 2.0 environment, it is inevitable that we must dedicate resources to understanding these users. For instance, how do people learn and remember directions? How do people from different cultures use landmarks, whether natural or man-made? Only by understanding how people build their relationship with geographic space can we take more initiative in the geovisualizing process and derive meaning out of spatial descriptions (near, far, …). As a side note, I imagine it would also be important to first identify what the source data was initially intended for, because the context could influence how spatial forms are perceived and described. For example, an emergency text message and a text message trying to rent out an apartment could be very different: the first is influenced by panic, so the sender might have a distorted conception of distances, whereas the second is motivated by the intention to sell, so everything might be described as “near” the apartment.
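
A crude way to see how context changes the meaning of “near” is to make the threshold itself context-dependent. A minimal Python sketch along those lines; the thresholds and coordinates are invented purely for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

# Invented thresholds: what counts as "near" shifts with the
# message's context, as in the two text-message examples above.
NEAR_KM = {"emergency": 0.2, "apartment_listing": 2.0}

def is_near(dist_km, context):
    return dist_km <= NEAR_KM[context]

d = haversine_km(45.5017, -73.5673, 45.5088, -73.5540)  # about 1.3 km
print(is_near(d, "emergency"))          # False: 1.3 km is far in a crisis
print(is_near(d, "apartment_listing"))  # True: "near" in a rental ad
```
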
-Ally_Nash

Visualisation technology in its broadest sense

Thursday, February 9th, 2012

Elwood mentions many technologies and applications but seems to focus on the geoweb and VGI. However, these are hardly the only interesting new developments in visualisation. Wiki maps, Google Maps, and other internet-based mapping tools all do the same thing: they visualise data on a traditional 2D plane. Sometimes you’ll get interactive symbology (like what KMLs are capable of). I may be reading the article the wrong way, but I don’t quite understand what the focus on VGI has to do with visualisation. Certainly, products like Google Maps allow many users to contribute to a single dataset, thus bringing up problems of semantics when applying tags, but this is hardly a new problem brought about by a new visualisation platform. These sorts of problems have been around since before participatory GIS/VGI, but have only been blown up by a much larger number of contributors.
The section on tagging and ontology is interesting, but does this affect ‘visualisation’ or analysis and querying? Perhaps the title of the article should not just be about ‘geovisualisation’ technologies. When I read the title, I assumed the article would be purely about new methods of displaying data and the effects they have on the way we think (perhaps focusing on things like dynamic zooming in products like Google Maps, or the display of attributes). The use of the word ‘technology’ can be a little limiting at times.

The ‘real’ new technologies of visualisation should be things like future 3D hologram displays (the real kind, not the stuff with smoke and lasers). These are the new forms of visualisation that, when they come to market, will have a real impact on how we choose to display data (such as how to take into account that the audience is no longer viewing from a fixed angle).

The MacEachren and Kraak article is most interesting in its crosscutting research challenges section. They make a very good point that visualisation needs to develop alongside other areas like interfaces, since the way we interface with the data is also a key part of the experience. I found this article a little more relevant, but it is still at an exploratory stage, and so gives some rather vague recommendations at times.

Final thought: while visualisation technology is intertwined with other issues of data, interfaces, etc., if we don’t just talk about the purely representational part of visualisation technologies, why are we using those two words?


-Peck

Accessibility and Geo-visualization

Thursday, February 9th, 2012

The TED talk posted by sah is very interesting, and I think it is a perfect example of the exciting developments occurring in GIS and geo-visualization. The example of Bing Maps demonstrates the ways in which different technologies (photography from Flickr and street maps) can be combined based on their geographic locations, capturing the idea of a ‘canvas for applications.’ The video, however, also highlights the challenges associated with geo-visualization, which MacEachren and Kraak discuss in their article.

One of the aspects of the article that appealed to me most was how MacEachren and Kraak pose the question of whether or not these technologies enable people to think differently about the world. Specifically, their question seeks to understand how creative thinking is impacted by these technologies. For example, one reason Google Earth revolutionized the mapping world is the creation of “slippy maps.” Has this concept of a computer-based map, which displays the world naturalistically, changed the way we see the world? I would argue that it has, and I think the Bing Maps example highlights this well. The ‘mashing-up’ of different applications enables users to make connections that were inconceivable before.

I think it is also very important to consider that geo-visualization is always a work in progress (an issue that MacEachren and Kraak’s article exemplifies well) and needs to be supported by researchers. One concern that arises from this development is the accessibility and usability of the technology it produces. Interestingly, in a discussion I had about developing an application for mapping the accessibility of Montreal for those with disabilities, many individuals found that “slippy map” applications were very difficult to use. So, while this idea has completely changed the way many people use and perceive geographic information, it has also potentially left some individuals behind, perhaps solidifying a kind of digital divide. MacEachren and Kraak delve into this problem, but it cannot be stressed enough how important it is to consider these aspects during this development.

– jeremy

35mm Photos are to Digital Photos as Paper Maps are to GIS

Thursday, February 9th, 2012

I agree with sah. I’m excited about geovisualization! It is truly amazing how maps have become a dynamic user interface! Even when I first started studying geography several years ago, maps on paper were almost obsolete. On some level I want to feel nostalgic, as I do for the era of film cameras, but ultimately GIS is far more practical. In his 1965 article titled New Tools for Planning, Britton Harris writes that “so long as the generation and spelling out of plans remain[s] an arduous and slow process, opportunities to compare alternative plans [are] extremely limited” (Harris 1965). Geovisualization and dynamic electronic databases allow us to be more creative with existing information.

The MacEachren and Kraak article seems to stress the importance of having a universal map that serves many different fields at the same time (as cyberinfrastructure inferred, this hints at the future Web 3.0, where machines do a lot of the work on their own, catering to the needs of the user without being prompted). This is where I will raise an issue. I agree that it would be nice to have one map to serve multi-disciplinary studies, but at the end of the day, a tool optimized for a specific field will always do a better, more thorough job than a universal tool. For example, the cross-training running shoe is a good shoe for many different exercises. It gives you support in many different directions and is great for the gym, but you don’t see many basketball players wearing cross-trainers. Furthermore, you would never consider wearing a soccer cleat on a gym floor. Don’t get me wrong, a cross-trainer is great, but if you want to get the most out of a shoe, you may want to try one that is sport-specific.

Gone are the days of 35mm film, quality photos and photo albums; we’re left with millions of digital self-portraits on Facebook… Quality is rare but the options are now limitless, just like the world of GIS and geovisualization.

Andrew GIS


Where is the validation?

Thursday, February 9th, 2012

My main qualm concerning geovisualisation is the insane amount of data popping up on the Internet daily, and how people are trying to make any sense of it and use it for research (in academia, for constructing political policies, generating public knowledge, etc.). Data are gaining in complexity and heterogeneity at the same time as new uses are being found for them. Kraak and MacEachren outline how geospatial data resources are being used to create visualization tools that enable understanding and create knowledge. From my understanding of the article, not many measures are being enacted to ensure the validity of the data and the subsequent knowledge it creates. But are such measures even necessary?

Particularly given the problems of semantic differences in data across users, as well as the presence of collaborative sources, data seem to have inherent problems of translatability when interfaces try to support individual differences. People view things in different ways and at varying scales, and in the realm of geovisualisation, where the social is becoming increasingly prominent, how do we account for the differences and deem what is “correct”? How can we say what is valid information and what isn’t?

I suppose the answer lies in the problem. With an increasing number of users creating data, there is also an increasing number of users checking the data. Interactivity and collaboration allow people to change data: a sort of built-in member checking. Ensuring validity is as great a responsibility as generating geospatial data in the first place.

Further thoughts: as user-generated data are checked by other users, does this imply that the data used to produce knowledge will reflect some sort of regression towards the mean as outliers are eliminated? In a social aspect, will geovisualisation just show the averages in spatial perception?
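
As a rough illustration of that worry (all numbers invented): if five users place the “same” landmark and the consensus process trims the extremes before averaging, the stored position drifts toward the mean of the crowd.

```python
# Five users' latitude estimates for one landmark; the last is an outlier.
lats = [45.5031, 45.5029, 45.5030, 45.5033, 45.5210]

raw_mean = sum(lats) / len(lats)            # dragged upward by the outlier
trimmed = sorted(lats)[1:-1]                # member checking drops extremes
trimmed_mean = sum(trimmed) / len(trimmed)  # the "average" perception wins

print(round(raw_mean, 4), round(trimmed_mean, 4))
```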

-sidewalk ballet

What About Privacy in Data?

Thursday, February 9th, 2012

Sarah Elwood posits that rapid change has taken hold of geospatial technologies over the last five years, with the “emergence of a wide array of new technologies that enable an ever-expanding range of individuals and social groups to create and disseminate maps and spatial data” (256). Elwood does an admirable job of fielding some of the pros and cons that stem from this revolution in technology. In particular, she covers the changing power relationships as new groups are empowered to create data, the possible limitations of existing spatial data models and analytical operations, and how the heterogeneity of the data might make it difficult to support across users or platforms (interoperability).

However, her most important alarm bell, I believe, comes when she writes that “the growing ubiquity of geo-enabled devices and the ‘crowd sourcing’ of spatial information supported by Google Maps fuels exponential growth in digital data, and growing availability of data about everyday phenomena that have never been available digitally, nor from so many peoples and places” (257). What happens when governments use these data to spy on citizens, or when individuals use them for the wrong purposes? The United States government clearly has no compunction about monitoring its own citizens (if you follow recent politics there). Elwood herself gives short shrift to what this might mean for the privacy of users and, indeed, of the public caught up in “everyday phenomena.” She notes that some scholars have raised the question of whether the rise of these technologies constitutes new forms of “surveillance, exclusion and erosion of privacy” (257), but quickly moves on to the exciting promise of these technologies.

In particular, Elwood appears enamored of the potential of these technologies to reveal new social and political truths (261). Yet, as we noted in our iPhone conversation in class, these technologies might be used inappropriately to track us without our knowledge. Individuals in a democratic society have an undeniable right to privacy, but how can they use these new technologies and software and still be sure that their privacy is respected and their data remain anonymous (if needed)? Should some type of system or regulation be put in place to ensure this right? Something like this has been tried in Europe, but what are the lessons? I’m not sure.

–ClimateNYC