Archive for the ‘506’ Category

Constant Monitoring and Temporal GIS

Thursday, March 22nd, 2012

Prior to reading Langran and Chrisman’s article, my understanding of temporal geographic information was limited to time lapse videos of static snapshots, which visually display change. After reading the article, however, I was able to better comprehend the importance of temporal models that enable direct comparisons of how objects are changing. Models such as these allow for a greater understanding of how change is occurring at specific times, whereas snapshots seem to merely illustrate the general notion of change.

Outdoor Addict questions how decisions are made as to what constitutes an event, and I think this is a valid concern. In abiding by data storage limitations, for example, we may deem a change to be irrelevant and discard it. However, what if this change is considered to be important at a future date? Perhaps the idea of examining snapshots is still holding me back—certain technologies that allow changes to be constantly tracked might need to be considered to a greater degree. In thinking about this, a parallel may lie in a Geography 407 class discussion, where dialogue revolved around sensors designed to continually track animals in forests. In this case, every motion that is detected is recorded, while durations of no motion are not. Can anyone else think of similar examples?
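The sensor example can be made concrete with a small sketch. This is a minimal, hypothetical illustration of event-based recording (the MotionEvent record and field names are mine, not from the readings): only detected motion produces a row, while quiet intervals consume no storage at all.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Tuple

@dataclass
class MotionEvent:
    timestamp: datetime
    location: Tuple[float, float]   # (lat, lon) of the sensor -- hypothetical schema

def record_reading(log: List[MotionEvent], motion_detected: bool,
                   location: Tuple[float, float]) -> None:
    """Append a record only when the sensor actually detects motion.

    Quiet intervals produce nothing, so the log stores events (changes)
    rather than fixed-interval snapshots of the forest.
    """
    if motion_detected:
        log.append(MotionEvent(datetime.now(timezone.utc), location))

log: List[MotionEvent] = []
record_reading(log, True, (45.5, -73.6))    # motion -> one event stored
record_reading(log, False, (45.5, -73.6))   # no motion -> nothing stored
```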

From the motion detection example, yet another concern arises—there will inevitably be information that sensors cannot detect. In addition, relating to the lecture on scale, the issue is not merely about deciding what objects to include, as previously mentioned, but it is also about determining what level of detail is appropriate. In other words, there is such a thing as too much information. The easy way out would be to yet again rely on the "future technological advances will render this concern irrelevant" argument, but due to the inescapability of uncertainty, I posit that context and judgement are two of the most important considerations.

Lastly, in answering ClimateNYC's question about the distinction between real time and database time with regard to streaming data, Madskiier_JWong states that information may be incorrect or incomplete and in need of updating at a future date. I would like to add that technical issues often arise when dealing with streaming data. For example, glitches in communication systems or backlogs of data can result in differences between real time and database time. This type of information is valuable, however, as it enables insight into how systems can be better designed.
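One way to make the real-time/database-time gap tangible is to store both timestamps on every record, as in the minimal sketch below. The Observation class and field names are hypothetical, not from the readings; the point is simply that the lag between the two times is itself useful diagnostic evidence about glitches and backlogs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Observation:
    value: float
    world_time: datetime                    # when the event actually happened ("real time")
    db_time: datetime = field(              # when the record reached the database
        default_factory=lambda: datetime.now(timezone.utc))

def ingestion_lag(obs: Observation) -> timedelta:
    """How far database time trails real-world time for this record.

    A consistently large lag points at a communication glitch or a
    backlog in the streaming pipeline -- exactly the kind of evidence
    that helps design a better system.
    """
    return obs.db_time - obs.world_time
```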

– jeremy

HCI, GIS and the Community

Thursday, March 22nd, 2012

The user-centered design proposed by Haklay and Tobon (2003) is very close to my own topic of critical GIS in that both topics recognize humans as an important factor in how GIS will be used and valued. Thus, historical, cultural and social aspects play a crucial part in the adoption and success of GIS. I especially appreciate how the article highlighted the fact that the improvement of GIS requires an "iterative process" between the tool and users (society). To maximize the practical usefulness of GIS, researchers must keep in mind that a "good" computer program cannot be judged solely on the number or complexity of its functions, but rather on how "usable, safe, and functional" (579) the application is for users.

 

It makes logical sense that HCI research picked up momentum in the 1980s when personal computers became more affordable. However, I wonder what the differences are between humans-computer interaction and human-computer interaction, or in other words, between how groups interact with technology and how individuals do. Perhaps in decisions that involve a group of people (as often occurs in PPGIS), users tend to listen to the one person with the most "expertise" and disregard their own knowledge of the application. The workshops described in the paper involved a user, a facilitator, and a "chauffeur". I wonder if people would have interacted differently with the application if they were allowed "free-play" on their own after a short demo of the basic tasks.

 

Furthermore, I think we should carefully consider what tradeoffs are involved between usability and functionality. By making an application more intuitive and easier to use, are we losing important functions that should be included despite their complexity? Ultimately, this judgment depends on the set of tasks intended to be carried out by the application. However, these are not always easy to predict. For the purpose of planning, shortest path analysis may be extremely insightful, although the results may be difficult to interpret given all the assumptions that go into the analysis. Moreover, uncertainty will definitely be another tricky area to convey. Therefore, one challenge is to figure out which types of tools should be included in a GIS for naïve users so that the system is both not limiting and not overwhelming.

 

Finally, the article made me think about the potential backlash of some HCI research. For example, I can imagine that disadvantaged communities may not want the results from the workshops to be published due to, perhaps, the misguided belief that making the system easier to use is equivalent to "dumbing it down" and the negative social stigma that follows. Therefore, the decision to include an opt-in or an opt-out option in HCI research is a sensitive one, since this option will have dramatic effects on the number of participants. Personally, I favor having the initial settings automatically include users in the study because, although most people probably want to help and improve a system that they use, the hassle of opting in is enough to deter most people from becoming participants. However, due to the privacy issues mentioned by the authors, the application should explicitly warn users of this option before they can start using the application.

Ally_Nash

“Take another picture! They added a fire hydrant!” and The Need to Go Digital

Thursday, March 22nd, 2012

Temporal GIS absolutely fascinated me once I found out what it was through this paper. The idea that spatial principles can be applied to time interests me as it signals to me that my spatial information knowledge has an additional use. The descriptions of each “image of cartographic time” were extremely helpful in visualizing precisely what the authors were trying to explain.

However, for each method of thinking about geographic temporality, events or mutations are needed. Langran and Chrisman describe a mutation as "an event that causes a new map state" and "a point that terminates [a] condition and begins the next". In theory this makes sense. In the real world, what qualifies as a mutation or event? Take for instance a map of a suburb's development. The first version may only have a few houses. The next might have new houses, new streets and a new school, and the following one might show a new fire hydrant as the only change. At what point in time does the map need to be updated? What event is considered significant enough to warrant making an update to the database? Additionally, who decides this? Perhaps it might be similar to the argument on ontologies, as it could be a subject-specific database where particular changes are more closely followed than others. A fire department may be far more interested in updates concerning each fire hydrant than a family, which may be more concerned about where the nearest park is located. Furthermore, is technology sufficiently advanced to determine this on its own once parameters are set, or is this a manual job? (For example, could a satellite constantly taking pictures of the suburb be programmed to recognize when 5 new houses are completed and automatically update the database to which it is connected?)
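The automated-update question can at least be sketched in code. The snippet below is a hypothetical illustration, not anything from Langran and Chrisman: it assumes some upstream image-analysis step has already extracted feature IDs (say, house footprints) from two successive satellite images, and it only automates the decision of whether the accumulated change crosses a human-chosen significance threshold.

```python
from typing import Set

def should_commit_new_state(previous: Set[str], current: Set[str],
                            threshold: int = 5) -> bool:
    """Treat the change as a 'mutation' only once it is large enough.

    The threshold (5 new or removed features here) is exactly the
    parameter somebody still has to decide on; the code merely applies it.
    """
    added = current - previous
    removed = previous - current
    return len(added) + len(removed) >= threshold

previous_houses = {"h1", "h2", "h3"}
current_houses = {"h1", "h2", "h3", "h4", "h5", "h6", "h7", "h8"}
print(should_commit_new_state(previous_houses, current_houses))  # True: 5 new houses
```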

On a slightly different note, I would like to emphasize the importance of going digital for temporal GIS. The authors only point out that their work focuses on "digital methods of storing and manipulating sequent states of geographic information" but neglect to explain why this is so important. Much like geolibraries, the concepts and theory to operate and organize them may have been present many years ago (this paper dates to 1988 while geolibraries date to 1998) but the technology did not exist to bring them to the digital world and make them practical, useful tools and fields of study. For the many reasons discussed for promoting digital libraries, in addition to the nature of spatiotemporal information, digital is the only way to move forward.

-Outdoor Addict

Time and GIS

Wednesday, March 21st, 2012

We’ve heard how cyberinfrastructure handles temporal and spatial data separately, but must be developed in a manner that allows users/researchers to utilize both sets of variables when interacting with a GISystem. Now Gail Langran and Nicholas Chrisman provide an interesting overview of the topological similarities between time and space, and how best to design a GIS that can accurately display temporal elements.

I find the authors’ notions of time and its important elements to be simple in a way that helps to lend credence to their subject. In particular, they characterize cartographic time as "punctuated by 'events,' or changes" (4). Furthermore, they do a nice job contrasting GIS algorithms based on questions concerning space (what are its neighbors? what are its boundaries? what encloses it? what does it enclose?) with the similar questions one might ask for time (what was the previous state or version? what has changed? what is the periodicity of change? what trends are evident?) (7). Such examples help to define this paper not just as a discussion of temporal data, but of temporal data tied closely to its application in geographic space. Such an added dimension can be incredibly important when we begin to think about all of the geographic phenomena that occur over differing timelines. It's also an element we should try to remember more in our own research efforts.
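Those temporal questions translate directly into queries over a versioned record. The sketch below is my own minimal illustration (the Version structure and helper names are invented, not from the paper): given a time-sorted history of attribute states, it answers "what was the previous state?" and "what has changed?".

```python
from datetime import datetime
from typing import Any, Dict, List, Tuple

Version = Tuple[datetime, Dict[str, Any]]   # (valid-from time, attribute state)

def previous_state(history: List[Version], t: datetime) -> Dict[str, Any]:
    """'What was the previous state or version?' -- last state valid before t.
    Assumes `history` is sorted by time."""
    earlier = [state for when, state in history if when < t]
    return earlier[-1] if earlier else {}

def what_changed(old: Dict[str, Any], new: Dict[str, Any]) -> Dict[str, Any]:
    """'What has changed?' -- attributes whose values differ between two states."""
    return {k: (old.get(k), new.get(k))
            for k in set(old) | set(new)
            if old.get(k) != new.get(k)}

history: List[Version] = [
    (datetime(2010, 1, 1), {"landuse": "field"}),
    (datetime(2011, 6, 1), {"landuse": "subdivision", "houses": 3}),
]
print(what_changed(history[0][1], history[1][1]))  # both attributes changed
```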

I do wonder about the distinction the authors draw between real world time and database time. Since many GIS databases are headed toward real time, streaming data – as was pointed out in previous lectures – why make this distinction? Perhaps I’m not technically inclined enough to understand the importance of the difference in programming or maybe it’s just a matter of how the system might store information. Anyone have thoughts on why real time data can’t be used in a manner that equates it to database time?
–ClimateNYC

Human-Computer Interfaces and Identifying User Groups

Wednesday, March 21st, 2012

Haklay and Tobon stress the need to design both software and hardware that is most convenient for an identified user group and their goals. Cases such as Braille displays for computers are clear examples of a positive, improved human-computer interface. However, when applied to a complex analytical field such as GIS, HCI studies run into the issue of defining the user group. There is no immediate common attribute shared amongst all GIS users, unlike the condition of being blind shared by Braille users.

The authors are instructive when they indicate that identified difficulties with GIS are more human-based than technology-based. The challenge with users of GIS is that experience with the software varies wildly, and some may be unaware that they are a part of a GIS analysis (as a source of information or doing it themselves). It is tempting to simplify displays to extend the potential audience of GIS, and we have seen this in many GeoWeb 2.0 apps and platforms such as Geocommons. Geocommons' site boasts that users can "Easily create rich interactive visualizations to solve problems without any experience using traditional mapping tools". Web 3.0 continues in this direction by offering semantic searches and increasingly mobile applications and devices. In the drive to eliminate the distance between computers and humans, however, it becomes easier for others to manipulate and use naïve users as data sources. The increased digitization of our world has sparked debates about privacy and perceived privacy issues.

A question HCI studies could ask then is how to best segment GIS users into groups. Can the semantic intelligence of Web 3.0 be used to map common thought processes/links to better identify common goals? This understanding can be used to direct intermediate GIS users to resources that explain the basics of analytic functions, while underweighting papers that describe advanced processes (I frequently ran into advanced “Petri Net” algorithms when searching for the general history of temporal GIS). Are there alternatives to segmenting users by “skill”, which is a vague measure and largely prescriptive?  

-Madskiier_JWong

HCI, Cognition, Systems and Designing Better GIS

Wednesday, March 21st, 2012

Mordechai Haklay and Carolina Tobon provide an interesting overview of the use of GIS by non-experts, with a good focus on how public participation in GIS continues to shape the actual GIS systems in a manner that makes them more accessible and easy to use. In particular, I find their section on the workshops they conducted (582-588) to evaluate the usability of a system pretty interesting, especially the authors' work testing the London Borough of Wandsworth's new platform. The findings on the need to integrate aerial photos for less sophisticated map users, and the need for the system to give feedback to users to confirm they had completed a task, struck me as simple, intuitive adjustments many systems leave out. Of course, something as simple as feedback to confirm a task may seem like an obvious part to be included in any system, but I can think of a great many online programs and forms which fail to do this and often leave me wondering if my work/response has been saved.

One of the more interesting aspects of the topic of human-computer interaction, for me, when thinking about it in terms of GIS, is the way it sits at the intersection of geospatial cognition and geospatial cyberinfrastructure. Perhaps I am biased by my own interests, but this topic pulls these two previous ideas from our class together nicely, as it relies on both to make many of its most salient points. However, one question I had after reading this paper and discussing cognition in class remains: how do we test geospatial cognition in such a manner that we can apply our findings to better systems design? Often, the field of geospatial cognition seems more obsessed with exploring the ways in which humans understand space and engage in way-finding behavior. I'd be interested in seeing articles/research that really dig into applying psychological findings to systems design in a manner that goes beyond the testing these authors have done. I should say they do a nice job, though, of summarizing the theory of how cognitive processes like "issues such as perception, attention, memory, learning and problem solving and [] can influence computer interface and design" (569). Yet I don't see these concepts applied directly in their testing – perhaps it's just not covered extensively.

I think it’s only in this way that we can truly bridge the gap between humans and computers. Or is it, humans and networks of computers? Or humans and the cloud? Or humans and the manner in which computers visualize data, represent scale and provide information about the levels of uncertainty? As one might conjecture, the topic of human/computer interaction may be limitless depending on what angle we approach it from.
–ClimateNYC

Coarse grained data issues in low resource settings

Friday, March 16th, 2012

Despite Goodchild et al.'s (1998) article's technical components, the article did make me think of uncertainty regarding boundaries and coarse grained satellite imagery. Exploring low resource settings on Google Earth is one such example. Although an incomplete geolibrary, I consider Google Earth to be effective in its user friendly interface and features (layers and photographs), and of course, its ubiquity. It's a start. With this in mind, I remember 'flying' over towns in Colombia on Google Earth and the terrible, terrible satellite imagery that was available. (The low quality imagery remains unchanged since the last time I checked it half a year ago.) One of the towns/districts is Puerto Gaitan. How do we account for the lack of resources given to collecting fine grained or even medium grained visualizations?

According to Goodchild et al., alternative methods for displaying fuzzy regions must be applied where cartographic techniques are not enough. “A dashed region boundary would be easy to draw, but it would not communicate the amount of positional uncertainty or anything about the form of the z(X) surface” (208). What do we do then, when the data cannot even be analyzed because it is too coarse? For low resource settings, we are just going back to where we started. No financial incentives to improve data (from coarse to fine) = continuation of coarse grained data = poor visualization = cannot be utilized in studies = no advancements in research are made = back to the start, no financial incentives to improve the quality of data. How do we break this cycle?

-henry miller

Footprints and priorities

Friday, March 16th, 2012

Goodchild's (1998) 'Geolibrary' chapter is a great introduction to the geolibrary field and the challenges it poses. However, it should be noted that it was published 14 years ago, which may mean that some of the questions raised have already been answered, while others remain problematic and new ones can be anticipated. In particular, geographical footprints have become more complex in search queries. "But the current generation of search engines, exemplified by Alta Vista or Yahoo, are limited to the detection and indexing of key words in text. By offering a new paradigm for search, based on geographic location, the geolibrary might provide a powerful new way of retrieving information" (2). Now that we have Google as the most used search engine, I agree with Jeremy regarding his reference to Google Maps and searches related to businesses. I believe it is a type of geolibrary, although the economic and legal issues that Goodchild poses come to mind (8). Because Google is a business, the payment for its maintenance and the legal rights it holds become convoluted and at times questionable to users. Would open-source map applications such as OpenStreetMap be more appropriate to manage financial and legal issues with fewer controversies?

Geolibrary footprints continue to be interesting due to their ability to enhance or hinder the number of sources a user is exposed to. The closer a user is to the specific location they are researching and whose geolibrary database they want to explore extensively, the more information that individual will find. This can be problematic for remote researchers who are constrained to a geographical location at a great distance from their research study area. It can have serious implications for the research conducted, as the way the research unfolds can drastically alter based on the number of sources available. In a sense, it stifles the global aspect of geolibraries, as a plethora of sources about a location is still only available in the proximity of the location in question. Since the questions a geolibrary can answer revolve around area, geographical footprints can play a significant role in diminishing the uneven distribution of place-related information in digital form.

-henry miller

Geolibrary implementation

Friday, March 16th, 2012

In the chapter 5 section by Goodchild, he goes over how a geolibrary would work conceptually. However, I think there are some problems already evident in his description. I don't think that having server catalogues, where a server is in possession of one (or multiple) specialised collections, is necessarily a good idea. It doesn't sound very efficient to me. The thing about data is that there isn't equal demand for it everywhere. Some data is demanded more than other data (more people search for the weather at a given location than, say, its demographic composition). Therefore, if you restrict servers to having only a certain kind of specialised content, you would not be optimising your server loads. Instead, you'd get a bunch of servers with very low traffic, which isn't cost effective, and maybe other servers for which traffic is so high that you need to expand them. I just got the feeling that the suggestion was to model the hardware layout after the layout of the data, but this is just going to be inefficient. It means more connections have to be made, maybe more servers (and the extra costs associated with that), and probably slow performance.
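A toy simulation makes the load-skew argument concrete. The topic names and demand weights below are entirely made up for illustration; the point is only that one-server-per-specialised-collection inherits whatever skew exists in demand, whereas spreading the same requests over generic servers does not.

```python
import random
from collections import Counter

# Hypothetical demand: weather queries dominate, demographics are rarely requested.
TOPIC_WEIGHTS = {"weather": 0.70, "imagery": 0.25, "demographics": 0.05}

def simulate_requests(n: int = 10_000) -> Counter:
    topics = list(TOPIC_WEIGHTS)
    weights = list(TOPIC_WEIGHTS.values())
    return Counter(random.choices(topics, weights=weights, k=n))

requests = simulate_requests()

# One server per specialised collection: load simply mirrors the skewed demand.
print("per-collection load:", dict(requests))

# The same requests spread round-robin over three generic servers:
# each handles roughly n/3, regardless of which topics are popular.
total = sum(requests.values())
print("per-generic-server load:", {f"server{i}": total // 3 for i in range(3)})
```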

The kind of system being described doesn't sound very future-proofed. It's like our ontology discussion. What happens when a new category is created? What happens when a sub-category becomes more important or separates from its original category?

 

In the article on fuzzy spatial queries, it is easy to say that we need to form a method of querying that can incorporate both well-defined and badly defined regions, but how could this ever be compatible with the chapter 5 description of how the library would actually work? I think that ill-defined regions are just that, and the best we can do is just use a search engine. If we try to structure it (as we must do if we want specialised servers), then we run into all sorts of problems I think you all have an idea about.

Finally, I’m getting the feeling that we are meant to start our queries in a geolibrary with a location (well defined or not). It seems that geolibraries would be tailored for a certain flow of querying (location > topic > sub-topic > person etc.). What if we don’t want to start our query with a location?

 

He’s talking about AltaVista….AltaVista people. Remember them? No? That’s because they’re dead.

-Peck

A Tangent from Fuzzy Footprints…

Thursday, March 15th, 2012

Goodchild’s (somewhat uncoordinated) introduction to Fuzzy Footprints got me thinking, once again, back to ontologies–as has been mentioned by many others posting not only on this topic, but on many topics we have covered in class this semester.  So it brought me back to another question asked in class, again with regards to multiple topics: how important is geo-education?  And so here I would argue: VERY important.

Uncertainty can largely be down to our ability (or lack thereof) to communicate, and to understand what has been communicated by others.  Boundaries, locations, and our ability to define them are essential to geolibraries.  If we cannot come to general understandings, there will constantly be error.  Before in class I was not convinced that education (about scale particularly, but about various geographic phenomena) should be made explicit (outside of a geography class).  Now, I believe otherwise–how could I not, after repeating topic after topic that ontologies (and thus understanding) are important?

To create a global database of georeferenced information is a magnificent endeavour.  To create a global database of georeferenced information that can be efficiently searched by any member of the global community is a whole new ballgame, and must necessarily involve a renewed goal of educating the public and of coming to shared understandings (both on areas of agreement and disagreement).

sah

Geopedia?

Thursday, March 15th, 2012

Imagine the first decade of the 2000s, the Internet well-established, and the endless possibilities beginning to emerge in full.  The possibilities for data sharing are immense.  And then, on January 15th, 2001, it manifested in what is today recognized as an extraordinary project: Wikipedia.  Today we know Wikipedia as an oft-reliable source, and while further references are always ideal, it is the perfect starting point for mining the immense amounts of data the internet holds, as well as exploiting the research already done by many others in the global commons.

This is what came to mind when reading Goodchild’s introduction to Geolibraries.  While I first thought of Google Earth, and the basemap it provides upon which to place georeferenced data, the further I read into his overview, the more I thought of Wikipedia, and how this platform seems to be a perfect way to bring the idea of geolibraries to reality.  To further elucidate, I will go through a couple of Goodchild’s main questions at the end of his article.

First, he asks about intellectual property rights.  Obviously, geolibraries will contain information that is more than just "fact" (in as much as things on Wikipedia are fact), such as musical pieces, building plans, etc. that may not in fact be property of the global commons.  Perhaps copyright as applied on other internet sharing sites such as Flickr could be a good start–is something a part of the global commons, is it licensed for creative use, or is it 100% copyrighted?

Goodchild also asks about the "infrastructure" of a geolibrary, as well as the economic feasibility.  This too could be modelled on Wikipedia–a veritable container of a plethora of information, pictures, sound clips, and more.  Wikipedia is run by the Wikimedia Foundation, a not-for-profit charitable organization–perhaps this is the route geolibraries must take: an endeavour to be undertaken by those passionate about georeferenced information?

Finally, I would like to address the question of metadata.  Goodchild asks how much metadata we need and how it should be catalogued, and elsewhere in the article he speaks briefly again to a user's own cognition.  I believe that with a "Global Commons" type of platform, like Wikipedia, there will be a lot of metadata that can be edited continuously by multiple people and perspectives, in the hope of finding neutral ground.

Obviously there are a lot of ways Wikipedia isn’t directly amenable to becoming a Geolibrary, but this is, in my opinion, an interesting model to start from–going from paper encyclopedias in physical libraries to online catalogues of information.

sah

Where are all the geolibraries?

Thursday, March 15th, 2012

Chapter five by Goodchild provides a good overview of geolibraries, their importance and the components that go into constructing them. He highlights the differences between geolibraries and traditional physical libraries: namely, that geolibraries would be better suited to dealing with multimedia content, hold more local information and avoid the issue of duplication. I think that today the separation between the digital and the traditional library has become much less distinct. Online catalogues allow users to search many libraries at the same time and thus address the problem of duplication to some degree. As more books and other materials are digitized, a user no longer needs to go to the library to get the material he/she needs. I often download newspapers, articles, and magazines from the McGill library. Also, we all frequently download maps and GIS data off the Internet, especially since many cities have established open data portals. Thus, I argue, the key feature that will make geolibraries special is not how comparable they are to physical libraries but how they allow users to discover various topics about a location of choice.

Further, I wonder why geolibraries have not become very popular since 1998, because this is the first time I have heard about them. Maybe it has to do with the 4th research question Goodchild asks: "What institutional structures would be needed by a geolibrary? What organizations might take a lead in its development?" I would also like to add: what kind of personnel training and organizational shift in the way things get done are required of current governmental structures to enable the adoption of geolibraries? There is definitely inertia within public office structures that is often difficult to overcome when introducing new technology. Finally, I would like to consider who should be responsible for the data and the limitations on the kind of data offered. I remember Peter telling me that one of the reasons governments are hesitant to make data public is that, if the data contained a mistake about an area, who should be held responsible for the damages? The one who collected the data? The one who entered the metadata? The geolibrary for providing bad data? Also, what kind of limitations should be set on the kind of data downloadable through a geolibrary? For example, restrictions should exist on data that are highly political, such as health data or high-resolution environmental data.

Ally_Nash

Soft Boundaries, Scale and Geolibraries

Thursday, March 15th, 2012

The article by Goodchild et al. (1998) mainly dealt with finding a way to figure out to what degree a footprint conceived by the user matches one that exists in the geolibrary. The difficulty is how to include ill-defined areas in the gazetteer, since their boundaries are not precise yet they hold significance in people's lives. The authors sum it up nicely by declaring that "effective digital libraries will need to decouple these 2 issues of official recognition and ability to search, by making it possible for users to construct queries for both ill-defined and well defined regions, and for librarians to build catalog entries for data sets about ill-defined regions" (207). I agree with ClimateNYC. This was the exact problem for researchers building landscape ontologies and displaying features that have "gradual" boundaries such as towns, beaches, forests and mountains. Field representations seem a viable option. However, if a neighborhood, for example, has a range of "soft" boundaries, I would argue in favor of having one of the more inclusive ones (so that a point considered only 30% likely to be part of Area A will also be included in the query) taken into consideration by the gazetteer, thus giving the user the opportunity to filter through the data himself.
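As a rough illustration of that inclusive-boundary idea, here is a minimal sketch (the membership values and cutoff are invented for the example): each point carries a degree of membership in the ill-defined region, the gazetteer answers the query with a generous cutoff, and the user can then filter the results further.

```python
from typing import Dict, List

# Hypothetical membership surface: degree to which each data point
# belongs to the ill-defined "Area A" (0 = definitely out, 1 = definitely in).
membership: Dict[str, float] = {"pt1": 0.95, "pt2": 0.30, "pt3": 0.05}

def fuzzy_query(membership: Dict[str, float], cutoff: float = 0.3) -> List[str]:
    """Return every point whose membership meets the (inclusive) cutoff.

    A low cutoff casts a wide net, as argued above, and leaves the
    final filtering decision to the user rather than the gazetteer.
    """
    return sorted(pt for pt, m in membership.items() if m >= cutoff)

print(fuzzy_query(membership))        # ['pt1', 'pt2']
print(fuzzy_query(membership, 0.5))   # ['pt1'] -- a stricter, user-chosen filter
```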
The hierarchical nature of space is also an interesting topic raised by the authors. Should a search for Quebec also return datasets about Montreal? In addition to listing all well- and ill-defined places, it might also be favorable to separate the datasets into relevant scales. A user querying Quebec (or even Eastern Canada) is most likely looking for datasets at smaller (cartographic) scales than someone who is querying Montreal. For instance, a search for Eastern Canada in the ADL brought me directly to Fredericton, when I would be expecting the whole area between Quebec and Newfoundland. Returning data at the wrong scale would be very inappropriate.
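A minimal sketch of what a hierarchy- and scale-aware gazetteer lookup might do follows; the place entries, parent links and scale denominators are illustrative values I made up, not anything taken from the ADL.

```python
# Toy gazetteer: each place stores its parent and a representative map-scale
# denominator (larger denominator = smaller cartographic scale).
GAZETTEER = {
    "Eastern Canada": {"parent": "Canada",         "scale_denom": 10_000_000},
    "Quebec":         {"parent": "Eastern Canada", "scale_denom": 5_000_000},
    "Montreal":       {"parent": "Quebec",         "scale_denom": 50_000},
}

def children(place: str) -> list:
    """Places nested directly inside `place` (so a Quebec query *could* surface Montreal)."""
    return [p for p, info in GAZETTEER.items() if info["parent"] == place]

def scale_appropriate(place: str, min_denom: int) -> bool:
    """Keep only datasets whose scale is not too detailed for the query extent."""
    return GAZETTEER[place]["scale_denom"] >= min_denom

print(children("Quebec"))                          # ['Montreal']
print(scale_appropriate("Montreal", 1_000_000))    # False: too detailed for a Quebec-wide query
```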

Ally_Nash

Geo-libraries, skyscrapers, and regional bias

Thursday, March 15th, 2012

I find the topic of geo-libraries fascinating, particularly because of the incredible potential I think they have in conveying ideas. It is a very powerful way of organizing information that allows for visual comparisons to be made. For example, related information that may be of use to a user may be more easily suggested or discovered, as information that is geographically related can be displayed. The importance of geo-libraries in practice is also backed up by the fact that the majority of map library users rank locational characteristics as the primary search key.

Since I’m a bit of a building/architecture nerd, I think the following is an interesting example of a geo-library:

http://skyscraperpage.com/cities/maps/

Perhaps this is a very simplified version of a geo-library, but it shows how an individual can search for information on proposed, under-construction, and/or completed buildings by selecting a geographic region. As one can see from the map, very useful visual information can be gained through pattern recognition. Clusters of under-construction buildings, for example, are easy to find.

While I think that this is an extremely useful tool, my example is one that may be much more straightforward than those discussed in Goodchild’s article. Most of the cities listed, for example, appear to have well-defined, uncontested boundaries (or perhaps they just appear that way?). Further, the uncertainty present in the information available for each building (location, type, developer, floor count) also seems to be relatively low.

Even though this example may be simplistic, I think that it points to the potential for geo-libraries to have a regional bias. By examining the list of cities available to search, for example, it is clear that this architecture website has a North American focus. However, perhaps technology will enable users to overcome this bias. An example provided by Goodchild is the use of geo-library ‘crawlers,’ which search the web looking for key terms based on geography. Despite their potential, these technologies also bring with them a variety of other problems. For instance, as mentioned by ClimateNYC, the issue of ontologies arises, where incorporating varying opinions and definitions proves to be troublesome.

Perhaps someone can answer this in their blog post, but I am uncertain as to the various forms geo-libraries can take. For example, since users can search for related businesses within a Google Maps map frame, would this be considered a type of geo-library?

– jeremy

 

Geo-libraries and tourism

Thursday, March 15th, 2012

In a geography of Asia class I took last semester, possible ways for individuals to engage in responsible tourism were discussed. Most often, much research was required in order to educate oneself on local conditions or current events. However, sifting through the biased perspectives of local media or governments can be a challenging activity, and this understanding is especially important when trying to grasp how tourism impacts marginalized communities. I think that a geo-library could facilitate learning in this case, where tourists can query and access information depending on what region they are in. The option of seeking out academic articles can enable people to gain scholarly perspectives that may more accurately represent local conditions.

As the article by Goodchild et al. mentions, however, an issue arises when footprints or search terms are ill-defined. With regard to tourism, it would seem that this problem would be magnified in remote areas. For example, as we know from our discussion on ontologies, defining objects is very challenging, especially when multiple languages are being considered. Further, less information is available for remote areas in general, and so while geo-libraries may be extremely enabling in many respects, they may also be limiting for areas not well represented.

Perhaps an interesting comparison is likening a geo-library to a mental map. While the regions that I best understand will likely have the most detail, my map will not include important elements belonging to another individual's mental map. As Goodchild et al. posit, objects with ill-defined terms that are not well represented in a geo-library can also be of great significance to the lives of individuals at a local level. In other words, while a place name or building may not appear in a geo-library query (or my mental map), it may still be very relevant to many people. Determining how to incorporate under-represented features will be a challenging, but crucial, issue in the development of geo-libraries.

– jeremy

Blickr: Flickr for books!

Thursday, March 15th, 2012

Goodchild touches on a few interesting thoughts in the second article. I feel, however, that this article is outdated. He refers to being able to access information across a network as something almost magical. He says something along the lines of: a georeferenced library will have the ability to serve people across the globe using digital copies! On a similar note, Goodchild also mentions the idea of not needing to duplicate material—an idea that we seem to be unable to escape from in each of our lectures.

Something that I found quite relevant, however, is the sorting and cataloguing of photographs. The geolibrary offers a much more concrete system of organization. I really love this idea. Prior to the concept of a geolibrary, I can only assume that if photographs were not assembled in a portfolio, a compilation on a specific topic or a published book, it would be hard to find them. Even within a publication, it seems like a tedious task to track down a photo that is most probably untitled and not georeferenced. I think that Flickr currently does a decent job at this task, but this database is limited to photographs only. As a user you can label your photos, add descriptions and even add georeferenced data to help other users search for photos, as they might search for academic articles. The collection of photos, by user, visualized on a map is stimulating. Not quite as useful as I proposed in my other blog post, but still pretty cool.

I found this article somewhat repetitive, and perhaps unnecessary, but it did address some fundamental reasons for and problems with geolibraries. I'm eager to see how the development of geolibraries evolves. It is perhaps one of my favourite concepts with respect to GIScience.

Andrew

Geolibraries simplifying future academic research

Thursday, March 15th, 2012

Goodchild suggests in Fuzzy Spatial Queries in Digital Spatial Data Libraries that lat/long coordinate systems should only be used for areas that lack place names and named features. (Firstly, I would argue that there are very few nomads academically publishing works from the Sahara.) More importantly, the issue of ontologies and standardization of labels arises once again. An article written in, let's say, Japan about Italy will have a completely different label than one written in Canada about Italy. An English-speaking author would reference his or her work as a topic in Italy, while a Japanese academic would write that their subject occurs in イタリア. If Goodchild is planning on writing a program, or interface, I would suggest that he use a coordinate system, and have his program group and aggregate the location of topics or footprints based on these coordinates. How does Goodchild plan to deal with international, multi-lingual academic publications?

Goodchild also poses the idea of searching by area. He suggests that we should be able to search by more than just topic and author; we should be able to search by place as well. I think that the user should be able to search by region of interest (of the topic), region of origin, or both. If both origin and subject are georeferenced, I see the possibility of creating something more dynamic than this simple query. What if, in a Google Earth-like interface, we could also offer a visualized network (as we can visualize the flight paths of commercial airplanes) of who the author has cited in a specific paper, and, with another search criterion, visualize what other articles have cited the article in return? Instead of rifling through Bibliographies and Works Cited pages, one (or two) simple click(s) could potentially visualize all related articles on a map. Research simplified!
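For what it's worth, the citation-network half of that idea is easy to prototype once each paper carries a footprint. The sketch below is purely hypothetical (made-up paper IDs and coordinates) and simply uses the networkx graph library to hold citations that a map interface could then draw as arcs.

```python
import networkx as nx  # one possible choice of graph library

# Each (hypothetical) paper carries a study-area footprint (lon, lat) and its citations.
papers = {
    "paperA": {"footprint": (12.5, 41.9), "cites": ["paperB"]},
    "paperB": {"footprint": (135.8, 35.0), "cites": []},
}

graph = nx.DiGraph()
for pid, info in papers.items():
    graph.add_node(pid, footprint=info["footprint"])
    for cited in info["cites"]:
        graph.add_edge(pid, cited)  # an arc the map interface could draw between footprints

# "One (or two) simple clicks": everything paperA cites, and everything citing paperA.
print(list(graph.successors("paperA")))    # ['paperB']
print(list(graph.predecessors("paperA")))  # []
```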

Andrew

Fuzzy Spatial Data Queries and What It Means for Government

Thursday, March 15th, 2012

To be honest, I'd been at a loss for what to say differently about the second Goodchild et al. article that I didn't already say in regard to his book chapter. Then I began to think about Cyberinfrastructure's post and his ideas about how uncertainty in spatial data queries can be determined by different types of scale (query scale, segmentation scale, data analysis scale, visualization scale) and how this problem can change with different levels of scale and differing levels of uncertainty. Yet boiling this article down just to these abstract concepts didn't help me in thinking about where this problem really matters – a matter Goodchild is concerned with when he talks about users of these data libraries.

So, I turned to YouTube. Don’t worry about watching the whole video.

As this video shows, users such as the government utilize geospatial data libraries quite frequently with a whole plethora of new uses. This form of uncertainty can really hinder efforts to modernize government and provide new services (impacting users like you and me).

An example from the film: when a dispatcher uses "On-Demand Pick-up" to dispatch a driver to someone who needs a ride, they had better be sure their computer is picking up the same neighborhood the caller is requesting from. If not, they could be sending a driver from too far away to pick the person up. But how does this dispatcher get the caller to define a concrete place rather than an abstract, vernacular-defined place name? It may seem just a simple question of language and communication skills. Perhaps it is.

But take another example from the film, where city administrators are able to provide real-time information to bus riders on the location of buses. How do they know what scale to provide this information to bus riders at? What if the user requires two bus lines to get where they are going? What happens if this data isn’t provided at a large enough scale to understand the placement of buses? A moot point, perhaps, as the bus will come when it comes. But certainly an important question for the people applying these GIS systems which rely on data libraries about the geographic areas where they operate. This becomes even more important when you think about technology such as LIDAR that operates at even larger scales and the methods used to define such scales of operation.
–ClimateNYC

Gazetteer Issues and Interoperability

Thursday, March 15th, 2012

I’ve been thinking about “cyberinfrastructure” and “madskiier’s” posts discussing the difficulties of incorporating an appropriate “gazetteer” function with geolibraries and how this function might need to change rapidly online. However, I think we are also facing a much greater challenge that harkens back to our lecture on ontologies. A huge question that I have about this idea of a geolibrary stems from the various definitions different cultures might have for differing types of geographic features such as mountains. How we define such features and their boundaries will be an essential question going forward for any researchers looking into how to create a comprehensive geolibrary that can cross cultural, political and physical boundaries.

Michael F. Goodchild hints at this in Chapter 5 when he poses a number of possible research questions. In particular, I'm interested in his questions about how much and what kinds of metadata are needed to support a geolibrary (Question 5, Page 8) and what the cognitive problems associated with using geolibraries are (Question 7, Page 8). One of the keys to making a geolibrary useful and operable across the boundaries I mention above may be figuring out how to set standards for the data, and supplying lots of useful information about the data. Metadata could serve this purpose, but, as Goodchild notes, the question may be more one of what system we use to organize this data so that it maintains its usefulness and interoperability. Here, of course, you get deeper down the rabbit hole and have to begin thinking about who is going to host the geolibrary, what kind of infrastructure it requires, and, then, what kinds of systems it can support and where the data will come from.

This seems like a much broader question than just how we search by place names or incorporate functionalities and parameters into such searches that can help diverse sets of users. Rather, I think we are beginning to ask fundamental questions of how we define different parts of a geolibrary's platform, its data types and the ways in which we interact with those data types. Questions of ontology and organization.  Interestingly, there is a PhD student at Xavier University who has been thinking about these questions in terms of digital libraries too – and he writes that "[Digital libraries] also pose technical and organizational interoperability challenges that must be resolved." Find more from him here.

–ClimateNYC

The Alexandria Digital Library and Geolibraries

Wednesday, March 14th, 2012

Cyberinfrastructure's post on the design of gazetteers sparked an investigative light. Taking a test run on the Alexandria Digital Library, a search for "Northern Canada" focuses in on the majority of Newfoundland. Immediately, as a GIS user with some appreciation of its underlying principles, I was struck by the lack of transparency and feedback on how the geolibrary decided on Northern Canada as referring to Newfoundland. I tried typing in the names of several towns in the territories, northern Ontario and BC to establish a search history that pointed towards my definition of northern Canada (to see if the engine would contextualize future results based on previous ones). No such luck.

The service seems much more oriented towards the delivery of data and is more reticent about acknowledging the uncertainty of the gazetteer and the fact that people have different ideas of the extents of places. Cyberinfrastructure's post alludes to a growing and increasingly diverse set of users, which matches the need for constantly updated gazetteers. As the number of users and their purposes for the data grow, it becomes increasingly important to explicitly discuss the (compound) uncertainty in the datasets provided.

Based on initial impressions, a rudimentary system could be developed that allows for more flexible, user-defined boundaries. If a user defines an area that overlaps two formalized regions in the gazetteer, the system could return datasets for both regions involved. More control needs to be given to the user to understand how things are geolocated within the ADL, and more contextual fields should be included that tailor searches to the user's preference. These issues are clearly in line with Goodchild's 7th research concern on the cultural barriers inhibiting cognitive understanding of geolibraries. Presently, the ADL seems designed for planners who follow legal, administrative boundaries rather than for the general public user. Even the simple contextual algorithms that Google uses in its search engine could be implemented to enhance the querying experience in the ADL.
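That overlap rule is simple enough to sketch. The regions, bounding boxes and coordinates below are hypothetical placeholders, not real ADL entries; the sketch only shows the idea of returning datasets for every formalized region a user-drawn box touches.

```python
from typing import Dict, List, Tuple

BBox = Tuple[float, float, float, float]  # (min_lon, min_lat, max_lon, max_lat)

# Hypothetical formalized regions held by the gazetteer, reduced to bounding boxes.
REGIONS: Dict[str, BBox] = {
    "Region A": (-80.0, 45.0, -75.0, 50.0),
    "Region B": (-76.0, 44.0, -70.0, 49.0),
}

def overlaps(a: BBox, b: BBox) -> bool:
    """True when two boxes share any area."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def regions_for_user_area(user_box: BBox) -> List[str]:
    """Every formalized region the user-drawn area touches, so the
    system can offer datasets for all of them rather than guessing one."""
    return [name for name, box in REGIONS.items() if overlaps(box, user_box)]

# A user-drawn box straddling the boundary returns both regions.
print(regions_for_user_area((-77.0, 46.0, -74.0, 48.0)))  # ['Region A', 'Region B']
```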

– Madskiier_JWong