Archive for the ‘geographic information systems’ Category

Visualisation technology in its broadest sense

Thursday, February 9th, 2012

Elwood mentions many technologies/applications but seems to focus on the geoweb and VGI. However, these are hardly the only interesting new developments in visualisation. Wiki maps, Google Maps, and other internet-based mapping tools all do the same thing – they visualise data on a traditional 2D plane. Sometimes you’ll get interactive symbology (like what KMLs are capable of). I may be reading the article the wrong way, but I don’t quite understand what the focus on VGI has to do with visualisation. Certainly, products like Google Maps allow many users to contribute to a single dataset, bringing up problems of semantics when applying tags, but this is hardly a new problem brought about by a new visualisation platform. These sorts of problems have been around since before participatory GIS/VGI; they have only been amplified by a much larger number of contributors.
The section on tagging and ontology is interesting – but does this affect ‘visualisation’, or analysis and querying? Perhaps the article should not be titled just ‘geovisualisation’ technologies. When I read the title, I assumed the article would be purely about new methods of displaying data and the effects they have on the way we think (perhaps focusing on things like dynamic zooming in products like Google Maps, or the display of attributes). The use of the word ‘technology’ can be a little limiting at times.

The ‘real’ new technologies of visualisation lie in things like future 3D hologram displays (the real kind, not the stuff with smoke and lasers) – these are the new forms of visualisation that, when they come to market, will have a real impact on how we choose to display data (for instance, how to account for an audience that is no longer viewing from a fixed angle).

The MacEachren and Kraak article is most interesting in its crosscutting research challenges section. They make a very good point that visualisation needs to develop alongside other areas like interfaces, since the way we interface with the data is also a key part of the experience. I found this article a little more relevant, but it is still at an exploratory stage, so its recommendations are rather vague at times.

Final thought: visualisation technology is intertwined with other issues of data, interfaces, etc., but if we don’t just talk about the purely representational part of visualisation technologies, why are we using those two words?


-Peck

Accessibility and Geo-visualization

Thursday, February 9th, 2012

The TED talk posted by sah is very interesting and, I think, a perfect example of the exciting developments occurring in GIS and geo-visualization. The example of Bing Maps demonstrates the ways in which different technologies (photography from Flickr and street maps) can be combined based on their geographic locations, embodying the idea of a ‘canvas for applications.’ This video, however, also highlights the challenges associated with geo-visualization, which MacEachren and Kraak discuss in their article.

One of the aspects of the article that appealed to me the most was how MacEachren and Kraak pose the question of whether or not these technologies enable people to think differently about the world. Specifically, their question seeks to understand how creative thinking is affected by these technologies. For example, one reason Google Earth has revolutionized the mapping world is the creation of “slippy maps.” Has this concept of a computer-based map, which displays the world naturalistically, changed the way we see the world? I would argue that it has, and I think the Bing Maps example highlights this well. The ‘mashing-up’ of different applications enables users to make connections that were inconceivable before.

I think that it’s also very important to consider that geo-visualization is always a work in progress—an issue that MacEachren and Kraak’s article exemplifies well—and needs to be supported by researchers. One of the concerns that arises from this development is the accessibility/usability of the technology produced as a result of these advances. Interestingly, in a discussion I had about developing an application for mapping the accessibility of Montreal for those with disabilities, many individuals found that “slippy map” applications were very difficult to use. So, while this idea has completely changed the way many use and perceive geographic information, it has also potentially left some individuals behind, perhaps solidifying a kind of digital divide. MacEachren and Kraak delve into this problem, but it cannot be stressed enough how important it is to consider these aspects during development.

– jeremy

35mm Photos are to Digital Photos as Paper Maps are to GIS

Thursday, February 9th, 2012

I agree with sah. I’m excited about geovisualization! It is truly amazing how maps have become a dynamic user interface! Even when I first started studying Geography several years ago, maps on paper were almost obsolete. On some levels I want to feel nostalgic, as I do for the era of film cameras, but ultimately GIS is far more practical. In his 1965 article titled New Tools for Planning, Britton Harris writes that “so long as the generation and spelling out of plans remain[s] an arduous and slow process, opportunities to compare alternative plans [are] extremely limited” (Harris 1965). Geovisualization and electronic, dynamic databases allow us to be more creative with existing information.

The MacEachren and Kraak article seems to stress the importance of having a universal map that serves many different fields at the same time (as cyberinfrastructure inferred, this hints at the future and Web 3.0, where machines do much of the work on their own, catering to the needs of the user without being prompted). This is where I will raise an issue. I agree that it would be nice to have one map that serves multi-disciplinary studies, but at the end of the day, a tool optimized for a specific field will always do a better, more thorough job than a universal tool. For example, the cross-training running shoe is a good shoe for many different exercises. It provides support in many different directions and is a great shoe for the gym, but you don’t see many basketball players wearing cross-trainers. Furthermore, you would never consider wearing a soccer cleat on a gym floor. Don’t get me wrong, a cross-trainer is great, but if you want to get the most out of a shoe, you may want to try one that is sport-specific.

Gone are the days of 35mm film, quality photos and photo albums; we’re left with millions of self-portrait digital Facebook photos… Quality is rare but the options are now limitless, just like the world of GIS and geovisualization.

Andrew GIS


Where is the validation?

Thursday, February 9th, 2012

My main qualm concerning geovisualisation is the insane amount of data popping up on the Internet daily, and how people are trying to make any sense of it and use it for research (in academia, for constructing political policies, generating public knowledge, etc.). Data are gaining in complexity and heterogeneity at the same time as new uses are being found for them. Kraak and MacEachren outline how geospatial data resources are being used to create visualization tools that enable understanding and create knowledge. From my understanding of the article, not many measures are being enacted to ensure the validity of the data and the knowledge it subsequently creates. But are such measures even necessary?

Particularly given the problems of semantic differences in data across users, as well as the presence of collaborative sources, data seem to have inherent problems with translatability when it comes to interfaces trying to support individual differences. People view things in different ways and at varying scales, and in the realm of geovisualisation, where the social is becoming increasingly prominent, how do we account for the differences seen and deem what is “correct”? How can we say what is valid information and what isn’t?

I suppose the answer lies in the problem. With an increasing number of users creating data, there is also an increasing number of users checking the data. Interactivity and collaboration allow people to change data—a sort of built-in member checking. Ensuring validity is as great a responsibility as generating geospatial data in the first place.

Further thoughts: as user-generated data are checked by other users, does this imply that the data used to produce knowledge will reflect some sort of regression towards the mean as outliers are eliminated? In a social aspect, will geovisualisation just show the averages in spatial perception?

-sidewalk ballet

What About Privacy in Data?

Thursday, February 9th, 2012

Sarah Elwood posits that rapid change took hold of geospatial technologies over the last five years, with the “emergence of a wide array of new technologies that enable an ever-expanding range of individuals and social groups to create and disseminate maps and spatial data” (256). Elwood does an admirable job of fielding some of the pros and cons that stem from this revolution in technology. In particular, she covers changing power relationships as new groups are empowered by creating data, the possible limitations of existing spatial data models and analytical operations, and how problems with the heterogeneity of the data might make it difficult to support across users or platforms (interoperability).

However, her most important alarm bell, I believe, comes when she writes that “the growing ubiquity of geo-enabled devices and the ‘crowd sourcing’ of spatial information supported by Google Maps fuels exponential growth in digital data, and growing availability of data about everyday phenomena that have never been available digitally, nor from so many peoples and places” (257). What happens when governments use this data to spy on citizens, or when individuals use this data for the wrong purposes? The United States government clearly has no compunction about monitoring its own citizens (if you follow recent politics there). Elwood herself gives short shrift to what this might mean for the privacy of users and even the public caught up in “everyday phenomena.” She notes that some scholars have raised the question of whether the rise of these technologies constitutes new forms of “surveillance, exclusion and erosion of privacy” (257) but quickly moves on to the exciting promise of these technologies.

In particular, Elwood appears enamored of the potential of these technologies to reveal new social and political truths (261). Yet, as we noted in our iPhone conversation in class, these technologies might be used inappropriately to track us without our knowledge. Individuals in a democratic society have an undeniable right to privacy, but how can they use these new technologies and software and still be sure that their privacy is respected and their data remain anonymous (if needed)? Should some type of system or regulation be put in place to ensure this right? Something like this has been tried in Europe, but what are the lessons? I’m not sure.

–ClimateNYC

The Challenge of Large-Scale Data and Geovisualization

Thursday, February 9th, 2012

Nowadays, geospatial data are collected at unprecedented speed, and data volumes increase exponentially. We get image data with fine spectral and spatial resolution from remote sensing technologies, volunteered geospatial information from the GeoWeb and mobile technologies, and historical records from different geospatial databases. As a result, geospatial research now faces large-scale data, and how to extract information from these data for knowledge discovery has become an important challenge for geovisualization, as MacEachren and Kraak point out.

Geovisualization has traditionally had a tight relationship with cartography, since it is often used to visualize geospatial data in 2D and provide functionality similar to maps. But the advancement of technologies, especially Web 2.0, has recast geovisualization as a portal for geospatial information sharing and exchange. With increasingly large-scale data (here large scale means both large volume and high dimensionality), data mining and pattern recognition are necessary techniques for extracting useful information for users. As Web 2.0 brings user-centric computation, how to update knowledge and visualize it as new data arrive turns out to be an interesting topic.

MacEachren and Kraak group the challenges into representation, visualization-computation integration, interfaces, and cognitive issues. Large-scale data is a common factor in all four. Meanwhile, Web 3.0 is approaching, transforming the Internet into a large data source. As computing platforms become diverse (cloud computing, mobile equipment, and so on), the knowledge discovery process also extends to distributed computing environments. Thus, geovisualization should keep pace with this change.

–cyberinfrastructure

Geovisualization, how exciting!

Wednesday, February 8th, 2012

This article made me really excited. I love that it emphasizes the evolution of maps. Now, when I am asked (as a geography student) if I make maps, I can say, “YES!”, knowing that means so much more than simply (or sometimes, as we all know, not so simply!) drawing lines on a map; it means actually creating a dynamic database that reflects an accumulation of spatial and non-spatial data. The idea of maps becoming so much more than a method of visualization – methods of visualization AND data storage, representation, data manipulation, etc. – is incredibly fascinating.

Despite the fact that it was entitled “Research Challenges in Geovisualization”, I managed to overlook the “challenging” aspect and really focus on the amazing potential of geovisualization. It’s true that there are a lot of challenges–but each challenge merely brought about excitement at the prospect of these challenges being overcome and the full potential of geovisualization being realized.

If you Google “Digital Earth” you get many different hits, but one in particular that I thought impressively captured an aspect of the integration possibilities of geovisualization is here: http://www.ted.com/talks/lang/en/blaise_aguera.html. This video is a TED Talk discussing Bing Maps and Digital Earths. Obviously, there are many problems and questions that must be asked of technology such as this (as MacEachren and Kraak so thoroughly pointed out in their article), but the implications are nevertheless fantastic!

On a more technical note, I think the suggestions presented by MacEachren and Kraak were very interesting, and the emphasis on the interdisciplinary requirements of a task such as this was well noted.  The nature of geovisualization seems to require interdisciplinary work, as it is the integration of many areas of expertise, and data in many forms.  All in all, I am excited to see what the future brings for this rapidly emerging field.

sah

MacEachren, Alan M., and Menno-Jan Kraak. “Research Challenges in Geovisualization.” Cartography and Geographic Information Science 28.1 (2001): 3-12. Print.

Geo-visualization: recalling ontologies & considering metadata

Wednesday, February 8th, 2012

Geo-visualization seems to present an endless number of opportunities for both public and private groups and individuals to partake in data collection, distribution, and analysis.  The issue of metadata seems to be prevalent here, and recalls last week’s discussion on ontologies.  How do we process this immense amount of incoming data when there is no shared understanding of what it actually is and how it is being described?  Elwood stressed this need for shared understanding, and I agree that users must be wary when working with this digital spatial data–it is dynamic, heterogeneous, and user-generated.  Not that this is a bad thing; rather, it just means that the initial intention of the creator may not be as evident as it is for data collected by the USGS, for example, where the way the data is qualified is made clear.  So the desire to create ontologies is understandable.  For example, Elwood describes someone who labels an image “close to X location”, and suggests that this “close to” can cause problems.  How do we integrate these qualifications of location, which make sense to humans but not to the traditional mathematics GIS currently operates with?  In my opinion, this is the largest obstacle to overcome.
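To make the obstacle concrete, here is a minimal sketch of one way a vague qualifier like “close to” might be given graded, machine-usable semantics, assuming a fuzzy membership function; the distance thresholds are invented for illustration, not drawn from Elwood:

```python
def fuzzy_near(distance_m, full_m=100.0, zero_m=1000.0):
    """Degree to which a point counts as 'close to' a location.

    Returns 1.0 within full_m metres, 0.0 beyond zero_m metres, and a
    linear ramp in between. The thresholds are arbitrary illustrations,
    not standards.
    """
    if distance_m <= full_m:
        return 1.0
    if distance_m >= zero_m:
        return 0.0
    return (zero_m - distance_m) / (zero_m - full_m)

# A photo tagged "close to X" that sits 400 m away gets a graded score
# instead of a crisp inside/outside answer.
print(fuzzy_near(400.0))  # ~0.67
```

The point of the sketch is only that a human qualifier can be mapped onto something computable; choosing the thresholds is, of course, exactly where the semantic problem reappears.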

What the Elwood article also highlighted for me is that there is a huge onus on the public here, and much of this data should come with a big disclaimer.  It seems that this technology is advancing at a pace much faster than the ability to properly create and cite metadata, and that the data is not necessarily being misused but, perhaps more accurately, misinterpreted.  That said, Elwood also mentioned that there did seem to be blatant misuse in some instances, which means that users must be even more aware when using and interpreting this data, because a mistake may not be honest, but rather intended to misdirect the user.

All that being said, the usefulness of geo-visualization technologies is undeniable, and this is an exciting and interesting field.  As long as there is constant questioning and continued research into the ability to integrate this data into more traditional, established iterations of “GIS”, as Elwood mentions, the field can continue to expand in both scope (of content, and of possible uses and users) and reliability.

sah

Elwood, S. 2009: Geographic Information Science: new geovisualization technologies — emerging questions and linkages with GIScience research. Progress in Human Geography 33(2), 256-263.

How can we make sense of all this data?

Wednesday, February 8th, 2012

Part of Elwood’s paper considers the implications of using data provided by different users. Data providers from different backgrounds and cultures approach information, its synthesis, and its portrayal in varying ways. This heterogeneous data is further transformed through the manipulations required to make any sense of it. Elwood notes, “data are dynamic, modified through individual and institutional interactions and practices” (259). How can we ensure that the meaning instilled by the original user is carried through all kinds of manipulations and transformations, especially when merely deciphering the original meaning already proves to be laden with complexities?

Elwood provides an overview of many solutions for grappling with a wide array of geovisualisation challenges, but I think we might be getting a little ahead of ourselves. Surely there is a vast number of challenges to be addressed (as seen also in the MacEachren and Kraak article), but can we tackle them all at the same time? Making sense of original user data seems to be of primary importance before we can assess how it changes through practice and collaboration. While initially seeming counterintuitive to user friendliness, approaches like “standardiz[ing] terms across multiple sources” (258) and using formal ontologies may prove necessary in trying to iron out semantic differences in user-provided data.
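As a toy illustration (not anything Elwood proposes concretely), standardizing terms across sources might start with something as simple as mapping free-form user tags onto a small shared vocabulary; the concepts and synonym sets below are invented:

```python
from typing import Optional

# A shared concept vocabulary with per-concept synonym sets.
# Both the concepts and the synonyms are invented for illustration.
SHARED_CONCEPTS = {
    "waterbody": {"lake", "pond", "lagoon", "reservoir"},
    "mountain": {"mountain", "mont", "peak", "summit"},
}

def standardize(tag: str) -> Optional[str]:
    """Return the shared concept a user tag maps to, or None if unknown."""
    tag = tag.strip().lower()
    for concept, synonyms in SHARED_CONCEPTS.items():
        if tag in synonyms:
            return concept
    return None

print(standardize("Mont"))   # -> "mountain"
print(standardize("swamp"))  # -> None: a semantic gap still to resolve
```

Even this trivial mapping shows where the hard work lies: every tag that returns None is a semantic difference someone, or some formal ontology, still has to reconcile.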

How can we work collaboratively if we’re talking about different things? We can trace the “modification of concepts in a spatial database as they are used in the process of collaboration” (260), but what do these concepts mean? Can we actually standardize open, user-generated geospatial data so that it is interoperable? With the increasing number of data sources and growing data heterogeneity, it looks like there is a long, winding road ahead of us.

Elwood, S. 2009: Geographic Information Science: new geovisualization technologies — emerging questions and linkages with GIScience research. Progress in Human Geography 33(2), 256-263.

-sidewalk ballet


Elwood and Social approaches to data management/visualization

Monday, February 6th, 2012

Elwood’s piece offers an overview of the issues in sorting geographic data. Following an explosion of available geographic data due to geo-tags, GPS units, and volunteered geographic information (VGI), she focuses on the challenge of sorting the data. Web 2.0 has significantly contributed to this proliferation by making user-produced products much easier to create and more accessible. Elwood raises three stumbling blocks: massive data heterogeneity, how to represent qualitative spatial data, and keeping up to date with dynamic data over time.

This article is useful in demonstrating that “visualization” is not only what is displayed, but also the conscious design behind the collection and organization of the data. The most captivating idea to me involved the context-dependent integration of data, where semantics are accorded nearly a field of their own. Here we find the intersection of the utility of a natural-language ontology with data exploration as a subset of geovisualization. Contributors of geographic data are encouraged to work out how their data relate to a broader context/dataset, rather than being forced to think like computers and apply tags or join by attributes to attract the most eyes. This seems to be an example of an ideal structural philosophy that affects the public’s attitude towards and cognition of geospatial data. At the very least, users will be inclined to partially realize the spatial component of their data and its interconnectedness with larger processes. This represents a social approach (and not a technical one) towards data management. Perhaps we can call it the invisible hand of geography?

-Madskiier_JWong

MacEachren and Kraak and Simple Visualizations

Monday, February 6th, 2012


MacEachren and Kraak explain the importance of geovisualization as a way to merge human vision with domain expertise. Broad applicability in fields such as medical imaging awaits the solving of major issues in representation, integration, interfaces, and cognition/usability. The authors round off their paper by pushing for practical solutions to increase research done on geovisualization.

I would like to point out that improvements in geovisualization need not necessitate more realistic models. I undertook extensive fieldwork and research to present a screenshot from the simulation game Dwarf Fortress, whose graphics are entirely ASCII-based.

In the screenshot, green triangles represent slope (upwards-pointing triangles represent an uphill slope, downward ones indicate a valley), while different elevation levels are conceived in a stacked-layer format which can be viewed at the press of a key. Depending on my purposes, this simple representation may be enough to inform my choice of uneven terrain ideal for defending my dwarves (I don’t need exact elevation values). The graphics are certainly sufficient for representing how individuals interact with and gather resources from the environment (e.g. shortest-distance calculations to find the nearest firewood). A bit contrived, I know, but the argument holds for situations such as the Battle of the Boids agent-based model shown in class, where ‘boids’ were simple triangles yet were able to show movement patterns. I was also challenged in my raster GIS class to say what value animating a DEM of, say, Mont Royal in more realistic 3D would have from a purely analytical perspective. I’d like to open this question to other readers (I only came up with being able to debug poor stitch jobs and mismatched elevations with other DEMs at the seams). I concede, however, that when exploring massive datasets with an abductive approach (no hypothesis in mind), realistic visualizations may offer more creative stimulation to the user.
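For readers who have not seen the game, a minimal sketch of this kind of abstract symbology might look like the following, where a small elevation grid is rendered with slope markers instead of realistic 3D; the grid values and symbols are my own invention, not Dwarf Fortress code:

```python
# A toy sketch of Dwarf-Fortress-style abstract terrain symbology:
# a small elevation grid rendered with slope markers rather than
# realistic 3D. Grid values and symbols are invented for illustration.
GRID = [
    [1, 1, 2, 3],
    [1, 2, 3, 4],
    [2, 3, 4, 4],
]

def symbol(z, east_z):
    """Compare a cell with its eastern neighbour."""
    if east_z > z:
        return "▲"  # terrain rises to the east
    if east_z < z:
        return "▼"  # terrain falls to the east
    return "."      # flat

for row in GRID:
    # One marker per pair of neighbouring cells in the row.
    print("".join(symbol(z, e) for z, e in zip(row, row[1:])))
```

Coarse as it is, the marker grid already shows where the terrain is uneven, which may be all the analysis requires.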

MacEachren and Kraak briefly touch on this point by noting a tension between realistic and abstract representations, saying some believe “abstraction is essential for achieving insight”. I feel that the reasons for abstract models tend to be practical ones of limited time and resources rather than a belief that abstract models are more objective and thus more insightful.

-Madskiier_JWong

Practicality in, reality out? Sort of

Friday, February 3rd, 2012

Kuhn’s style in addressing ontologies differs from that of Smith and Mark. His article is more comprehensible, as it has more focus and attempts to cover less ground. However, I did find that the articles successfully complement one another. The main scope of Kuhn’s article, which focuses on “problem-solving world knowledge” (with an emphasis on operations and domain theories) rather than “problem solving methods or reasoning” (616), is a step in the right direction. If ontologies are to be diversified, inquiring about knowledge similarities and differences across various fields is appropriate. The step-by-step explanation given through the German traffic code text analysis was useful for organizing the (at times) overwhelming and meticulous aspects of ontologies. Kuhn was critical and elaborate when discussing the limitations involved in textual language processing and the future challenges of how ontologies will be utilized in geographical space.

He argued that the representation of reality in geographical information should be prioritised less than what we do with that information – more specifically, how it is practical and what the user’s needs are. Even though I agree with the article that practicality is a key factor in the development of textual grounding, reality as represented in geographic space should not be completely ignored, since it is not clear that ontologies must be entirely task-dependent. Hence Chandrasekaran’s (1998) statement that “what kinds of things actually exist should not depend on what we want to do with that knowledge”. However, the various characteristics of a domain’s reality that belong to a specific ontology (through identification and the written form) depend on the particular tasks the ontology is being built for (Chandrasekaran 1998). Kuhn finds this critical to what can be achieved in practice. I believe a combination of practicality and reality would be most effective, as both are substantial to ontological use in the geographic realm.

-henry miller

Ontologies: abstraction, imagination, existence

Friday, February 3rd, 2012

Being new to the field of ontology, I took a deep breath before starting to read what I automatically assumed would be an obscure, existential article titled “Do mountains exist?” To my relief, it was much more than that. As a hiker, I first thought about my personal connection to and idea of mountains. Do mountains exist? Do I believe mountains exist? All of this is somewhat vague, leaving much room for interpretation; a question that will undoubtedly be answered with many, many other questions. Does this matter? Do all humans believe they exist? Or maybe just some? What is the construction of meaning behind determining their existence?

Arguably, this is a challenging field, and I believe Smith and Mark provide a helpful, in-depth explanation of the different dimensions and perspectives of ontology (focused on human thought and action). At the same time, the authors acknowledge their limitations, as all the concepts/issues pertaining to this topic could not possibly be addressed at length in the article. They do this by outlining the dichotomy of primary and secondary theory; the former is grounded in an analytical approach, incomplete due to limitations in explanation and its assumption of common knowledge. The latter is comprised of folk beliefs, developed at different levels and with much diversity. Secondary theory thus depends on a specific culture or community, which makes it inconsistent.

I did find it interesting that the focus was placed on primary theory and the way it can be integrated with the “realm of science” (10), since it is the theory of the geographic domain (9). What happened to secondary theory? This makes me think of Ally_Nash’s comment that primary theory is objective and secondary theory subjective. Is that what the authors thought as well, and is that why the article focuses on primary theory? The authors attempt to merge philosophical and information-systems approaches within a single framework (6), where “a complete ontology of the geospatial world would need to comprehend not only the common-sense world of primary theory but also the field-based ontologies that are used to model runoff and erosion” (18). Thus, I argue that, given the challenges behind this integration, primary theory is not objective. Furthermore, “maps do not represent mountains directly as objects with crisp boundaries” (12); abstraction plays a critical role in our conceptualization of them. The similarities between Mount Everest and the Santa Barbara neighbourhood create a paradox that Smith and Mark only half solve, as both (mountain and neighbourhood) are “a product of socially established beliefs and habits” (14).

Although there is much work to be done, I admire the authors’ ambitious plan to find an ontological framework that can unify the perspectives of a vast number of fields to create a complete ontology of the geospatial world. Why not use abstraction and imagination to unite, rather than divide, these fields?

-henry miller

Do mountains exist?

Friday, February 3rd, 2012

I agree with sah about this article, particularly with respect to the need for task-specific ontologies rather than a single universal ontology of landforms in many cases. Those who study a mountain or require precise definitions of what a mountain is would need an ontology of landforms, although they may be the only ones to use it. In class, it was mentioned that keeping spatial uncertainties present in the data was often very important in representing different views on intangible concepts such as disputed country boundaries. This same thinking can apply to ontologies as well.


An ontology, to me, seems like a dictionary of the spatial meaning associated with a particular word. In this sense (and perhaps I have misinterpreted what an ontology is exactly), an ontology could have multiple definitions of a particular word, and the user could select the correct definition for their purposes from the ontology. I compare this to the different citation styles available in citation manager software: there are many different ways of representing the crucial citation information, and the user need only select the one they require. Why could this not apply to an ontology of landforms?
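A minimal sketch of this “dictionary with multiple definitions” idea, with invented terms and contexts, might look like the following:

```python
# A toy sketch of an ontology as a dictionary: one term, several
# purpose-specific definitions, with the user selecting the one that
# fits their task. All entries are invented illustrations.
ONTOLOGY = {
    "mountain": [
        {"context": "hydrology", "definition": "landform bounded by drainage divides"},
        {"context": "cartography", "definition": "named summit point with an elevation"},
        {"context": "recreation", "definition": "named peak with established trails"},
    ],
}

def lookup(term, context):
    """Pick the definition of a term that matches the user's purpose."""
    for entry in ONTOLOGY.get(term, []):
        if entry["context"] == context:
            return entry["definition"]
    return None

print(lookup("mountain", "cartography"))
# -> "named summit point with an elevation"
```

The design choice mirrors the citation-manager analogy: the definitions coexist, and selection, not standardization, resolves the ambiguity.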


-Outdoor Addict


Do Rivers Exist? River line segments and Land in GIS versus Native Ontologies

Friday, February 3rd, 2012

GIS is known to have its pre-defined categories and way of being in the world. In fact, GIS has its own ontology, or even ontologies (since we can’t even decide whether it is a tool or a science, we must have different ways of thinking about GIS). The question that must be asked is: when is a parcel of land part of a river network, and how can this be represented in GIS today?

Smith and Mark outline in their paper that ways of thinking about mountains differ from culture to culture and language to language; this is also true when thinking of GIS.  GIS has a certain way of thinking about real-world phenomena that may differ from Aboriginal perceptions of the same phenomena. Sieber once mentioned that in GIS, a river network is constrained to the line segments that represent it, but for Aboriginal peoples, the river might also continue as part of the land over which they portage their canoe.

The same can be said for other geographic phenomena, such as mountains, as in Smith and Mark’s article. Mountains exist in reality as continuous landforms indicated by a steep elevation gradient, though common usage will identify a single mountain as an object and even give it a name.  Aboriginal peoples may attribute spiritual value to a mountain, or make it the basis of their worldview, as quoted in Smith and Mark. In GIS, however, a mountain can be represented as a gridded digital elevation model, a point, or a polygon.

Ontologies represent reality for a certain group, and this also relates to GIS as a field.


-rsmithlal

Do we really need formal ontologies

Friday, February 3rd, 2012

The first thing I noticed was that Smith & Mark start with a much more philosophical definition of ontology, as being focused on describing “the constituents of reality…in a systematic way”, as opposed to Kuhn’s definition, which dives straight into the specification of conceptualisations through language and bypasses the question of existence. It was interesting to see the two approaches – one from the domain-based ontology and the other from the more holistic approach.

Smith and Mark provide a good overview of ontology, especially primary and secondary theory and the separation of the two. However, their actual suggestions on the future of geospatial ontology are quite scarce, apart from stressing the need for an all-encompassing ontology that is general enough to be used in any scenario, but also able to be tweaked.

Kuhn, on the other hand, goes through the interesting process of creating an ontology, and puts more detail into concepts such as affordances. The methodology he goes through is interesting, but still very much dependent upon a textual source. The choice of that source is absolutely crucial – choosing a text in a certain language probably already results in a loss of ‘resolution’ (if that term is appropriate here). But it is, after all, a domain-specific ontology, in which case, why translate it to English in the first place? (I must note at this point that I am definitely not an expert in the field of ontology.)

What I would like to question, though, is whether or not having separate ontologies is necessarily a big problem. The ontologies used every day are, I think, very much a cultural phenomenon, in that they are and should be flexible and malleable according to what humans do and the scale at which we are able to perceive things. In trying to create a formal ontology (an ontology that is unbiased and constant, independent of content), one is probably (as madskiier suggests) limiting the ability to express oneself. The nature of the world is dynamic and human knowledge is increasing, so perhaps it is the nature of ontologies to grow rather than be static. I do agree, however, that issues of translation and cataloguing are very reliant on ontology, but having separate domain-based ontologies should still be the way to go, in order to preserve as much detail as possible.

Finally – what would a formal geographic ontology do for imagination and communication? It might make the world a slightly more boring place.


-Peck



Re-think Mountains in GIS with Ontology

Friday, February 3rd, 2012

In GIS, mountains exist as a collection of 0s and 1s. They may be stored on disk as vectors, matrices, or even single values. In visualization, we extract those 0s and 1s from storage, display them according to user requirements, and label them “mountains”. By this means, we admit that mountains exist physically in GIS research. But for ontology, which studies being or existence itself, it is quite hard to define what exactly a mountain is. Looking at theories in geomorphology or hydrology, it is nearly impossible to find the start and end of a mountain, and we can even challenge whether “mountain” is an appropriate name for describing the altitude of certain locations. In information systems, however, ontology does not mainly deal with existence, but formalizes concepts under established logics or theories. To be more specific, in GIS, ontology helps us to clarify spatial information.
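A toy illustration of why the start and end of a mountain are so hard to pin down: delineating “mountain” cells from a DEM with an elevation cutoff produces a different mountain for every cutoff chosen (the grid and thresholds below are invented):

```python
# Toy DEM (elevations in metres); values invented for illustration.
DEM = [
    [120, 150, 200, 180],
    [140, 230, 260, 210],
    [130, 190, 240, 170],
]

def delineate(dem, cutoff):
    """Mark cells as 'mountain' (1) or not (0) by an elevation cutoff."""
    return [[1 if z >= cutoff else 0 for z in row] for row in dem]

# The 'mountain' changes shape with the cutoff, so its extent is a
# product of our definition, not of the terrain alone.
for cutoff in (180, 220):
    print(cutoff, delineate(DEM, cutoff))
```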

Let us get back to the “mountain” example in GIS. We need to give most “mountains” a label for identification, such as the “Mont-Royal” on Google Maps. But is this label correct? What happens if we label it “McGill Mountain” in another GIS? I think that if we label it “McGill Mountain”, some people – at least most McGill students – can still recognize the mountain. And with ontology, we can easily figure out that “McGill Mountain” is equal to “Mont-Royal”, as they refer to the same feature in GIS.
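A minimal sketch of how such label equivalence might be recorded, loosely in the spirit of an ontology’s “same-as” relation (the feature IDs and coordinates are invented for illustration):

```python
# One underlying GIS feature with two labels resolving to it.
FEATURES = {"feat-001": {"centroid": (45.5048, -73.5874), "kind": "mountain"}}

SAME_AS = {
    "Mont-Royal": "feat-001",
    "McGill Mountain": "feat-001",  # alternate label used by another GIS
}

def same_feature(label_a: str, label_b: str) -> bool:
    """Two labels denote the same mountain if they resolve to one feature."""
    fa, fb = SAME_AS.get(label_a), SAME_AS.get(label_b)
    return fa is not None and fa == fb

print(same_feature("Mont-Royal", "McGill Mountain"))  # True
```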

One very interesting argument in the paper by Smith and Mark (2003) is that they view environmental modeling as field-based rather than object-based. But without objects, it is difficult to model the field itself. With ontology, however, the notion of “field” may be easier to conceptualize. But here comes the question: does ontology differentiate with respect to the complexity of concepts?

–cyberinfrastructure

Who does the grounding?

Friday, February 3rd, 2012

The concept most fascinating to me in Kuhn’s paper was that of grounding, which explains that the “claims of any domain theory need to be based on some observation in that domain”. This makes intuitive sense; one must know the domain in question and have some reference material to be able to begin assembling the domain theory and an accompanying ontology. However, Kuhn refers to the “observation” as being “tangible”, which may not always be the case. In Kuhn’s methodology, the basis for information is a textual source compiled by experts. This assumes experts have the most appropriate grasp of how to assemble the concepts of a domain into some textual reference and then an ontology. Yet this neglects two things: public participation and non-tangible reference material.

Public participation would be crucial in cultural studies, where experts may not actually be of the culture they study and so may miss some cultural distinctions as a result of their own cultural influences. This reflects a need for the public to participate in defining concepts and objects as they see them, and not only as these things are seen by experts or, as Kuhn mentioned, by the knowledge engineers often creating the ontologies.

Closely related to public participation are non-tangible references for a subject, such as the oral histories of some cultures. The history or stories told may remain relatively stable over time, yet the words used to tell a particular history or story may change from telling to telling, making it difficult to pin down exactly the meaning of a single concept in the story and to place it in an ontology.

-Outdoor Addict

A Better Ontology: Ontology design through domain specialization

Friday, February 3rd, 2012

In his article, Kuhn states that ontology design is performed by knowledge engineers who are not specialized in the domains they are designing ontologies for (Kuhn, 619; emphasis added).  Furthermore, he states that ontology design is carried out in consultation with domain experts through informal interviews, as well as through the incorporation of documents detailing the ontology requirements and of existing databases on the subject of the domain (619).

I feel that these knowledge engineers would be able to create a more in-depth and meaningful ontology if they were to actually undertake the grueling and likely expensive task of specializing in the domain they wish to create a new ontology for.  Borrowing from the field of anthropology, you could think of this concept as a type of ontological fieldwork.  The premise of fieldwork is to learn the subtle nuances of a culture – or in this case, a domain – through integration and participation in the life and events of that culture or domain.  I view the current generation of knowledge engineers as the equivalent of armchair anthropologists, those in the pre-Boasian era of anthropology who would study, and subsequently define, a culture based on explorer, missionary, or colonial reports.  This, of course, led to the prevalence of ideas about cultures that were sometimes very far from reality.

For effective and meaningful ontology design, I propose the following five steps, to be undertaken by knowledge engineers over a period of at least a year before attempting to define a new ontology for a domain.

  1. Designers should identify and choose a mentor from among the established researchers in the designer’s domain of interest.
  2. Designers should study their mentor’s work, and subsequently explore the work of other established and up-and-coming researchers in the domain of interest.
  3. Designers should attend or organize conferences or round tables designed to bring to light a collective picture of the particularities of the domain of interest.
  4. Designers should synthesize their findings and prepare a report, or ethnography, of the domain of interest.
  5. Designers should commence work on designing an ontology to represent the domain of interest, taking into consideration nuances of the domain and other findings discovered during domain fieldwork.

In conclusion, I feel that ontology design would benefit from an added level of familiarity with the domain of interest. This would help illuminate meaningful nuances that may otherwise be overlooked when researching a domain using conventional, non-committal methods.

– rsmithlal

Developing New Geospatial Cyberinfrastructure with Ontology

Thursday, February 2nd, 2012

Nowadays, geospatial information can be collected at unprecedented speed from multiple sources, including a large body of geo-sensing systems, historical records, online GIS databases, and so on. On the other hand, user requests for geospatial information are rapidly growing, and these requests often involve distributed, heterogeneous data processing. By distributed we mean data are stored or available on different servers, and by heterogeneous we mean data are kept in different formats; both features present great challenges for GIS research. As Kuhn mentioned in his 2001 paper, most traditional geospatial information systems have concentrated on map contents rather than on actual user requirements, which leaves a gap between geospatial cyberinfrastructure and user needs.

Kuhn (2001) proposed ontology as a way to help extract and share geospatial information from these sources. He suggests developing user-oriented GIS instead of map-based systems, and using the notion of affordance to establish a hierarchical model of human activities. These ideas were implemented in the German traffic code project, which demonstrated the promise of utilizing ontology to build the new generation of geospatial cyberinfrastructure.
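A minimal sketch of the affordance idea, with invented classes and activities rather than Kuhn’s actual formalization: entities are modelled by the activities they afford, so a user-oriented query asks what can be done there, not what is drawn on the map.

```python
# Toy affordance model for a traffic domain. Class names, activities,
# and the supports() helper are invented for illustration.
class Entity:
    def affordances(self):
        return set()

class Road(Entity):
    def affordances(self):
        return {"drive", "overtake"}

class Crosswalk(Entity):
    def affordances(self):
        return {"cross"}

def supports(entity, activity):
    """A user-oriented query: can this activity be performed here?"""
    return activity in entity.affordances()

print(supports(Road(), "drive"))       # True
print(supports(Crosswalk(), "drive"))  # False
```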

In 2010, Sieber et al. built another ontology-based geospatial cyberinfrastructure, which incorporates the China Biographical Database, the McGill-Harvard-Yenching Library Ming-Qing Women’s Writings database, and the China Historical Geographical Information System. This geospatial cyberinfrastructure uses ontology to provide synthesized information about Chinese women writers of the Ming and Qing dynasties: their kinship, publications, and social communities. By utilizing ontology in the design of geospatial cyberinfrastructure, we gain improvements in spatial knowledge access, discovery, and sharing.


–cyberinfrastructure