Archive for the ‘General’ Category

Spatial Scale Problems and Geostatistical Solutions

Thursday, February 14th, 2013

Atkinson and Tate make a good point. I only wish I could find it. Their extensive use of mathematics is daunting, but a necessary evil for understanding what goes on under the hood of ArcGIS. With no personal experience in the matter, a quick Google search suggested that variograms are closely tied to kriging (the variogram models the spatial dependence that kriging then uses to interpolate) and that both require significant input from the user. Correct me if I'm wrong.

GIScience has managed to produce a slew of tools that return right answers; that is to say, there is only one possible answer. The more complex processes, like the interpolation methods outlined by Atkinson and Tate, reveal that sometimes there is only a best answer, at which point it becomes the user's responsibility to justify their reasoning for choosing, say, 10 lags instead of 5. The result is case specific.
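To make that lag decision concrete, here is a rough Python sketch of an empirical semivariogram computed with different numbers of lag bins. The sample points, values, and lag counts are invented for illustration; none of it is taken from Atkinson and Tate.

```python
# Minimal sketch: how the choice of lag bins shapes an empirical semivariogram.
# Pure NumPy; the sample data and lag counts are arbitrary illustrations.
import numpy as np

def empirical_semivariogram(coords, values, n_lags):
    """Bin half the squared pairwise differences by separation distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)            # consider each pair once
    dists, semivars = d[iu], sq[iu]
    edges = np.linspace(0, dists.max(), n_lags + 1)
    lag_centres, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        if mask.any():
            lag_centres.append((lo + hi) / 2)
            gamma.append(semivars[mask].mean())
    return np.array(lag_centres), np.array(gamma)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(80, 2))                    # toy sample locations
values = np.sin(coords[:, 0] / 20) + rng.normal(0, 0.1, 80)   # toy spatial variable

for n_lags in (5, 10):   # the "5 lags vs. 10 lags" decision the user must justify
    h, g = empirical_semivariogram(coords, values, n_lags)
    print(n_lags, "lags:", np.round(g, 3))
```

Running it shows that the binned semivariance values, and therefore any model fitted to them for kriging, shift with the number of lags, which is exactly the judgment call the user has to defend.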

What makes me curious is: is there a right answer? Is it possible to create a set of parameters, possibly for an arbitrary set of scales, that would optimize the up-scaling and kriging process in all fields of use? The paper was written in 2000, so there has been more than a decade for someone to answer the question and implement it in GIScience. As of 2013 there is still no right answer, but there is a significant amount of mathematics to back that up.

In an ideal world, if the research field dedicated to data mining and geographic knowledge discovery is successful, there may eventually be no need for interpolation, as it is replaced by an overwhelming wave of high-resolution, universal data sets.

AMac

Wednesday, February 13th, 2013

In working on my final project, I picked up a copy of “How to Lie With Maps” by Mark Monmonier at the library. I haven't gotten too far into the book, but its central idea, that maps are always more complex than they look on the outside, provides a useful starting point for the discussion of scale. The article by Atkinson and Tate on scale provides an overview of some of the problems that scale brings up in our work and proposes some ways that we may work with or around them. The question I would like to pose (since it seems that on the technical/data-collection side no large changes will help us solve the issue of variable scale any time soon) is how we may be accountable in our GIS work, specifically at a representational level, to problems of scaling.

To someone untrained in GIS, or unaccustomed to critical reading, a map is just a map, an abstraction of reality. For this type of viewer (and not only of maps, but I use this example because it is the simplest), how can we be transparent about what the image lacks or what data the image obscures? It is easy to lie with maps, and it is easy to choose an aggregation that is advantageous to those invested in the project, but it is not so easy to make this clear to the uninformed viewer. So I ask, as I always do: Is being accountable to issues of scale in GIS possible? Is it desirable (and if so, when)?

Wyatt

How to Transfer the Only Answer

Thursday, February 7th, 2013

“The primary purpose…is to define a common vocabulary that will allow inter-operability and minimize any problems with data integration.” Maybe I am misinterpreting the statement, in which case it would be beneficial to have an ontology for papers on ontology. From what I gather, ontology strives to describe data in a standardized, easily translatable manner. Would that not require culling the outlying definitions, or creating an entirely new definition to categorize them? In which case, do we not lose the small nuances and differences? Why are those not as valuable as the opportunity to integrate?

This runs headlong, as a counter-argument, into the pro-integration sentiment in Academic Autocorrelation. It is the differences that GIS benefits from. Given our current methods of capturing data, and the sheer scale on which projects are now attempted, it is unlikely that one will ever capture the truth. Rather, it is a representation of the truth from the instant we perceive it. Our interface with our environment consists of no more than five senses, which, compared with other species, are rudimentary at best. Furthermore, it is surprisingly easy to replace reality with something that is not real, though to the viewer it remains their reality. Thus, the broad range of subjectivity in interpretation is a beneficial burden.

If an ontology were imposed on our knowledge set, it would constrain our perception, limited as it already is, and yet facilitate transfer across parties. If truth is sacrificed in favor of knowledge transfer, it is the responsibility of the individual to balance accordingly. Unless I am lost myself, in which case I look forward to further clarification.

AMac

 

Academic Autocorrelation

Thursday, February 7th, 2013

Nelson talks of the future challenges the incoming generation of spatial statisticians and analysts will face. One, in particular, is the dilution of geography's influence over the trajectory of the field of spatial analysis. According to a survey of some 24 respondents, there is a risk of “training issues” if “spatial sciences are adopted by many groups and lack a core rooted in geography.” This is a very isolationist way of thinking. If a field is dominated entirely by one group of like-minded individuals, it is bound to hit a dead end.

A nondescript, ramshackle structure, built with the military in mind, was constructed in Cambridge, Massachusetts during World War II. Its purpose was to develop and perfect radar, a technology that proved pivotal to the war effort. Once the war was over, the building had served its purpose and was slated for demolition. Tight for space, the Massachusetts Institute of Technology instead crammed a hodgepodge of disciplines into the structure. Before its demolition, 50 years later, it had come to be known as the “Magic Incubator.” Numerous technological advances stemmed from the building, many of which could not have been accomplished without work across multiple, previously unrelated disciplines.

Spatial analysis can gain from the weakening of geography’s grip on the subject, allowing different minds with different problems to use and adapt the tool as needed. Until then, spatial analysis will be on the path to innovation, with little invention branching off.

AMac

Unraveling the mystery of ontology

Thursday, February 7th, 2013

Ontologies are such an interesting and abstract field to me. In the lab I work in, there are many people who develop ontologies (some spatial, some not), and I always struggle to comprehend what they are or how I could ever explain them to someone. They seem to be classification systems, or ways of understanding trends in different types of data. For example, one project is based on looking through blog posts on vaccines and classifying the content as “pro-vaccine”, “anti-vaccine” or neutral. The idea of how you would do this was completely abstract to me before I read this paper; now I can see how it fits into some of these concepts. The ‘sentiment’ of the blog is similar to a “secondary theory” relating to the content. As with many analytical models, ontologies that capture spatial trends are more complicated than their aspatial counterparts, but they also raise a whole new set of interesting challenges (e.g., issues of scale). Looking forward to the presentation tomorrow and maybe finally really understanding ontologies!
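To make the blog-classification example a little less abstract, here is a toy Python sketch of the idea. The keyword lists, the cue-counting rule, and the sample posts are invented placeholders, not the lab's actual ontology or data.

```python
# Toy sketch of the blog-classification example: assign a sentiment category
# to a post based on keyword cues. All cues and posts are invented.
PRO_CUES = {"protects", "effective", "immunity", "recommend"}
ANTI_CUES = {"dangerous", "toxin", "refuse", "injury"}

def classify(post: str) -> str:
    """Label a post by comparing counts of pro and anti cue words."""
    words = set(post.lower().split())
    pro, anti = len(words & PRO_CUES), len(words & ANTI_CUES)
    if pro > anti:
        return "pro-vaccine"
    if anti > pro:
        return "anti-vaccine"
    return "neutral"

print(classify("The flu shot protects kids and builds immunity"))   # pro-vaccine
print(classify("I refuse the shot because it is dangerous"))         # anti-vaccine
```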

-Kathryn

[PS My spell-check in Firefox seems to think “ontologies” isn't a word and wants to change it to gerontologist…]

Different people, different ontologies

Thursday, February 7th, 2013

There is no one formal ontology for GIScience purposes. Agarwal notes Uschold and Gruninger (1996)’s four types of ontologies: ‘highly informal’, ‘semi-formal’, ‘formal’, and ‘rigorously formal’. Agarwal continues to outline other academics’ categories of ontologies, which can be loosely fit into the aforementioned four types. Most interesting to me are the ‘highly informal’ ontologies, which can comprise general or common ontologies and linguistic ontologies. How can these ontologies be incorporated into GIScience and into a GISystem? Do they need to be translated into a more formal or meta-ontology in order to be properly analysed, reproduced, and/or applied broadly across different applications? These are questions I don’t have answers for.

Agarwal acknowledges the lack of semantics in the ontological specifications. He notes that “explicit stating and consideration of semantics allows better merging and sharing of ontologies” (p. 508). Perhaps it is from here, in the recognition of varying semantics across cultures and people, that we can move from informal to formal ontologies. Concepts can therefore be qualified with criteria stemming from the merging and sharing of ontologies, which would consequently increase our understanding and better our analyses.
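As a rough illustration of why explicit semantics help with merging, here is a small Python sketch in which two informal vocabularies are aligned by mapping local terms onto shared concepts. The terms and concept names are invented for the example.

```python
# Sketch of the idea that explicit semantics make ontologies easier to merge:
# two informal vocabularies are aligned via shared concepts. Terms invented.
ontology_a = {"creek": "FlowingWaterBody", "hill": "Elevation"}
ontology_b = {"stream": "FlowingWaterBody", "mountain": "Elevation"}

def merge(*ontologies):
    """Group local terms under the shared concept they point to."""
    merged = {}
    for onto in ontologies:
        for term, concept in onto.items():
            merged.setdefault(concept, set()).add(term)
    return merged

print(merge(ontology_a, ontology_b))
# -> {'FlowingWaterBody': {'creek', 'stream'}, 'Elevation': {'hill', 'mountain'}}
# (set ordering may vary)
```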

-sidewalkballet

Spatial stats within geography

Thursday, February 7th, 2013

Nelson's article gives a thorough overview of spatial statistics through a synthesis of literature and a survey of professionals in the field. The article is well structured, and Nelson walks the reader through the different sections. From this article, spatial statistics can be linked to past topics that we've studied in GIScience, such as the importance of good user-centred GUIs, the wide distribution of applications on the web, and data visualisation.

Nelson has a subsection entitled “Geography as the Home for Spatial Analysis” where she situates spatial analysis within geography. She comments on the trend of certain subjects migrating into different disciplines (or forging their own) as geographers give up leadership. If the growing fields within geography leave the discipline, what do we have left? Are geographers equipped to meet the demands of the growing fields? Nelson goes on to acknowledge that geographers are not trained to think mathematically, statistically, or computationally — strains of thought that are required for spatial stats. She raises questions about the extent to which spatial stats should figure in geography curricula. I think McGill does a good job with our two required stats courses, but I would like to see more application of statistical methods in other courses.

Spatial statistical analysis needs geographers — maybe not to perform the analysis, but for spatial interpretation. Geographers need spatial analysis to increase rigour in our studies and validity as a department.

Interestingly, this blog post calls for geographers wanting to become spatial statisticians to round out their education with a math or stats degree. It takes more than just geography.

-sidewalkballet

PS: For future thought– Nelson says, “data are increasingly being viewed as public properties” (p. 86)… hmmm…

Exploratory and Confirmatory Spatial Analysis has come a long way, but…

Thursday, February 7th, 2013

As with many of the papers in this class, the topics presented are still extremely relevant to the field of GIS; however, we have made leaps and bounds in terms of technology since it was written (1992 in this case). Computing power and the development of appropriate algorithms have allowed GIS analysts to drastically improve the so-called manipulation, exploration, and confirmation processes brought forth by Anselin and Getis. While I have only been familiar with a program such as ArcGIS for a few years, I would argue that the spatial analysis capabilities have drastically improved since the 90s. It is obvious that GIS is no longer just about the display and visualization of spatial data, as the ability to perform exploratory and confirmatory analysis has become the norm. These sorts of procedures have become more “automated”, per se, and allow for more “plug and chug” approaches to spatial analysis.

That being said, the authors bring up a vital point by saying that in some cases, “better theoretical notions may be needed.” To me, this is essentially a warning to GIS analysts, telling us not to rely solely on whatever new algorithms or spatial analyst tools may be available. When one is working with the massive complexity of spatial data at our fingertips today, it is imperative that we are familiar with the data itself. We must still predict what sorts of patterns we may see. If the exploratory process unveils some sort of new model of our environment, we need to know why that is so. Otherwise, we reach a point where the user is no longer relevant, which will be detrimental. So, yes, we have made great progress in the use of spatial geostatistics. However, we must be careful how far we take this and always be conscious of the types of decisions we make when analyzing spatial data.
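As a small example of what a confirmatory step can look like once exploration suggests clustering, here is a sketch of global Moran's I on a toy grid, using plain NumPy and rook contiguity. The grid values are invented, and this is only one of many confirmatory statistics an analyst might choose.

```python
# Minimal sketch of a confirmatory statistic: global Moran's I for values on
# a small grid with rook (edge-sharing) neighbours. Grid values are invented.
import numpy as np

def morans_i(grid):
    n_rows, n_cols = grid.shape
    x = grid.ravel().astype(float)
    n = x.size
    W = np.zeros((n, n))                                  # binary spatial weights
    for r in range(n_rows):
        for c in range(n_cols):
            i = r * n_cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # rook contiguity
                rr, cc = r + dr, c + dc
                if 0 <= rr < n_rows and 0 <= cc < n_cols:
                    W[i, rr * n_cols + cc] = 1
    z = x - x.mean()
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

clustered = np.array([[5, 5, 1],
                      [5, 4, 1],
                      [1, 1, 0]])
print(round(morans_i(clustered), 3))   # positive: similar values sit next to each other
```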

 

-Geogman15

Do Mountains Exist?

Thursday, February 7th, 2013

The deep question with which the paper starts delves into the definitions of existence and our comprehension of the geographic features around us. Predicate logic was the first attempt to consolidate questions about existence in a scientific framework, binding existence to a variable. However, when it comes to questions about categories and objects, predicate logic faces a challenge, as these definitions are by nature recursive. As rightly pointed out by Barry Smith and David M. Mark, the question then becomes twofold: “do token entities of a given kind or category K exist?” and “does the kind or category K itself exist?”. Predicate logic is good at explaining logical entailment but fails to take into account how humans perceive things. Thus, it may be right to say that mountains exist because they are part of the perceived environment.

Information systems, on the other hand, adopted a different definition of ontologies: an ontology is a set of syntax and semantics for unambiguously describing concepts in a domain. Objects are classified into categories, and the categories are in turn arranged into a hierarchical structure. However, such an arrangement proved futile for describing things like mountains and soils, or phenomena such as gravity. One central goal of ontological regimentation is the resolution of the incompatibilities that result in such circumstances. Hence the concept of fields was developed to categorize these “things” more effectively.

However, there are still doubts about the naming of such “things” as mountains. Presumably Mt. Everest exists because all the particles making up Mt. Everest exist, but exactly which particles count as Mt. Everest? This is the inherent problem in dealing with fields, which are by nature continuous and lack discrete boundaries.
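The same problem shows up the moment you try to turn the continuous field into an object in code: the “mountain” you extract depends entirely on an arbitrary elevation cutoff. A toy sketch, with an invented elevation surface:

```python
# Sketch of the field-vs-object problem: extracting a "mountain" from a
# continuous elevation field needs an arbitrary cutoff, and the resulting
# object changes with that cutoff. The elevations are an invented toy surface.
import numpy as np

elevation = np.array([
    [200, 300,  400, 300],
    [300, 900, 1200, 400],
    [300, 800, 1000, 300],
    [200, 300,  300, 200],
])

for cutoff in (500, 850, 1100):          # where does the mountain begin?
    cells = np.argwhere(elevation >= cutoff)
    print(f"cutoff {cutoff} m -> {len(cells)} cells belong to the mountain")
```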

Ideally, the entire field of ontology should be able to explain, without ambiguity, the full set of things that are conceptualized and perceived. This requires tremendous insight and reflection about why things exist in the first place.

– Dipto Sarkar

 

Spatial Ontologies

Wednesday, February 6th, 2013

Agarwal's “ontological considerations in GIS” left me with a lot of questions. The article attempts to outline different conceptions of ontology (both strongly theoretical and technical). Ontology is most simply defined in the final paragraph of the paper as “a systematic study of what a conceptual or formalized model should encapsulate to represent reality”. However, how do we translate personal ontologies into more global technologies? The paper briefly questions what it means to produce an ontology that includes concepts with variable semantics, which may be vague or understood differently by geographers and those outside the domain. The fractures between disciplines point to the inefficacy of a top-down approach to producing ontologies. Agarwal is right to question this paradigm, noting the benefits and disadvantages of its counterpart.
Agarwal's discourse, however, still seems firmly couched in the academic context. What would it mean to create a bottom-up ontology of more participatory platforms? How might we make semantics less fuzzy in the case of non-professional conceptual knowledge? Is it possible, and more importantly, is it even desirable? At the risk of sounding like a broken record, I want again to interrogate the power dynamics that inform what becomes part of how we choose to represent reality. There are inherent cultural biases in what we will want to represent, and by maintaining a basis of reality defined by academics, we ignore ontologies that fall outside dominant strains of thought.

Wyatt

Statistics and GIS- a lot has changed

Tuesday, February 5th, 2013

A lot has changed in the two decades since Anselin and Getis published “Spatial Statistical Analysis and Geographic Information Systems”. Today, the central focus of GIS is on spatial analysis and the rich set of statistical tools available to perform it. The GIS database and the analysis tools are no longer looked upon as separate software: spatial analysis is fully integrated into GIS packages like ArcGIS and QGIS. For very specialized applications, the modular or loosely coupled approach is often employed; software like CrimeStat reads data in established GIS formats, performs the analysis, and produces results for use back in the GIS.
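A minimal sketch of that loosely coupled pattern, assuming the geopandas library and point geometries; the file names and the mean-centre “analysis” are placeholders rather than anything CrimeStat actually does:

```python
# Minimal sketch of loose coupling: read data from an established GIS format,
# compute something outside the GIS, and write results back for mapping.
# Assumes geopandas is installed and the layer contains point geometries;
# file names are placeholders.
import geopandas as gpd

crimes = gpd.read_file("crimes.shp")          # exported from the GIS
crimes["x"] = crimes.geometry.x               # point coordinates
crimes["y"] = crimes.geometry.y

# "External" analysis step: mean centre of the point pattern.
mean_centre = (crimes["x"].mean(), crimes["y"].mean())
print("mean centre:", mean_centre)

crimes.to_file("crimes_with_coords.shp")      # back into a GIS-readable format
```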

When it comes to the nature of spatial data, two data models have been widely accepted, namely the object-based model and the field/raster-based model. Extensive sets of analysis tools have been developed for each. Data heterogeneity and relations between objects are also taken into account through slight improvements over these two models.

Exploratory data analysis and model-driven analysis have progressed hand in hand and complement each other: new and innovative visualization and exploration tools help us understand the data and the problem better, while software has evolved to perform the complex non-linear estimations required for model-based analysis.

However, statistics and GIS form an ever-evolving field, and newer methodologies and techniques are developed every day, pushing the boundary of cutting-edge research further and further. Newer challenges in statistical analysis include handling big data and community-generated spatial information. How these new challenges evolve will be very interesting to observe.

-Dipto Sarkar

Spatial Statistics- Producing a canon

Tuesday, February 5th, 2013

Nelson's summary paper on spatial stats provides a solid framework for dominant strains of thought, both looking back and looking forward. One portion of the paper provides a list of important works on the subject with brief descriptions. While I found this a somewhat bizarre format for this sort of paper, I appreciate the question it raises of what might be considered canonical in technical literature. Unsurprisingly, there is discrepancy in which works different spatial statisticians deem most important as guides for newcomers. Nelson adds books that she feels were overlooked (or not published at the time of the survey), revealing her own and her reviewers' biases.
What I am circling around comes down to a critical question: how do we decide what is important, and who gets to decide? This is really what we were asking when trying to peg GIS as a tool or a science. Which aspect of GIS is most important (and critically, why)? While in spatial stats a basis of formulae and conceptual tools is necessary, where do we go from there? Once we are past the most essential technical aspects of a discipline, defining what is important becomes more subjective. In looking at this particular literature list (which is doubtless helpful to newcomers), I think it is important to question what it means to define what is to be remembered and what is to be forgotten.

Wyatt

Questioning the Possibility of Interoperability in Geovisualization

Friday, February 1st, 2013

MacEachren and Kraak's paper on geovisualization provides a concise and critical look at the challenges facing geovisualization's advancement, and how they might be overcome. One thing that stuck out to me in the article was the issue of interoperability and how its absence may hamper collaboration. This is briefly mentioned in a passage discussing the challenges and potential of multidisciplinary research.
The question of interoperability is certainly not simple and is bound up in spatial and temporal contexts; however, it is important to interrogate how a lack of interoperability works in the interests of competition, both economic and academic, and how, in doing so, it may in fact impede progress in geovisualization toolmaking. By producing separate technologies with different access levels, interfaces, and availabilities, interested parties may be able to develop a competitive research edge or to gain funding. Given that funding in universities today is often highly competitive, the logic behind exclusivity (at least initially) is understandable. However, by keeping the cutting edge exclusive, you leave out many potential collaborators who might contribute to the geovisualization tool itself, or to its applications and theoretical development.
A question then becomes: how do we reconcile the inherently competitive nature of academia with the goals its projects purport to serve?

Wyatt

Race to the Bottom

Thursday, January 31st, 2013

GIS strives for something that seems near impossible: a blanket solution for a problem with more than one solution. Humans are fickle, subjective, and by and large ignorant in comparison to the communal wealth of knowledge at our fingertips. Thus people, the users in the world of geovisualization, are never going to be able to use just one form of representation. The variable, subject-based method is what everyone aims for, but unless the user is allowed to actively input parameters, be it consciously or unconsciously, we will end up with the same stale result every time.
Malleable representations can only come from organic production methods, which up to now, at least in the world of computer science and GIS, do not exist. Still, MacEachren and Kraak have a positive outlook on the field, either because they believe it is possible, or because it must be. In the very beginning they claim that an estimated 80% of all digital data include geospatial referencing, only to follow on later with the assertion that everything must exist in space; whether that is the case is still up for debate by string theorists. However, there must be a point of diminishing returns. How far must one go before the field of GIS is satisfied? At the rate we're going, it won't be until virtual environments reach the uncanny valley, or are able to surpass it, at which point it won't matter where things are located in space, as you'll have a hard time stripping physical reality from data-driven fantasy.

AMac

Making Use of Heterogeneous, Qualitative Data on the Neogeoweb

Thursday, January 31st, 2013

Source: xkcd

Elwood reviews areas of GIScience that are crying out for advancements in the wake of new, Web 2.0 technologies and the flood of big data coming with them. More and more user-generated content is being created every day, and a great deal of it is embedded with explicit or implicit geospatial information.  Two key stumbling blocks to harnessing these new sources of data are the heterogeneous, unstandardized nature of Web 2.0 content, and the qualitative nature of spatial knowledge within the content.

Heterogeneity of geospatial data on the web is an acute issue: because content is user-generated, the range of standards and conventions used to communicate information is huge. When you factor in the equally diverse, and growing, number of Web 2.0 platforms and services, things are even more daunting. If there's a certain class of content you're interested in analyzing, how do you impose structure on what is mostly unstructured text? How do you standardize when the list of standards is longer than your arm? As the xkcd comic suggests, creating a new standard is typically an exercise in futility.

The qualitative nature of spatial information and meaning is another stumbling block in the use of data from the Geoweb. As an example of this, suppose you want to geocode a (location-disabled! Ha!) tweet of mine that says “Home, finally :)”. Even if you have detailed biographical and address information about me, you will still encounter ambiguities. By “Home”, do I mean my house? My neighbourhood? My hometown? My residence while I’m at school? My parents’ region of origin where my extended family lives? How do you even begin to automate the processing of such context-dependent information? And is there any existing spatial data model that can effectively represent the concept of home?
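A toy sketch of that ambiguity: even a personalised gazetteer built from biographical information returns several candidate interpretations of “home”, and nothing in the tweet says which one to pick. The gazetteer entries and coordinates below are invented placeholders, not a real geocoding service.

```python
# Toy sketch of the ambiguity problem: the string "home" resolves to several
# candidate places even with user-specific context. All entries are invented.
user_gazetteer = {
    "home": [
        ("house in Montreal",          (45.505, -73.577)),
        ("hometown",                   (44.231, -76.486)),
        ("parents' region of origin",  (53.350,  -6.260)),
    ],
}

def geocode(phrase, gazetteer):
    """Return every candidate interpretation rather than guessing one."""
    key = phrase.lower().strip(" ,.!:)")
    return gazetteer.get(key, [])

for label, coords in geocode("Home", user_gazetteer):
    print(label, coords)
```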

It is often said that while Geoweb 2.0 technologies are accessible, engaging and intuitive for a much greater number of people than traditional GIS has ever been, analysis tools for this exciting new medium lag behind GIS.  This is in no small part due to the issues associated with organizing the web of heterogeneous, qualitative data that these new tools are producing.  Though this challenge looms large, Elwood is optimistic that GIScience can rise to the occasion if we take a multidisciplinary approach.

-FischbobGeo

Human Versus Machine: The User-Systems Relationship

Thursday, January 31st, 2013

Lanter and Essinger write about the cutting edge of graphical user interface (GUI) design for GIS in 1991.  This was an era where the command prompt was still in vogue as a user interface and computer use was still unwieldy for many people.  It was also a time of change as more GUI-based software such as Microsoft Windows was gaining popularity.

Though the technology Lanter and Essinger profile has aged considerably since publication of this piece, the fundamental principles underpinning systems design and the system-user interface are still as relevant as ever. The authors discuss the need to move from a systems-centric design paradigm where operation of the software simply reflects the underlying algorithms and processes as directly as possible, to a user-centered one that gels with the user’s own mental models of the tasks they are doing.  When the latter succeeds, controls and functions are more intuitive for the user, who then does not need to be bogged down by excessive documentation or cryptic commands that only make sense to the developer.

An important point concerning the user-systems interface relationship is that it is a two-way relationship. Not only should system interface design be influenced by the user’s mental models, but those same mental models can be changed by interaction with the software, especially when the UI is set up in such a way to allow learning through exploration.  Most of my computer savviness stems from just being able to explore and mess around with things on Windows 95 and Macintosh machines starting at a young age.  My expertise with touchscreen devices, on the other hand, is far less developed. I am terrified of something happening to my Android phone, because I don’t know enough about the underlying system to be able to diagnose and solve problems as I can do with a more traditional PC environment—at 21 years old, I’m already set in my ways!

This has important implications for software developers who wish to advance human-computer interaction but find themselves faced with a generation of users most comfortable with the keyboard/mouse.  UI advances must be implemented and deployed incrementally so this cohort’s mental models have the opportunity to adjust. Unless a very ‘intuitive’ design is found, too-radical changes are bound to fail and be looked back on as being ahead of their time.

-FischbobGeo

User Centered GUIs

Thursday, January 31st, 2013

My initial reaction was to question whether GUIs make people even more distant from computer technology. If GUIs are designed so that the underlying technology is completely hidden from users, then you can run into problems where users click randomly without really understanding what the tool is doing. But reading further, it gets more complicated than just button mashing, hiding algorithms, and hiding all the techy things the user never signed up for (obviously). At the core of the topic is creating a successful user-centred interface: a marriage between the user's knowledge and the process maps/models in their mind, and a tool that can adapt to that model and further build on or influence it. It is a great concept that Lanter argues will reduce the documentation, software support, and unnecessary brain space that traditional GUIs demand. However, which user should the program be created for? How can the program serve both a beginner who is just learning the tools and concepts needed to navigate the problem and an expert user who would like the ability to create custom functions? And what about people who visualize and conceptualize information differently? Like Professor Sengupta's analogy about road directions (Western countries may be more familiar with “next left, next right, continue easterly” directions, while Asian countries are more familiar with landmark-based directions), this also applies to developing GUIs that cater to the dominant norms of a particular society. For instance, some Asian societies traditionally read from right to left, so menu bars and left-justified text are less intuitive for them and force them to adapt.

A prime example of user-centred GUI design within the geography realm is the upgrade from ENVI Classic to the new ENVI 5. The designers of ENVI 5 clearly made the connection that those using ENVI are very likely to be using ArcGIS, and gave the user interface a very familiar ArcGIS feel. For the users this applies to, I think it reduces the steep learning curve of the tool, while perhaps enhancing the sense of interactivity between the two tools.

-tranv

How should we use Eye Tracking?

Thursday, January 31st, 2013

This paper by Poole presented some very cool and interesting technologies that allow us to follow eye movements and relate them to visual interpretation.  While this technology seems very “out there” to me at times, there are definitely some practical applications that I can think of.

With the increased amount of information and data available to us, we must filter out what we don't want to see. With digital earths and virtual environments becoming the medium through which we view spatial information (business locations, road networks, etc.), there may be a point where the amount of information is just too much for one person to handle. Using eye-tracking technology, experts could figure out how best to present certain types of spatial data (e.g., houses vs. businesses). Then, as users, we could pick and choose which data we want to see.

For example, say I want to look at all the businesses in an area, filtering out residential and other land uses. Knowing how our eyes react to different representations, the interface could “highlight” the business locations. Theoretically, this would save the user the time of scanning through all the information they don't want to see.
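A rough sketch of that filter-and-highlight idea, with invented features and style tags standing in for whatever a real virtual-globe renderer would actually use:

```python
# Sketch of filter-and-highlight: select features by land use and tag them
# with a display style a renderer could emphasise. Features are invented.
features = [
    {"name": "Cafe Olimpico", "landuse": "business",    "coords": (45.525, -73.595)},
    {"name": "Apartment 12",  "landuse": "residential", "coords": (45.520, -73.590)},
    {"name": "Depanneur",     "landuse": "business",    "coords": (45.523, -73.593)},
]

def style_features(features, wanted="business"):
    """Highlight the land use the viewer asked for and dim everything else."""
    return [
        {**f, "style": "highlight" if f["landuse"] == wanted else "dim"}
        for f in features
    ]

for f in style_features(features):
    print(f["name"], "->", f["style"])
```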

Through these kinds of applications, eye tracking could greatly improve the way virtual worlds are presented. Oddly, it may even allow us to view parts of the real world in a more “efficient” manner than in reality.

-Geogman15

 

Geovisualization: We’ve come far but there is still work to be done

Thursday, January 31st, 2013

The challenges posed by MacEachren and Kraak are manifold, but they effectively outline the path that geovisualization has taken since the paper was written in 2000. The increased use of the World Wide Web, advances in hardware and software, and a focus on relevant theory have solidified geovisualization as an extremely useful and widespread field. The huge increase in multi-dimensional, dynamic, and interactive visual maps has allowed for broader visualization and analysis of geospatial data.

One aspect that I think still presents a challenge today is the incorporation of group work and the extension of tools to all the people who need them. This brings us back to the notion of the digital divide that exists in our world. As Western-based academics, pushing the frontier of geovisualization is in line with what we already know and understand. However, in parts of the world where the internet may not be as accessible, some of these newer features may not be as well understood. As a result, many new tools and programs may need to be adapted to particular users' cognitive abilities. Additionally, everyone interprets the space around them in a unique manner, calling for an in-depth analysis of how geovisualization could help them specifically.

Despite this, I believe we have made huge leaps in geovisualization since this paper was written. Citizens can use many different applications that were only ideas to experts a few years ago. Still, one of the main issues today is extending the progress that has been made to all corners of the globe.

-Geogman15

 

GUIs, GIS, Geoweb

Thursday, January 31st, 2013

Lanter and Essinger’s paper, “User-Centred Graphical User Interface Design for GIS,” outlines the development from a typical user interface, to a graphical user interface, and finally to a user-centred graphical user interface. The biggest take-home point I gathered from the article was that the user interface must meet the user’s conceptual model of how the system is supposed to work. If the UI and the user’s model match up, then using the software becomes much more intuitive to the user, resulting in a minimal need for instruction manuals and other forms of support. It got me thinking about how quickly a preconceived conceptual model can be erased and/or replaced. Take switching operating systems for example—going from PC to Mac we already have a mental map of how to do basic computer tasks (how to find and transfer files, format the screen, etc), but these things are done differently on each system. Somehow we grow accustomed to the new operating system’s UI, and it will eventually replace our previous conceptual framework.

Following Elwood's article and a call for a new framework for geovisualisation, it may be interesting to think about how our GIS conceptual frameworks will hold up in the new paradigm. The GUIs for geovisualisation are arguably easier to use than a traditional GIS (the idea of making it a public technology rather than an expert technology), so it follows that the GUI will fit into GIS users' existing conceptual frameworks. Going the other way—starting with geovisualisation technologies and branching into traditional GIS—or even going back to GIS after extensive geowebbing—may be harder.

-sidewalkballet