HCI, Cognition, Systems and Designing Better GIS

Wednesday, March 21st, 2012

Mordechai Haklay and Carolina Tobon provide an interesting overview of the use of GIS by non-experts, with a good focus on how public participation in GIS continues to shape GIS systems in ways that make them more accessible and easier to use. I find their section on the workshops they conducted (582-588) to evaluate the usability of a system particularly interesting, especially the authors' work testing the London Borough of Wandsworth's new platform. Their findings on the need to integrate aerial photos for less sophisticated map users, and on the need for the system to give users feedback confirming they had completed a task, struck me as simple, intuitive adjustments that many systems leave out. Of course, something as simple as feedback confirming a task may seem like an obvious part of any system, but I can think of a great many online programs and forms that fail to do this and often leave me wondering if my work/response has been saved.
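To make that last point concrete, here is a minimal sketch of what explicit task-completion feedback might look like in a web-based editing form. Everything in it is hypothetical (the /api/features endpoint, the saveFeature helper, the status element); it is not drawn from the system the authors tested, just an illustration of the pattern of always telling the user whether their work was saved.

```typescript
// Hypothetical feature record for a web GIS editing form.
type Feature = { id: string; geometry: unknown; attributes: Record<string, string> };

async function saveFeature(feature: Feature): Promise<void> {
  // Hypothetical persistence endpoint; any save call could stand in here.
  const response = await fetch("/api/features", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feature),
  });
  if (!response.ok) throw new Error(`Save failed: ${response.status}`);
}

function showStatus(message: string): void {
  // Surface the outcome where the user is looking, not just in a console log.
  const status = document.getElementById("status");
  if (status) status.textContent = message;
}

async function onSubmitEdit(feature: Feature): Promise<void> {
  showStatus("Saving your changes…");
  try {
    await saveFeature(feature);
    // Explicit confirmation: the user never has to wonder if it worked.
    showStatus("Your changes have been saved.");
  } catch {
    showStatus("Sorry, your changes were not saved. Please try again.");
  }
}
```

The point is less the code than the contract: every user action ends in a visible confirmation or a visible failure, never silence.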

One of the more interesting aspects of human-computer interaction, for me, when thinking about it in terms of GIS, is the way it sits at the intersection of geospatial cognition and geospatial cyberinfrastructure. Perhaps I am biased by my own interests, but this topic pulls these two previous ideas from our class together nicely, as it relies on both to make many of its most salient points. However, one question I had after reading this paper and discussing cognition in class remains: how do we test geospatial cognition in a manner that lets us apply our findings to better systems design? Often, the field of geospatial cognition seems more preoccupied with exploring the ways in which humans understand space and engage in wayfinding behavior. I'd be interested in seeing articles/research that really dig into applying psychological findings to systems design in a manner that goes beyond the testing these authors have done. I should say they do a nice job, though, of summarizing the theory of how cognitive processes like "issues such as perception, attention, memory, learning and problem solving […] can influence computer interface and design" (569). Yet I don't see these concepts applied directly in their testing; perhaps they're just not covered extensively.

I think it's only in this way that we can truly bridge the gap between humans and computers. Or is it humans and networks of computers? Or humans and the cloud? Or humans and the manner in which computers visualize data, represent scale, and convey levels of uncertainty? As one might conjecture, the topic of human-computer interaction may be limitless, depending on the angle from which we approach it.
–ClimateNYC